CVE-2024-45024: Vulnerability in the Linux Kernel
In the Linux kernel, the following vulnerability has been resolved:

mm/hugetlb: fix hugetlb vs. core-mm PT locking

We recently made GUP's common page table walking code also walk hugetlb VMAs without most hugetlb special-casing, preparing for a future of having less hugetlb-specific page table walking code in the codebase. It turns out that we missed one page table locking detail: page table locking for hugetlb folios that are not mapped using a single PMD/PUD.

Assume we have a hugetlb folio that spans multiple PTEs (e.g., 64 KiB hugetlb folios on arm64 with 4 KiB base page size). GUP, as it walks the page tables, will perform a pte_offset_map_lock() to grab the PTE table lock. However, hugetlb code that concurrently modifies these page tables would actually grab the mm->page_table_lock: with USE_SPLIT_PTE_PTLOCKS, the locks would differ. Something similar can happen right now with hugetlb folios that span multiple PMDs when USE_SPLIT_PMD_PTLOCKS.

This issue can be reproduced [1], for example triggering:

[ 3105.936100] ------------[ cut here ]------------
[ 3105.939323] WARNING: CPU: 31 PID: 2732 at mm/gup.c:142 try_grab_folio+0x11c/0x188
[ 3105.944634] Modules linked in: [...]
[ 3105.974841] CPU: 31 PID: 2732 Comm: reproducer Not tainted 6.10.0-64.eln141.aarch64 #1
[ 3105.980406] Hardware name: QEMU KVM Virtual Machine, BIOS edk2-20240524-4.fc40 05/24/2024
[ 3105.986185] pstate: 60000005 (nZCv daif -PAN -UAO -TCO -DIT -SSBS BTYPE=--)
[ 3105.991108] pc : try_grab_folio+0x11c/0x188
[ 3105.994013] lr : follow_page_pte+0xd8/0x430
[ 3105.996986] sp : ffff80008eafb8f0
[ 3105.999346] x29: ffff80008eafb900 x28: ffffffe8d481f380 x27: 00f80001207cff43
[ 3106.004414] x26: 0000000000000001 x25: 0000000000000000 x24: ffff80008eafba48
[ 3106.009520] x23: 0000ffff9372f000 x22: ffff7a54459e2000 x21: ffff7a546c1aa978
[ 3106.014529] x20: ffffffe8d481f3c0 x19: 0000000000610041 x18: 0000000000000001
[ 3106.019506] x17: 0000000000000001 x16: ffffffffffffffff x15: 0000000000000000
[ 3106.024494] x14: ffffb85477fdfe08 x13: 0000ffff9372ffff x12: 0000000000000000
[ 3106.029469] x11: 1fffef4a88a96be1 x10: ffff7a54454b5f0c x9 : ffffb854771b12f0
[ 3106.034324] x8 : 0008000000000000 x7 : ffff7a546c1aa980 x6 : 0008000000000080
[ 3106.038902] x5 : 00000000001207cf x4 : 0000ffff9372f000 x3 : ffffffe8d481f000
[ 3106.043420] x2 : 0000000000610041 x1 : 0000000000000001 x0 : 0000000000000000
[ 3106.047957] Call trace:
[ 3106.049522]  try_grab_folio+0x11c/0x188
[ 3106.051996]  follow_pmd_mask.constprop.0.isra.0+0x150/0x2e0
[ 3106.055527]  follow_page_mask+0x1a0/0x2b8
[ 3106.058118]  __get_user_pages+0xf0/0x348
[ 3106.060647]  faultin_page_range+0xb0/0x360
[ 3106.063651]  do_madvise+0x340/0x598

Let's make huge_pte_lockptr() effectively use the same PT locks as any core-mm page table walker would. Add ptep_lockptr() to obtain the PTE page table lock using a pte pointer -- unfortunately we cannot convert pte_lockptr() because virt_to_page() doesn't work with kmap'ed page tables we can have with CONFIG_HIGHPTE.
Handle CONFIG_PGTABLE_LEVELS correctly by checking in reverse order, such that e.g., CONFIG_PGTABLE_LEVELS==2 with PGDIR_SIZE==P4D_SIZE==PUD_SIZE==PMD_SIZE works as expected. Document why that works.

There is one ugly case: powerpc 8xx, where we have an 8 MiB hugetlb folio being mapped using two PTE page tables. While hugetlb wants to take the PMD table lock, core-mm would grab the PTE table lock of one of the two PTE page tables. In such corner cases, we have to make sure that both locks match, which is (fortunately!) currently guaranteed for 8xx, as it does not support SMP and consequently doesn't use split PT locks.

[1] https://lore.kernel.org/all/1bbfcc7f-f222-45a5-ac44-c5a1381c596d@redhat.com/
AI Analysis
Technical Summary
CVE-2024-45024 is a vulnerability in the Linux kernel's memory management subsystem, specifically in the handling of huge pages (hugetlb) and page table locking. The issue arises from a mismatch in locking protocols when the kernel walks page tables for huge pages that span multiple page table entries (PTEs), for example 64 KiB hugetlb folios on arm64 with a 4 KiB base page size.

The root cause is that the Get User Pages (GUP) code, which walks page tables to pin user pages in memory, uses a different locking approach (pte_offset_map_lock()) than the hugetlb code uses when modifying the same page tables (mm->page_table_lock) under kernel configurations such as USE_SPLIT_PTE_PTLOCKS or USE_SPLIT_PMD_PTLOCKS. This discrepancy can lead to race conditions or inconsistent locking states, potentially causing kernel warnings, crashes, or memory corruption. The issue was demonstrated with a kernel warning and call trace showing a failure in try_grab_folio(), a function that takes a reference on a folio during page table walking.

The fix harmonizes the locking strategy: huge_pte_lockptr() is made to use the same page table locks as the core memory management code, a ptep_lockptr() helper is added to obtain the correct PTE table lock from a pte pointer, and the various kernel configurations and page table layouts are handled correctly. This ensures consistent locking across page table levels and huge page mappings, preventing race conditions and instability.

The vulnerability affects Linux kernel versions prior to the patch and is relevant for systems using huge pages, especially on arm64 and other architectures with complex page table layouts. No known exploits are reported in the wild as of the publication date.
Potential Impact
For European organizations, this vulnerability poses a risk primarily to servers and systems running Linux kernels with huge page support enabled, particularly those on ARM64 architectures or other affected platforms. Huge pages are commonly used in high-performance computing, database servers, and virtualization environments to improve memory management efficiency. Exploitation could lead to kernel crashes or memory corruption, resulting in denial of service (DoS) conditions that disrupt critical services. While there is no evidence of remote code execution or privilege escalation directly linked to this flaw, the instability caused by inconsistent locking could be leveraged as part of a broader attack chain or cause operational disruptions. Organizations relying on Linux-based infrastructure for cloud services, container orchestration, or virtualization may experience service interruptions or degraded performance. The impact is heightened in environments where uptime and reliability are critical, such as financial institutions, telecommunications, and public sector services. Additionally, debugging and recovery from kernel crashes can incur operational costs and potential data loss if not properly managed.
Mitigation Recommendations
European organizations should prioritize updating their Linux kernel to the patched versions that address CVE-2024-45024 as soon as they become available from their distribution vendors. Given the complexity of the issue, relying on vendor patches is the most reliable mitigation. For environments where immediate patching is not feasible, administrators should consider disabling huge page support if it is not essential, thereby reducing exposure. Monitoring kernel logs for warnings related to try_grab_folio or page table locking anomalies can help detect attempts to trigger the vulnerability. Additionally, organizations should ensure that their kernel configurations avoid split page table locks unless necessary, as this configuration contributes to the vulnerability. For critical systems, implementing robust kernel crash recovery and memory integrity monitoring can mitigate operational impacts. Finally, maintaining strict access controls and limiting untrusted code execution on affected systems reduces the risk of exploitation attempts.
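The log-monitoring recommendation can be automated with a simple filter over the kernel log. This is a hedged sketch: the mm/gup.c line number (142 in the trace above) varies between kernel versions, so the pattern accepts any line number, and scan_gup_warnings is a hypothetical helper name.

```shell
# Filter a kernel log stream for the try_grab_folio warning signature.
# On a live system, feed this from `dmesg` or `journalctl -k`.
scan_gup_warnings() {
    grep -E 'WARNING: CPU: [0-9]+ PID: [0-9]+ at mm/gup\.c:[0-9]+ try_grab_folio'
}

# Demonstration against a captured sample line:
printf '%s\n' \
    '[ 3105.936100] ------------[ cut here ]------------' \
    '[ 3105.939323] WARNING: CPU: 31 PID: 2732 at mm/gup.c:142 try_grab_folio+0x11c/0x188' \
    | scan_gup_warnings
```

Note that a matching warning indicates the locking bug was hit; absence of matches does not prove the kernel is patched.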
Affected Countries
Germany, France, United Kingdom, Netherlands, Sweden, Finland, Italy, Spain, Poland, Belgium
Technical Details
- Data Version: 5.1
- Assigner Short Name: Linux
- Date Reserved: 2024-08-21T05:34:56.684Z
- Cisa Enriched: true
- Cvss Version: null
- State: PUBLISHED
Threat ID: 682d9826c4522896dcbe0f2b
Added to database: 5/21/2025, 9:08:54 AM
Last enriched: 6/28/2025, 11:56:54 PM
Last updated: 8/17/2025, 1:52:23 PM
Related Threats
CVE-2025-53948: CWE-415 Double Free in Santesoft Sante PACS Server (High)
CVE-2025-52584: CWE-122 Heap-based Buffer Overflow in Ashlar-Vellum Cobalt (High)
CVE-2025-46269: CWE-122 Heap-based Buffer Overflow in Ashlar-Vellum Cobalt (High)
CVE-2025-54862: CWE-79 Improper Neutralization of Input During Web Page Generation (XSS or 'Cross-site Scripting') in Santesoft Sante PACS Server (Medium)
CVE-2025-54759: CWE-79 Improper Neutralization of Input During Web Page Generation (XSS or 'Cross-site Scripting') in Santesoft Sante PACS Server (Medium)