| author | Tianhao Wang <shrik3@mailbox.org> | 2024-06-05 23:01:19 +0200 |
|---|---|---|
| committer | Tianhao Wang <shrik3@mailbox.org> | 2024-06-11 15:17:14 +0200 |
| commit | 38883485c80841f15365d0502418dcc224f01d45 (patch) | |
| tree | 70f49473adccf65d7057570663c095fed8940165 /defs/x86_64-hm-linker.ld | |
| parent | bfe92f51f79f367354a933b78ec2b4e9d5336119 (diff) | |
mm: use linked-list-allocator as kmalloc
I'll implement my own allocator later. For now the kernel heap (as in kmalloc,
not vmalloc) is managed by the linked-list allocator [1]. It manages the
id-mapped region (starting at VA 0xffff_8000_0000_0000). The allocator is
initialized to use the _largest_ physical memory block; if the kernel image
(text and data) lives in that block, the part it occupies is skipped.
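As a rough sketch of how the crate is typically wired up as the global
allocator over such a region (not the code in this tree: the helpers
largest_phys_block(), kernel_image_end() and kmalloc_init() are hypothetical
stand-ins, and the exact init() signature depends on the crate version):

```rust
use linked_list_allocator::LockedHeap;

#[global_allocator]
static KERNEL_HEAP: LockedHeap = LockedHeap::empty();

/// Start of the id-mapped window mentioned above.
const ID_MAP_BASE: u64 = 0xffff_8000_0000_0000;

/// Hypothetical: (start, end) physical addresses of the largest usable block.
fn largest_phys_block() -> (u64, u64) {
    (0x0010_0000, 0x4000_0000)
}

/// Hypothetical: first physical address past the kernel image (text + data).
fn kernel_image_end() -> u64 {
    0x0100_0000
}

pub unsafe fn kmalloc_init() {
    let (mut start, end) = largest_phys_block();
    // If the kernel image sits inside this block, start the heap after it.
    let image_end = kernel_image_end();
    if image_end > start && image_end < end {
        start = image_end;
    }
    // The heap hands out id-mapped virtual addresses, so every allocation is
    // also contiguous in physical memory.
    let heap_start = (start + ID_MAP_BASE) as *mut u8;
    let heap_size = (end - start) as usize;
    // linked-list-allocator >= 0.10 takes *mut u8 here; older versions take usize.
    KERNEL_HEAP.lock().init(heap_start, heap_size);
}
```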
Key differences between kmalloc and vmalloc:
- kmalloc pretty much manages physical memory: the allocated addresses lie
  within the id-mapped region (see above), so the allocated memory is also
  contiguous in physical memory. Such memory MUST NOT page fault. This is
  prone to fragmentation, so do not use kmalloc for big objects (e.g.
  anything bigger than one 4K page). A short sketch of the id-map
  arithmetic follows this list.
- vmalloc manages kernel heap memory whose mappings are handled by paging;
  such memory may trigger page faults in kernel mode.
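The id-map arithmetic assumed above, as a minimal illustrative sketch (helper
names are not from this tree): a virtual address in the id-mapped window
differs from its physical address only by a constant offset, which is why a
virtually contiguous kmalloc block is also physically contiguous.

```rust
/// Base of the id-mapped window, matching the VA given above.
const ID_MAP_BASE: u64 = 0xffff_8000_0000_0000;

#[inline]
fn phys_to_virt(pa: u64) -> u64 {
    pa + ID_MAP_BASE
}

#[inline]
fn virt_to_phys(va: u64) -> u64 {
    debug_assert!(va >= ID_MAP_BASE);
    va - ID_MAP_BASE
}
```

Because the mapping is a fixed offset, no page tables are touched on
allocation, but the heap can only hand out memory that is physically present
and contiguous, which is why large allocations should eventually go through
vmalloc instead.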
Note that kmalloc conflicts with the previously used stack-based PMA, as they
operate on the same VM zone.
References: [1] https://github.com/rust-osdev/linked-list-allocator
Signed-off-by: Tianhao Wang <shrik3@mailbox.org>
Diffstat (limited to 'defs/x86_64-hm-linker.ld')
| -rw-r--r-- | defs/x86_64-hm-linker.ld | 1 |
1 files changed, 0 insertions, 1 deletions
diff --git a/defs/x86_64-hm-linker.ld b/defs/x86_64-hm-linker.ld
index c8a213c..fab0699 100644
--- a/defs/x86_64-hm-linker.ld
+++ b/defs/x86_64-hm-linker.ld
@@ -73,7 +73,6 @@ SECTIONS
 	.t32 :
 	{
 		*(".text32")
-		*(".text.interrupt_gate")
 	}
 
 	. = . + KERNEL_OFFSET;
