https://security.stackexchange.com/questions/269507/is-ram-wiped-before-use-in-another-lxc-container

Is RAM wiped before use in another LXC container?
Asked 2 days ago, modified today, viewed 30k times. Score: 34.

Edit: though my wording may not be exact, I understand that two containers don't have access to the same memory at the same time. My question is: can one container see data that another container has deallocated?

LXC allows us to over-provision RAM. I assume this means that if one container needs a lot of RAM and another container isn't using its allotment, then the unused RAM goes to the container that needs it.

Let's say that one container had some private keys loaded, that memory was deallocated, and another container then allocates its maximum heap and starts walking it. Is there a possibility of reading that private key? Or is the memory wiped, or otherwise allocated in a way that prevents data leakage?

Where is the documentation that clarifies this? (My search-fu is weak on this - probably because I don't know the right terms.)

Tags: container, lxc

Asked 2 days ago by coolaj86, edited yesterday.
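To make the scenario concrete, here is a minimal sketch of the "allocate the maximum heap and walk it" experiment described above. It is an editorial illustration, not part of the original question; it assumes Linux with glibc, and the 256 MiB size is arbitrary. An allocation this large is served by mmap(), so its pages come straight from the kernel.

```c
/* Hypothetical sketch: scan a large, freshly allocated buffer for any
 * non-zero byte, i.e. look for data "left over" from someone else.
 * Assumes Linux + glibc; the size is an arbitrary choice. */
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    size_t len = 256UL * 1024 * 1024;        /* 256 MiB */
    unsigned char *buf = malloc(len);
    if (buf == NULL) {
        perror("malloc");
        return 1;
    }

    /* Walk the freshly allocated memory looking for non-zero bytes. */
    size_t nonzero = 0;
    for (size_t i = 0; i < len; i++)
        if (buf[i] != 0)
            nonzero++;

    printf("non-zero bytes found: %zu\n", nonzero);
    free(buf);
    return 0;
}
```

Run inside a container, this should report zero non-zero bytes, which is the behaviour the answers below explain.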
Comments:

* (+10) For your mental model, it's also important to note that containers are not virtual machines. Containers do not manage their own pool of memory. Containers are just a group of one or more processes, with some extra Linux security features enabled. For example, containers will typically use "cgroups" to limit their resource usage. Just like normal processes cannot access other processes' address space, containers (which are processes) cannot do that, and they are constrained by the kernel's security model. - amon (yesterday)
* (+2) LXC containers contain processes, and memory given to processes is cleared beforehand, so the answer is "no" - LXC doesn't weaken the existing process isolation model. - Toby Speight (yesterday)
* Is the hypothetical container here started with the privileged flag? (UID 0 doesn't provide that much by way of extra privileges anymore; instead, privileges are managed by capabilities, and Docker drops dangerous capabilities like CAP_SYS_ADMIN unless you use --privileged.) - Charles Duffy (6 hours ago)
* Similarly, a lot of the ability to read raw memory accrues to any process with the CAP_SYS_RAWIO capability, but that's another thing that's not available in Docker unless you started the container with --privileged. - Charles Duffy (6 hours ago)

4 Answers (sorted by highest score)

Answer (score 61) by user10489, answered 2 days ago, edited yesterday:

This isn't how memory allocation in Linux works, so your scenario is not right.

The Linux kernel maintains a pool of free pages and quickly freeable pages (which includes cached disk blocks and process pages already written to swap but still in RAM). Of these pages, it also keeps a pool of pre-zeroed pages (the size of this pool is adjustable). When a process needs a new memory page, it is pulled from this pool: the process gets a zeroed page, and won't get a page containing data from another process.

Generally, a process can't access another process's memory, but there are many exceptions (see the partial list below). While new pages arrive from the kernel zeroed, some memory-management libraries reuse memory rather than releasing freed pages to the OS, and that memory might not be zeroed (depending on the library and the API call used), so it can contain old data from the heap and stack from its previous use within the same process. This can be a security risk and a source of bugs from reading uninitialized variables, but zeroing such memory is also considered a performance cost, especially if the code using it will immediately initialize it anyway.

Overprovisioning doesn't mean RAM is given to both programs. It means it is given to neither of them until the last second. Say programs A and B each have 1G of memory allocated to them, and the system has 8G. Now both A and B (simultaneously or sequentially) ask for 5G more. With overprovisioning, the kernel can grant that request without actually giving either one the memory - just the address space. The system still has 6G in the free pool. A and B have each "requested" a total of 6G, but each is only using 1G and only has that 1G assigned; each also has page table entries (with no assigned pages) for another 5G. The first time one of them writes to one of the newly requested pages, it takes a soft page fault, which causes a (pre-zeroed) page to be pulled from the free pool and assigned to that page table entry, and then the faulting write continues with a real memory page behind it.

If the two programs allocate and use memory slowly, and perhaps never use everything they requested, this all works fine. If you have some swap, some unused or infrequently used pages may get written to swap and returned to the free pool. However, if the two programs end up with a combined working set larger than the system's physical memory, then one or both of them will be killed with an OOM (out of memory) error if you don't have enough swap to cover it, or the system will start thrashing as it constantly moves pages between physical RAM and swap.

The alternative to overprovisioning is to deny the memory request immediately if there isn't enough virtual memory to cover it. Many programs are not written to handle this denial and will crash due to bugs, or simply crash because they can't continue without the memory. So frequently, at worst, overprovisioning delays the program's death (or makes the system thrash), and at best it avoids some nasty bugs and lets programs that request memory they might never use keep running as if they had received it.
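A quick way to see this demand-paging behaviour in action is the following sketch (a hypothetical illustration, not from the answer; assumes Linux with its default overcommit behaviour, and the sizes are arbitrary). The 1 GiB reservation succeeds immediately, but resident memory only grows as pages are first written.

```c
/* Sketch: reserve a large anonymous mapping, then watch resident memory
 * grow only once pages are actually touched (soft page faults). */
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

/* Resident set size in KiB, read from /proc/self/statm (values in pages). */
static long resident_kib(void)
{
    FILE *f = fopen("/proc/self/statm", "r");
    long size = 0, resident = 0;
    if (f == NULL)
        return -1;
    if (fscanf(f, "%ld %ld", &size, &resident) != 2)
        resident = -1;
    fclose(f);
    return resident < 0 ? -1 : resident * (sysconf(_SC_PAGESIZE) / 1024);
}

int main(void)
{
    size_t len = 1UL << 30;                      /* ask for 1 GiB */
    unsigned char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                            MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (p == MAP_FAILED) {
        perror("mmap");
        return 1;
    }
    printf("after reserving 1 GiB:  %ld KiB resident\n", resident_kib());

    /* Touch only the first 100 MiB; only these pages take a soft page
     * fault and get a real (pre-zeroed) frame assigned. */
    memset(p, 0xAA, 100UL << 20);
    printf("after touching 100 MiB: %ld KiB resident\n", resident_kib());

    munmap(p, len);
    return 0;
}
```

On a typical system the first figure is a few MiB and the second is roughly 100 MiB larger, even though a full gigabyte of address space was granted up front.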
Adding containers to this does not change anything. When you provision a container, you don't assign memory to it (loading and running the container does that live, as needed); you assign memory limits to it. When enough actual pages are assigned to the processes in the container to exhaust those limits, the processes in the container get an OOM kill just as above. If you've overprovisioned the containers and they all try to reach their limits at once, you'll get either thrashing or an OOM kill when the system's memory is exhausted, before any container's own limit is reached. It is also possible to tweak the container memory allocations so that one container thrashes while the other containers perform normally.

Here is a partial list of cases where a process can see another process's memory (a small sketch follows after the list):

* Immediately after a fork, the parent and child share all memory pages. The Linux kernel marks these as copy-on-write (with a reference counter), so the pages are shared read-only, and the first write by either process clones the page so it is no longer shared.
* A process can clone itself, somewhat like fork, but with more control over which parts of the process are shared writable and which parts are COW-cloned (as with fork). If almost nothing is cloned, the result acts like a thread or lightweight process.
* A process can explicitly share a page with another process through several mechanisms, and this can give full bidirectional write access to both processes. (The oldest form of this is SysV shared memory, which has been all but superseded by more flexible methods.)
* A process can debug (ptrace) another process and get full access to its memory and execution flow. Because this is such a huge security risk, it is generally only allowed for root or for a parent process to debug its child; the main use is for a debugger (like gdb) to start a process to debug. However, programs like strace and ltrace can do this without root access, and the restriction can be relaxed via a kernel option so that gdb can attach to any running process the user owns.
* A program can transfer a page of memory to another process via a pipe or socket, but this acts more like a copy than real sharing, especially if the receiving program doesn't read it with the same page/byte alignment.
* The shared library system is entirely based on multiple processes sharing read-only pages of libraries; the executables of processes are shared the same way.
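The sketch below illustrates the first and third items in the list above (a hypothetical example, not part of the original answer; assumes Linux): a MAP_PRIVATE anonymous page becomes copy-on-write after fork(), while a MAP_SHARED anonymous page remains visible to both parent and child.

```c
/* Sketch: private (COW) vs. explicitly shared anonymous mappings across
 * fork().  Error handling is kept minimal for brevity. */
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    char *priv = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                      MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    char *shrd = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                      MAP_SHARED | MAP_ANONYMOUS, -1, 0);
    if (priv == MAP_FAILED || shrd == MAP_FAILED) {
        perror("mmap");
        return 1;
    }
    strcpy(priv, "original");
    strcpy(shrd, "original");

    if (fork() == 0) {
        /* Child: the private write triggers copy-on-write, so the parent
         * keeps its own copy; the shared write is seen by the parent too. */
        strcpy(priv, "written by child");
        strcpy(shrd, "written by child");
        _exit(0);
    }
    wait(NULL);

    printf("private mapping, seen by parent: %s\n", priv);  /* "original" */
    printf("shared mapping, seen by parent:  %s\n", shrd);  /* "written by child" */
    return 0;
}
```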
Comments:

* (+3) I appreciate the thoroughness of the answer, and there is some nuance there that I wasn't aware of before, but my question is whether or not it's possible for a container to allocate memory that has data in it from prior use by another container (or the host), or whether it is sanitized by things like the kernel's user namespace protections. - coolaj86 (yesterday)
* (+10) Answered that; guess it's too buried. New pages come from the pool of free pages already zeroed. The kernel doesn't hand out pages before they are blanked, and overprovisioning doesn't cause pages to be shared. Having said that, there have been bugs that caused unzeroed memory to be leaked, but once discovered, those bugs are squashed quickly. - user10489 (yesterday)
* I've proposed an edit adding a section to your answer to summarize what I now believe from a more careful reading of your response. Would you please take a look at that and correct any poorly worded or incorrect language? - coolaj86 (yesterday)
* Your suggested edit is overly broad and wrong on several points. I'll see if I can fix it. - user10489 (yesterday)
* (+1) If the namespace is working correctly and there are no leaks, processes in the container can't even see processes outside the container, let alone access them. - user10489 (17 hours ago)

Answer (score 15) by user253751, answered yesterday, edited 6 minutes ago:

Processes in LXC containers are normal processes as far as the Linux kernel is concerned. They are separated from most of the host's resources by namespaces, which does not make them a special kind of process. This is different from how virtual machines work.

When Linux (or another OS) allocates new pages to a process, they are zero-filled. IIRC there is a Linux kernel configuration option that allows a process running as root to ask for non-zero-filled pages, which may contain leftover data from other processes. However, this is very rarely enabled - only for embedded systems with no security needs.
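The option being alluded to is presumably CONFIG_MMAP_ALLOW_UNINITIALIZED, which makes the kernel honour the MAP_UNINITIALIZED mmap flag (mainly intended for no-MMU embedded builds); that identification is an editorial assumption, not stated in the answer. A minimal sketch, assuming Linux: on an ordinary desktop or server kernel the flag is simply ignored and the mapping still comes back zero-filled.

```c
/* Sketch: request an uninitialized anonymous mapping.  Only kernels built
 * with CONFIG_MMAP_ALLOW_UNINITIALIZED honour the flag; everywhere else
 * the pages are zeroed as usual. */
#include <stdio.h>
#include <sys/mman.h>

#ifndef MAP_UNINITIALIZED
#define MAP_UNINITIALIZED 0x4000000   /* value from the Linux UAPI headers */
#endif

int main(void)
{
    size_t len = 1UL << 20;                      /* 1 MiB */
    unsigned char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                            MAP_PRIVATE | MAP_ANONYMOUS | MAP_UNINITIALIZED,
                            -1, 0);
    if (p == MAP_FAILED) {
        perror("mmap");
        return 1;
    }

    size_t nonzero = 0;
    for (size_t i = 0; i < len; i++)
        if (p[i] != 0)
            nonzero++;
    printf("non-zero bytes: %zu (expect 0 on a normal kernel)\n", nonzero);

    munmap(p, len);
    return 0;
}
```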
Comments:

* Am I correct that, when using namespaces, the root user of the container is mapped in such a way that it does NOT have permission to access the memory outside of its container? - coolaj86 (22 hours ago)
* Containers have nothing to do with memory access. - user10489 (22 hours ago)
* @user10489 In this case I'm talking about privilege escalation, not memory management. I suppose it would be more correct to say "cgroup". The root user of a cgroup that is constrained by user namespaces cannot get access to anything outside of its cgroup, whereas a root user that is not constrained by the namespace can access memory outside of its cgroup. Is that correct? - coolaj86 (19 hours ago)
* You are conflating processes (which cgroups, and therefore containers, constrain) with memory. Containers have nothing to do with limiting memory access; they only constrain access to processes and namespaces and limit total memory use, not access. As has been said repeatedly in every answer here, processes constrain memory access, not containers. Memory is assigned to processes. - user10489 (17 hours ago)
* @coolaj86 The "root user in a container" is not the operating system's root user, so it doesn't have any elevated permissions. Not an expert on LXC, but I'd assume the user in an LXC container is a namespaced user (a new user in a new namespace) that can't see or do anything outside of its own namespace. - Cigarette Smoking Man (13 hours ago)

Answer (score 4) by Alex D, answered yesterday:

Yes: if memory used by a process in one LXC container is later allocated to another process in another LXC container, the contents will definitely be wiped. This is the case for all processes and has nothing to do with containers.

You asked: "Where is the documentation which clarifies this?" I doubt that there is any specific documentation which addresses your question; if one understands how containers are implemented, it is obvious.

Answer (score -2) by i_am_on_my_way_to_happiness (new contributor), answered 11 hours ago, edited 9 hours ago:

INCORRECT answer

Neither the implementation of free nor the variations of alloc show any memory cleanup (such as filling with zeros or random content) in glibc (https://codebrowser.dev/glibc/glibc/stdlib/stdlib.h.html#free, https://codebrowser.dev/glibc/glibc/malloc/malloc.h.html). So if the applications in both containers were using glibc-based memory management AND not performing their own cleanup before free, an application in the other container should be able to see the contents from the previous container. If you want to make 100% sure no other container can read the data from your container's released memory, you must wipe it yourself before calling free() or equivalent, OR use a libc whose free() zeros out memory.

EDIT: see the comments for details on why this answer is not correct.

Comments:

* "if applications in both the containers..." - but wouldn't it have to pass back to the kernel for the memory to be transferred between the different processes? According to the currently accepted (convincing) answer, the kernel only hands out zeroed memory. - Luc (10 hours ago)
* This is incorrect. To pass from a process in one container to a different process in another container requires passing from one process to another, and in general the kernel is quite hesitant to allow that. Except in extremely rare cases (generally either two processes agree to share memory, both processes have access to a single shared resource, or one process has elevated privileges over another), memory will always be zeroed before a process is allowed to observe it. This applies regardless of whether containers are involved. - jade (10 hours ago)
* Yes, you are both right. cow_user_page (appeared in 2.6.15) does a memset: lxr.linux.no/#linux+v2.6.30.5/mm/memory.c#L1851 (not sure what used to happen earlier); see also kmap_atomic: lxr.linux.no/#linux+v6.0/include/linux/highmem.h#L134 - i_am_on_my_way_to_happiness (9 hours ago)
* (+2) Then this needs to be deleted? - schroeder (9 hours ago)
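The within-process reuse that the comments settle on can be illustrated with a short sketch (a hypothetical editorial example, assuming glibc 2.25 or later for explicit_bzero()): glibc's free() does not clear a chunk's contents, so a later malloc() of the same size in the same process may return memory still holding fragments of the old data, while pages handed back to the kernel are re-zeroed before any other process (in any container) can see them.

```c
/* Sketch: wipe a secret with explicit_bzero() before freeing it, then
 * check whether a same-sized allocation still contains key material. */
#define _GNU_SOURCE           /* for memmem() */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    char *secret = malloc(64);
    if (secret == NULL)
        return 1;
    strcpy(secret, "-----BEGIN PRIVATE KEY-----");

    explicit_bzero(secret, 64);   /* wipe before handing the chunk back */
    free(secret);

    /* A same-sized allocation is typically served from the chunk just
     * freed; without the wipe above, fragments of the secret could still
     * be sitting in it.  (Reading uninitialized memory here is purely
     * illustrative.) */
    char *reuse = malloc(64);
    if (reuse == NULL)
        return 1;
    if (memmem(reuse, 64, "KEY", 3) != NULL)
        printf("leftover key material found in the reused chunk\n");
    else
        printf("no leftover key material in the reused chunk\n");

    free(reuse);
    return 0;
}
```

Unlike a plain memset() before free(), explicit_bzero() will not be removed by compiler dead-store elimination. None of this is needed to protect against another container, though: pages returned to the kernel are zeroed before any other process sees them.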