Question Reduce Memory Consumption, Please Confirm

I found what seems to be an easy and effective way to reduce the memory consumption of my VB.NET app. What I need to know is how "correct" or "proper" the method I'm using is for my application, and what exactly is going on behind the scenes that I should worry about. I want to release memory every time a form closes. Here is how I reduce the memory my program requires:

VB.NET:
Process.GetCurrentProcess.MinWorkingSet = Process.GetCurrentProcess.MinWorkingSet
Process.GetCurrentProcess.MaxWorkingSet = Process.GetCurrentProcess.MaxWorkingSet

I am simply re-applying the existing values, and this seems to force the framework to effectively "trim" my application's memory (significantly). All my program does is display a few forms with some buttons, and that's it. Would it be OK to make the above calls all the time (for example, every time I close a form) so memory actually gets released to the system?

I'm asking whether it's OK to do this, not whether I should leave it alone and let the GC and VMM do their job. They do a great job for programs that have the potential to use more RAM. Mine will not, so I want to keep the footprint absolutely minimal, and the above method works great. I just need to know if I am safe to make those calls all the time. Thanks in advance.
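To be concrete, this is roughly how I'm hooking it up (just a sketch of my setup; the helper name TrimWorkingSet and the form class are only for illustration):

VB.NET:
Imports System.Diagnostics
Imports System.Windows.Forms

Public Class MainForm
    Inherits Form

    ' Runs after the form has closed; this is where I trim the working set.
    Private Sub MainForm_FormClosed(sender As Object, e As FormClosedEventArgs) Handles Me.FormClosed
        TrimWorkingSet()
    End Sub

    ' Re-assigning the current min/max values asks Windows to trim the
    ' process working set (unused pages move to the page file, they are not freed).
    Private Shared Sub TrimWorkingSet()
        Dim p As Process = Process.GetCurrentProcess()
        p.MinWorkingSet = p.MinWorkingSet
        p.MaxWorkingSet = p.MaxWorkingSet
    End Sub
End Class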
 
A similar question is discussed here: How to reduce the WorkingSet for a process?
and here: delphi - When to call SetProcessWorkingSetSize? (Convincing the memory manager to release the memory) - Stack Overflow
So what appears to be happening is that unused parts of the working set are written to the hard disk page file, causing slower operations when the allocated space is needed again.
The same behaviour can be seen in Task Manager in previous Windows versions: when you minimize an app, it enters a dormant state and its allocations can be swapped to the page file. The memory is not freed, though.
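For reference, the native call those threads are talking about looks roughly like this from VB.NET (just a sketch; passing -1 for both sizes asks Windows to trim the working set as far as it can):

VB.NET:
Imports System.Diagnostics
Imports System.Runtime.InteropServices

Module NativeTrim
    ' SetProcessWorkingSetSize is the kernel32 function discussed in the linked threads.
    <DllImport("kernel32.dll", SetLastError:=True)>
    Private Function SetProcessWorkingSetSize(hProcess As IntPtr,
                                              dwMinimumWorkingSetSize As IntPtr,
                                              dwMaximumWorkingSetSize As IntPtr) As Boolean
    End Function

    ' Pages removed from the working set go to the standby list / page file;
    ' the memory is not actually freed, only made available to other processes.
    Public Sub Trim()
        SetProcessWorkingSetSize(Process.GetCurrentProcess().Handle,
                                 New IntPtr(-1), New IntPtr(-1))
    End Sub
End Module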
 
Thank you for your reply. I've come across many similar links, and the one that threw me off was the API call to SetProcessWorkingSetSize; some people indicated it could cause an exception if the framework needed to adjust the working set of your managed process after calling this native API, so I kept on searching for a "legitimate" way to accomplish the same task. That's what made me wonder if the managed functions I found were OK to call.

So I've come to the conclusion that my function calls are completely acceptable, but it really depends on the circumstances whether the performance hit of paging the memory to disk is acceptable. Some situations would make this a bad idea (if you load lots of data into RAM and frequently need access to it), while other situations would be fine.

I've been testing my code on slower machines (around 800 MHz, with slower hard disks) to try to detect a performance hit for my particular scenario, but it doesn't seem to affect my performance, because all I'm showing is a form with some buttons; I'm not performing any data-intensive operations or loading large amounts of data into RAM. The amount of memory my program needs to fetch from the page file is so small that the performance hit is negligible.

So for my application, paging memory to disk is acceptable, and while it doesn't actually free the memory, it does move it from physical RAM to disk, making more physical RAM available to the machine. At least, monitoring my physical RAM usage while making those calls clearly indicates that is what's happening.
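For what it's worth, this is roughly how I've been checking for a hit (a sketch; the work parameter just stands in for whatever the app does next, such as showing the next form):

VB.NET:
Imports System.Diagnostics

Module TimingCheck
    ' Trim the working set, then time the next operation so the cost of
    ' faulting pages back in from the page file can be measured.
    Public Sub TimeAfterTrim(work As Action)
        Dim p As Process = Process.GetCurrentProcess()
        p.MinWorkingSet = p.MinWorkingSet
        p.MaxWorkingSet = p.MaxWorkingSet

        Dim sw As Stopwatch = Stopwatch.StartNew()
        work() ' pages used by this operation are faulted back in here
        sw.Stop()
        Console.WriteLine("Elapsed after trim: {0} ms", sw.ElapsedMilliseconds)
    End Sub
End Module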

So I think I've got a clear understanding of what's going on here, and of when to use and not to use the above-mentioned functions. Thank you for your confirmation, I really appreciate it.
 
The managed Process class uses the SetProcessWorkingSetSize function to set the min/max working set, so it is the same call being performed.
As it is just cosmetic, I would leave it alone. Memory for a .NET process is managed: it allocates more space than is currently used, based on declarations in the assembly, which may at some point be needed to improve performance. Allocations that are not currently in use will automatically be released to the system if asked for; otherwise that memory stays ready for the next operation.
.NET process memory also consists of parts shared with other .NET processes and the .NET runtime, so when you force allocations to be moved to the page file you could also be affecting the performance of those.
 
Thanks for your clarification. So it's not just the private working set that gets paged out; it's the whole working set, which can contain memory shared with other framework and runtime objects? I misunderstood initially because the call has to be made with MY process handle, so one would think it would only page out memory belonging to my process's objects, but I understand exactly what you're saying... some of MY process's objects are shared.

Is there any way I can page out only the private working set, so that only my process's memory is paged without affecting any of the shared objects my app may be using? Like I said, my app can deal with the performance cost, but I can't accept the same performance being sacrificed for who knows what other external processes and .NET runtime objects that are shared with my process's objects; that's unacceptable. I was really enjoying watching my app consume about 1 MB of RAM in the background. Do you know of any way to exclude the shared objects, or to page only the private working set rather than the whole thing?
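In case it helps, this is how I've been watching the numbers (a sketch; WorkingSet64 and PrivateMemorySize64 are the standard Process properties for total vs. private bytes):

VB.NET:
Imports System.Diagnostics

Module MemoryReport
    ' Print the total working set (private + shared pages) next to the
    ' private bytes, so the effect of a trim on each can be compared.
    Public Sub Report()
        Dim p As Process = Process.GetCurrentProcess()
        p.Refresh() ' refresh the cached property values
        Console.WriteLine("Working set:   {0:N0} KB", p.WorkingSet64 \ 1024)
        Console.WriteLine("Private bytes: {0:N0} KB", p.PrivateMemorySize64 \ 1024)
    End Sub
End Module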
 
me said:
could also be affecting the performance of those.
I said could; I know the reported memory of a .NET process also includes shared memory, but I don't know whether those shared parts are affected when it is paged out. If they are, then the operation has 'system-wide' implications; if not, all is good, if that is your intention. My objection is that it really doesn't matter, and 'what you see is not what you get': .NET being a managed platform for memory and other resources means it is inherently tuned for the best management and performance in these matters, for the benefit of your process and all other .NET processes. Performing such calls just to 'see' 1 MB in Task Manager is IMO uncalled for. After all, allocations do not reflect real usage, and those allocations are also immediately available to any other process that needs more physical memory.
 
I see. I would figure the same thing, because the MinWorkingSet and MaxWorkingSet documentation clearly states that the working set contains both private and shared data, so one would assume that all of it could potentially be paged out.

However, I suppose the answer is in the Working Set documentation, where it states the following:
If several processes share a page, removing the page from the working set of one process does not affect other processes. After a page is removed from the working sets of all processes that were using it, the page becomes a transition page. Transition pages remain cached in RAM until the page is either referenced again by some process or repurposed (for example, filled with zeros and given to another process). If a transition page has been modified since it was last written to disk (that is, if the page is "dirty"), then the page must be written to its backing store before it can be repurposed. The system may start writing dirty transition pages to their backing store as soon as such pages become available.

So I think it's OK to make those function calls after all! Thanks again for the detailed discussion, it's much appreciated. :)
 
