
large data sets with OMP


Hi,

I have a long-running program (many days) that uses many (20-40) OMP threads.  It uses a large amount of input data, which is fixed during execution.  I've run into the 2 GB stack limit issue and am now somewhat confused about the exact definitions of the various compiler/linker options and the Windows Resource Monitor reports.  I'm using the 2013 1.1.139 Fortran compiler on 64-bit Windows 7.  The data arrays are allocatable but, at the moment, live in modules or in common; I understand that makes them static.  I'm thinking of moving the largest one into the main program and passing it to subroutines as needed.
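For concreteness, here's a minimal sketch of what I have in mind (the names and sizes are made up; the real arrays are read from input files):

    program big_driver
        implicit none
        real(8), allocatable :: grid(:,:)   ! the largest input array
        integer :: n
        n = 20000                           ! ~3.2 GB at real(8); real size set at run time
        allocate (grid(n, n))               ! allocatable => heap, not the static image
        grid = 1.0d0
        call process(grid)
        deallocate (grid)
    contains
        subroutine process(a)
            real(8), intent(in) :: a(:,:)   ! assumed shape: only a descriptor is passed
            print *, 'sum of first column:', sum(a(:,1))
        end subroutine process
    end program big_driver

My understanding is that only the array descriptor travels through the call, so no copy of the data is made.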

When the stack and heap commit sizes are set to 2 GB and the reserves are set to 0, the program dies when it enters the OMP section.  Setting /heap-arrays to 0 does not help.  There are no diagnostics, but I assume I'm overrunning the stack or the heap.  Adding 2 GB to the stack and heap reserve sizes works; however, when the OMP section is entered, the committed memory shown in the Windows Resource Monitor zooms to tens of GB while the percentage of used memory and the working set it reports don't change much.
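For reference, this is roughly how I'm building (simplified; myprog.f90 stands in for the real sources, and the hex values are 2 GB):

    ifort /O2 /Qopenmp /heap-arrays:0 myprog.f90 ^
        /link /STACK:0x80000000,0x80000000 /HEAP:0x80000000,0x80000000

The first number after /STACK and /HEAP is the reserve and the second is the commit, if I've read the linker documentation correctly.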

Questions:

One of Steve Lionel's suggestions (5/16/2011) is to use allocatable arrays in modules rather than in common.  I did that, to no effect, with the reserves set to 0.  It is during the module-array allocations that the committed and working-set values reported by the MS Resource Monitor get much larger.  That happens in a serial part of the code, but I'll soon be working with data sets that exceed the 2 GB limits in the module as well.  Don't I have to get them out of the module?  Even if I make everything allocatable in the main program, don't I still have the same problem once my resident data set exceeds 2 GB?  If that makes the arrays dynamic (with the 8 TB limit), I assume passing them to the OMP threads in subroutine calls will leave the data in one memory location (passed by reference, Fortran-style) and not fill memory with multiple copies.
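To make that first question concrete, this is the pattern I'm using now, more or less (simplified; the real arrays are multi-dimensional and much larger):

    module input_data
        implicit none
        real(8), allocatable :: table(:)    ! allocated once, in serial code
    end module input_data

    subroutine sum_table(total)
        use input_data
        implicit none
        real(8), intent(out) :: total
        integer :: i
        total = 0.0d0
    !$omp parallel do reduction(+:total)
        do i = 1, size(table)
            total = total + table(i)        ! module data is shared by default:
        end do                              ! every thread reads the same copy
    !$omp end parallel do
    end subroutine sum_table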

Are the reserves placed in physical memory, if available, or, as documented, in virtual memory (i.e., the page file on disk)?  How can the committed size MS reports be much larger than the linker maximums?

What is the difference, if any, between 'commit' memory as the Intel compiler/linker options define it and the committed memory in the MS Resource Monitor report?

Why does the MS Monitor report a very large committed size, while reporting almost no change in Used Physical Memory, when the OMP section starts?  Does each thread get the committed or the reserved memory allocation?  If memory is committed, doesn't that mean it is used?  The large data sets are shared, as the Used Physical Memory seems to indicate.  It appears that starting the OMP section, even though it does not consume much more memory, requires a much larger committed space, which somehow causes the overflows.
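For what it's worth, my current (possibly wrong) mental model is that the main thread's stack comes from the linker /STACK reserve, while each extra OMP thread reserves its own stack when the team starts, sized by the OMP_STACKSIZE (or Intel's KMP_STACKSIZE) environment variable, e.g.

    set OMP_STACKSIZE=512M

so 40 threads would add 40 x 512 MB of reserved address space even if the threads hardly touch it.  Is that the right picture?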

When the MS committed size exceeds the actual physical memory (while the used physical memory is well below it), does this affect execution, virtual memory, or something else?

The MS 'working set' has stayed under 2 GB.  Is the working set the amount of memory actually used?  If so, why am I blowing up when the reserves are not set?

Is Windows 8 also subject to these 2 GB limits?

I gather that 'large address aware' has nothing to do with this issue?  (I've tried it, but not in all possible combinations.)  Is there some other method folks are using to get around this issue with large static data sets?

Once I more fully understand what these options and reports mean, I'll be able to properly change the code to accommodate the forthcoming, much larger data sets without sacrificing speed, as the runs already take a lot of time.

--Thanks,  Bruce

