Pre-allocating large arrays

Quinch

Just thought to check - how are arrays stored, in terms of memory usage? Will declaring a large array slow down the execution compared to declaring a small one, assuming they hold an equal number of values?

I'm knocking together a script that does most of its work through several arrays - as a value is entered, the arrays are widened to accommodate the new information. However, would it slow the script down much if I simply defined a very large array from the start and said nerts to ReDim?
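
For concreteness, a minimal sketch of that grow-as-you-go pattern, with ReDim widening the array for each new value ($aValues and _AddValue are hypothetical names, not from the actual script):

; Widen the array by one slot per new value (ReDim preserves existing data).
Global $aValues[1] = [0] ; element [0] tracks how many slots are in use

Func _AddValue($vNew)
    $aValues[0] += 1
    ReDim $aValues[$aValues[0] + 1]
    $aValues[$aValues[0]] = $vNew
EndFunc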

dbzfanatic

As long as you leave the default value of the indexes as 0, I don't think it would have much of an impact on your speed. I could be wrong, but that's what common sense says to me.

Quinch

Errr, default value of the indexes?

picaxe


I've seen speed improvements of 8x by declaring a large array once up front rather than ReDim-ing it inside a loop.
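
A rough way to measure that difference yourself, using AutoIt's TimerInit/TimerDiff (the exact ratio will vary with machine and array size):

Local $hTimer = TimerInit()
Dim $a[1]
For $i = 1 To 10000
    ReDim $a[$i + 1] ; grow by one element every pass
    $a[$i] = $i
Next
ConsoleWrite("ReDim per insert: " & TimerDiff($hTimer) & " ms" & @CRLF)

$hTimer = TimerInit()
Dim $b[10001] ; allocated once, up front
For $i = 1 To 10000
    $b[$i] = $i
Next
ConsoleWrite("Pre-allocated:    " & TimerDiff($hTimer) & " ms" & @CRLF)

Both loops do the same assignments; the second pays the allocation cost only once.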

Quinch

Alright, so there's no particular downside to declaring, say, a [10000][3][10000][1000] array right off the bat and populating it as needed?

Also, does the 2^24 limit refer to the total size of the array, or just the number of values that can be inserted into it?

picaxe

I believe the total number of elements, i.e. the product of the sizes of all dimensions, must not exceed 2^24.
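
For scale, the [10000][3][10000][1000] array suggested above works out to 10000 * 3 * 10000 * 1000 = 300,000,000,000 elements, vastly more than 2^24 = 16,777,216, so that particular declaration would exceed the limit. A one-line check:

ConsoleWrite(10000 * 3 * 10000 * 1000 & " elements vs a limit of " & 2^24 & @CRLF)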

PsaltyDS

Access speed is no different between a large and a small array; it's the extra steps to ReDim the array that cause the slowdown. That cost can be minimized by ReDim-ing in increments as required, for example adding 100 to the size of the first dimension each time, as sketched below.
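
A sketch of that chunked-growth approach ($aData, $iCount, and _Append are illustrative names):

; Grow in blocks of 100 so ReDim runs once per 100 inserts, not once per insert.
Global $aData[100], $iCount = 0

Func _Append($vNew)
    If $iCount = UBound($aData) Then ReDim $aData[UBound($aData) + 100]
    $aData[$iCount] = $vNew
    $iCount += 1
EndFunc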

An experiment to show the memory penalty for declaring unnecessarily large arrays:

1. Open Task Manager on the Processes tab.

2. Run this script from SciTE:

For $n = 1 To 10
    Dim $avArray[1000][16][1000] ; 16,000,000 elements, just under the 2^24 limit
    Sleep(3000)
    Dim $avArray[10][16][1000] ; 160,000 elements; re-declaring with Dim releases the old contents
    Sleep(3000)
Next

3. Watch the Memory column for the AutoIt3.exe process as the declared array changes size every three seconds.




Quinch

So in a nutshell, it's going to use a heckload of memory while it executes, but on the whole, there won't be any difference in execution speed?

Cool, I can live with that. Thanks.

