
StringRegExp - string passed by value instead of by reference?



A small example:

; create a very large string (should be about 400 MB because UTF-16 is used)
Global $sString = ""
For $i = 1 To 20 * 1024 * 1024
    $sString &= "xxxxxxxxxx"
Next

; precompile pattern
StringRegExp("", "^.")

; determine time required for a small input string:
$iT = TimerInit()
StringRegExp("Test", "^.")
ConsoleWrite(StringFormat("RegEx with small input string: % 8.3f ms\n", TimerDiff($iT)))

; determine time required for a large input string:
$iT = TimerInit()
StringRegExp($sString, "^.")
ConsoleWrite(StringFormat("RegEx with large input string: % 6.1f   ms\n", TimerDiff($iT)))

As you can see, the execution time depends directly on the size of the input string, although only the first character of the string is processed.
If StringLeft or StringMid is used instead of StringRegExp, the execution time is independent of the string size.
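For comparison, the same measurement with the plain string functions (a minimal sketch reusing $sString and the timing pattern from the example above):

$iT = TimerInit()
StringLeft($sString, 1)
ConsoleWrite(StringFormat("StringLeft with large input string: % 8.3f ms\n", TimerDiff($iT)))

$iT = TimerInit()
StringMid($sString, 1, 1)
ConsoleWrite(StringFormat("StringMid with large input string:  % 8.3f ms\n", TimerDiff($iT)))

Both calls extract only the first character; their runtime does not grow with the length of $sString.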

Since the pattern itself should only need to touch the first few bytes of the string's memory area, the time must be lost somewhere else.
This leads me to the assumption that somewhere in the implementation of StringRegExp() the input string is passed "by value".
This would require the creation of a local copy of the string, which would explain the loss of time.
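If it helps to picture the effect: the same kind of copy can be provoked at script level with user functions (a sketch only - the function names are made up, and whether AutoIt copies the argument eagerly in the by-value case is exactly the kind of internals question at hand):

Func _FirstCharByVal($sText) ; argument received by value - may force a copy of the string
    Return StringLeft($sText, 1)
EndFunc

Func _FirstCharByRef(ByRef $sText) ; argument received by reference - works on the caller's variable
    Return StringLeft($sText, 1)
EndFunc

$iT = TimerInit()
_FirstCharByVal($sString)
ConsoleWrite(StringFormat("UDF by value: % 8.3f ms\n", TimerDiff($iT)))

$iT = TimerInit()
_FirstCharByRef($sString)
ConsoleWrite(StringFormat("UDF ByRef:    % 8.3f ms\n", TimerDiff($iT)))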

Hence my two questions:
Do you know whether this really is a "by value" problem, or whether it has other causes?
If it is a "by value" problem, is there anything that speaks against changing StringRegExp in a future version of AutoIt?
This would lead to a massive jump in performance for large strings.


I don't know how easy such a change would be. PCRE (the regex engine used by AutoIt) is a C library, expecting arguments as C null-terminated strings, while AutoIt uses variants which (I guess) don't have a terminating null. I suspect this forces a temporary copy of the string, unless specific machinery is deployed.

This is of little importance in most cases (small- to medium-sized strings), but indeed it can be a disadvantage when processing a large input.
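As long as only a bounded prefix of the input matters, a script-side workaround is conceivable in the meantime (a sketch reusing $sString from the example above): copy the small piece once, then run the RegExp on the copy.

$sHead = StringLeft($sString, 64) ; cheap fixed-size copy of the prefix
StringRegExp($sHead, "^.")        ; the cost no longer depends on the full string length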

You can file a feature request on Trac about this.



4 minutes ago, jchd said:

while AutoIt uses variants which (I guess) don't have a terminating null.

The AutoIt variant is a class which can hold different datatypes.
For this, one attribute of the class is a type flag ("m_nVarType") which records which internal type is used to store the value.
If the type is "VAR_STRING", the value is stored as a normal C-style char array (declared as "char *m_szValue").

So this shouldn't be a problem.
But I was thinking along the same lines as you. I haven't studied the PCRE API, but it might turn out that a copy is needed at some point.
Hence my question whether anybody knows more about this (especially the devs).

 

20 minutes ago, jchd said:

but indeed it can be a disadvantage when processing a large input.

Exactly - it can even mean the difference between usable and completely unusable.
An example: I have a JSON parser which processes the JSON string sequentially. For a large string this means several thousand calls to StringRegExp.
Normally that is a matter of milliseconds or seconds. But because every small StringRegExp call now needs 500 ms (only because of the "by value" copy), the approach fails completely.
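To make that concrete, a stripped-down version of such a sequential tokenizer could look like this (a hypothetical sketch, not my actual parser, with a deliberately simplified token pattern):

Local $sJson = '{"a": [1, 2, "x"], "b": true}' ; stand-in for a very large JSON string
Local $iOffset = 1, $aToken
While $iOffset <= StringLen($sJson)
    ; match exactly one token at the current position (flag 1 = return array of matches);
    ; every iteration hands the complete $sJson to StringRegExp again
    $aToken = StringRegExp($sJson, '\G(\s*(?:[\[\]{}:,]|"[^"]*"|[^\s\[\]{}:,"]+))', 1, $iOffset)
    If @error Then ExitLoop
    $iOffset += StringLen($aToken[0]) ; advance past the consumed characters
WEnd

If each of those StringRegExp calls internally copies the whole input first, the cost of the copy is paid once per token instead of once per document.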

 

18 minutes ago, jchd said:

You can file a feature request on Trac about this.

That was my intention, of course. But I wanted to discuss the matter before filling the tracker with rubbish.


5 hours ago, AspirinJunkie said:

As you can see, the execution time depends directly on the size of the input string...

Where should I “see” it?

Was the output included?

Judging by the format specifiers used, 8.3f vs 6.1f, I'm guessing the big string could be 100× slower, hence the added precision, but that's just a guess.

 

 



16 hours ago, AspirinJunkie said:

That was my intention, of course. But I wanted to discuss the matter before filling the tracker with rubbish.

I do not think there will be much enthusiasm from the devs about this :rolleyes:

 

 

