I've been using AutoIt for several years now, and I've really got the hang of it!
I'm quite the curious, OCD-perfectionist kind of guy, so I can't help wondering:
what would be the best way to write code for the compiler / interpreter / scripting engine?
We're talking about the inner workings of AutoIt's core here, and how to give it as little friction as possible while also taking care of the machine running the script.
Imagine a script where we would constantly have to assign a Boolean value to a variable:
; A:
Global $bool = False
$bool = True
$bool = True
$bool = False

; B:
Global $bool = False ; Global, so check() can see it
check(True)
check(True)
check(False)

Func check($b)
    If $bool = $b Then Return
    $bool = $b
EndFunc

In this case, would it be better to just overwrite the variable (A) or first check whether we really need to (B)?
What would be best for the computer's memory if it had to do this non-stop for a year?
Another example, imagine you're writing a function with an if statement.
If you could look under the hood of AutoIt, what would be the best way to give your computer as little work / nesting / stack filling as possible:
; A:
Func decide($b_Input)
    If $b_Input Then
        ; do something
    Else
        ; do something else
    EndIf
EndFunc

; B:
Func decide($b_Input)
    If $b_Input Then
        ; do something
        Return
    EndIf
    ; do something else
EndFunc

Last one for now:
; A:
While 1
    ; do stuff
WEnd

; B:
While True
    ; do stuff
WEnd

Isn't AutoIt taking an extra step in converting 1 to a Boolean in example (A)?
Or is it the other way around, and does the (B) way make AutoIt first convert a keyword (True or False) to a numerical value (0 or 1)?
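One way to move questions like these from gut feeling to numbers is to time both variants yourself; AutoIt's built-in TimerInit()/TimerDiff() are enough for a rough micro-benchmark. A minimal sketch (the loop count and the two variants are just placeholders for whichever pair you want to compare):

```autoit
; Rough micro-benchmark: blind overwrite vs. check-then-assign.
Global $bool = False

Local $hTimer = TimerInit()
For $i = 1 To 1000000
    $bool = True ; variant A: always overwrite
Next
ConsoleWrite("overwrite:   " & Round(TimerDiff($hTimer), 2) & " ms" & @CRLF)

$hTimer = TimerInit()
For $i = 1 To 1000000
    If $bool <> True Then $bool = True ; variant B: check first
Next
ConsoleWrite("check first: " & Round(TimerDiff($hTimer), 2) & " ms" & @CRLF)
```

Run each variant a few times and compare averages; differences at this scale are easily drowned out by background noise.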
I think this kind of detail is quite interesting; it makes me wonder how AutoIt converts and runs our code.
What are your opinions on this topic?
Any coders who know more about the inner workings of AutoIt?
Any people like me who ask themselves similar questions (with examples)?
Let me know! 😉
I'm trying to make a listing of all files in subdirectories at a specific level of a folder structure. Let me explain visually; that will help a lot.
I have a folder structure like this:
ROOT
|--- SUBDIR 1
|    |---- SUBDIR 1.1
|    |     |----- SUBDIR 1.1.1
|    |     |      |---- File1.ext
|    |     |----- SUBDIR 1.1.2
|    |     |      |---- File2.ext
|    |     |----- SUBDIR 1.1.3
|    |     |      |---- File2.ext
|    |     |----- SUBDIR 1.1.4
|    |     |      |---- File2.ext
|    |     |----- SUBDIR 1.1.5
|    |     |      |---- File2.ext
|    |---- SUBDIR 1.2
|    |     |----- SUBDIR 1.2.1
|    |     .....
|    |---- SUBDIR 1.3
....

I use _FileListToArrayRec twice:
- once to make an array of the specific directories I should be working in: I need all files on the x.x level, so it goes just to that depth, using a negative integer for $iRecur
- once again, per directory, to create an array of all files found under that directory and its subdirectories (level x.x.x\files...)
What happens now is that _FileListToArrayRec always includes all levels before the maximum depth is reached. The result looks like this:
Row 0    15
Row 1    Root\Subdir 1
Row 2    Root\Subdir 2
Row 3    Root\Subdir 3
Row 4    Root\Subdir 1\Subdir 1.1
Row 5    Root\Subdir 1\Subdir 1.2
Row 6    Root\Subdir 1\Subdir 1.3
...

Needless to say, when my second function iterates over this array, it finds all files twice: once on the x level, and once again on the x.x level. There is no way for me to avoid the recursive option in the second iteration, since the files actually are in a subdirectory there.
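One workaround (a sketch under assumptions, not a tested drop-in: the root path, depth, and flag values below come from the standard File.au3 constants) is to ask _FileListToArrayRec for relative paths and then keep only the entries whose path depth matches the level you want, by counting backslashes. StringReplace() reports the number of replacements in @extended, which makes the counting a one-liner:

```autoit
#include <File.au3>

; All folders up to 2 levels deep, as paths relative to the root.
Local $aDirs = _FileListToArrayRec("C:\Root", "*", $FLTAR_FOLDERS, 2, $FLTAR_SORT, $FLTAR_RELPATH)

Local $aLevel2[1] = [0] ; result array, count in element [0] like the UDF's own format
For $i = 1 To $aDirs[0]
    ; an x.x-level relative path contains exactly one backslash, e.g. "Subdir 1\Subdir 1.1"
    StringReplace($aDirs[$i], "\", "\")
    If @extended = 1 Then
        ReDim $aLevel2[$aLevel2[0] + 2]
        $aLevel2[0] += 1
        $aLevel2[$aLevel2[0]] = $aDirs[$i]
    EndIf
Next
```

The filtered array can then feed the second, recursive pass, so each file is only ever found once.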
Where are the wizards of programming logic? I can't seem to find a comprehensible or easily implementable solution to this issue.
Thanks in advance and kind regards,
I was looking around the forum to see if there were any customizable solutions for creating a PDF from scratch, to end up with something like a report...
What I'd like to do is something with a header (2 logos and a title) and a table containing data read from a file.
At the moment I'm working with HTML, since I know it and it's very simple to build a table with some data inside...
But now I'm a bit stuck on exporting the HTML page to PDF... Here too, if someone knows how to do it, please, I'm listening.
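If an external tool is acceptable, one common route is to keep the HTML workflow and hand the conversion to a command-line converter such as wkhtmltopdf, driven from AutoIt via ShellExecuteWait(). A minimal sketch; the install path and file names are assumptions, and wkhtmltopdf has to be installed separately:

```autoit
; Convert an HTML report to PDF with wkhtmltopdf.
Local $sExe  = "C:\Program Files\wkhtmltopdf\bin\wkhtmltopdf.exe" ; assumed install path
Local $sHtml = @ScriptDir & "\report.html"
Local $sPdf  = @ScriptDir & "\report.pdf"

Local $iRet = ShellExecuteWait($sExe, '"' & $sHtml & '" "' & $sPdf & '"')
If $iRet <> 0 Then MsgBox(16, "Error", "wkhtmltopdf exited with code " & $iRet)
```

This keeps the header, logos, and table as plain HTML/CSS, which is usually far easier than drawing a PDF directly.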
Hello, first time poster here
I am working on a project that has to parse a log file in real time. The thing is, I know it's hard for AutoIt to attach itself to log files that are already in use by other programs, at least in my experience.
I was taking a look at this thread because the log file is quite large and I think AutoIt might be a little slow on its own.
The thing is, I don't know how to use this properly to extract all the data out of a log file, or whether there is a native way to do this in AutoIt.
Basically, I just need a log parser that is able to read from a log that is 'already opened' and 'being written to'.
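For what it's worth, FileOpen() in read mode normally doesn't take an exclusive lock, so a simple polling "tail" often works even while another program keeps writing. A sketch of the idea, assuming the writer appends complete lines (the path and poll interval are placeholders):

```autoit
#include <FileConstants.au3>

Local $hLog = FileOpen("C:\logs\app.log", $FO_READ) ; assumed path
If $hLog = -1 Then Exit MsgBox(16, "Error", "Could not open the log file.")

FileSetPos($hLog, 0, $FILE_END) ; skip existing content, watch for new lines only

While 1
    Local $sChunk = FileRead($hLog) ; returns "" until the writer appends more
    If $sChunk <> "" Then
        Local $aLines = StringSplit(StringStripCR($sChunk), @LF)
        For $i = 1 To $aLines[0]
            If $aLines[$i] <> "" Then ConsoleWrite("new: " & $aLines[$i] & @CRLF)
        Next
    EndIf
    Sleep(250) ; poll interval
WEnd
```

Whether this works depends on the sharing mode the writing program opened the file with; if it requested exclusive access, no reader can attach, AutoIt or otherwise.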
Are there any people here who know how to use a parser generator that uses BNF (not EBNF!)?
I'm using the GOLD Parser:
Not possible in AutoIt:
What I'm looking for is the best way to parse a list:
<List> ::= <Item> ',' <List>
         | <Item>

<Item> ::= Number | String

for example, the tree returned from:
'test', 1, 2 is:
Since it's not possible in AutoIt code, I did it in VBScript.
This is what I did:
' in the parse loop:
Case Rule_List_Comma
    Set result = New list
    Call result.input(.tokens(0).data, .tokens(2).data)

' the list class:
Class list
    Private arg0
    Private arg1

    Public Sub input(a, b)
        Set arg0 = a
        Set arg1 = b
    End Sub

    Private Sub push(item, ByRef stack)
        ReDim Preserve stack(UBound(stack) + 1)
        stack(UBound(stack)) = item
    End Sub

    Public Function value
        value = Array(arg0.value)
        value2 = arg1.value
        If IsArray(value2) Then
            For Each thing In value2
                push thing, value
            Next
        Else
            push value2, value
        End If
    End Function
End Class
Is there a better way to do this?