
Help with removing blank lines from an array?


archies


Hey guys,

I just started using AutoIt yesterday and I think I'm learning a lot about it, thanks to everyone's help here.

I'm trying to remove blank lines from a text file that I have read to an array.

test.txt:

This is some line of text
This is some line of text
This is some line of text
This is some line of text

This is some line of text
This is some line of text

This is some line of text
This is some line of text

This is some line of text

New file should look like this:

test.txt:

This is some line of text
This is some line of text
This is some line of text
This is some line of text
This is some line of text
This is some line of text
This is some line of text
This is some line of text
This is some line of text

Here is what I have got so far:

#include <Array.au3>
#include <File.au3>

_FileReadToArray("c:\test.txt", $StrArray)

For $n = 1 To $StrArray[0]
        If StringLen($StrArray[$n]) = 0 Then
            _ArrayDelete($StrArray, $n) ; <------- SOMETHING IS WRONG HERE.
        EndIf
Next

_ArrayDisplay($StrArray, "the new text")

See, I figured out that I can detect a blank line with If StringLen($StrArray[$n]) = 0, but I can't seem to figure out how to actually remove that blank line. :P

Edited by archies

Hi again,
#include <File.au3>

Global $TestFile = @ScriptDir & "\test.txt", $String
Dim $MyArray

_FileReadToArray($TestFile, $MyArray)
For $i = 1 To $MyArray[0]
    If $MyArray[$i] <> "" Then $String &= $MyArray[$i] & "|"
Next
Dim $MyArray = StringSplit(StringTrimRight($String, 1), "|")    
_FileWriteFromArray($TestFile, $MyArray, 1)

This is good if you need the result in an array. If you just need the data written back to a file, then StringSplit() for the new array is not required; just use @CRLF as the delimiter in the string and then write the whole thing back to the file in one go.
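For reference, a minimal sketch of that string-only variant (just an illustration, reusing the same $TestFile path as the snippet above; FileOpen mode 2 overwrites the existing file):

#include <File.au3>

Global $TestFile = @ScriptDir & "\test.txt", $String = ""
Dim $MyArray

_FileReadToArray($TestFile, $MyArray)
For $i = 1 To $MyArray[0]
    If $MyArray[$i] <> "" Then $String &= $MyArray[$i] & @CRLF
Next

Local $hFile = FileOpen($TestFile, 2) ; mode 2 = open for write, erase previous contents
FileWrite($hFile, StringTrimRight($String, 2)) ; drop the trailing @CRLF
FileClose($hFile)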

:P

;)


@smashly

I love the way you solved this :P I would never have thought to do it this way ;)

Arrays are very handy, and working with them is usually easy ... until it comes to deleting elements; then it gets tricky. The Array.au3 UDF does have an _ArrayDelete() function, which deletes an element and ReDims the original array. That is also a very handy function, but it gets tricky on a job like the one in the OP. Why? Because the For-Next loop's upper bound is set once at the start, from the original count (either UBound() or $array[0]). As elements are deleted and the array is ReDim-ed, the number of elements gets lower, and at some point the loop counter goes past the actual array size, producing that hated error: "Array variable has incorrect number of subscripts or subscript dimension range exceeded."
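For illustration only (this sketch is not from any post in the thread): one possible way to keep a forward scan valid with _ArrayDelete() is to swap the For loop for a While loop and only advance the index when nothing was deleted, keeping the count element in sync.

#include <Array.au3>
#include <File.au3>

Dim $StrArray
_FileReadToArray("c:\test.txt", $StrArray)

Local $n = 1
While $n <= $StrArray[0]
    If StringLen($StrArray[$n]) = 0 Then
        _ArrayDelete($StrArray, $n) ; the following elements shift down into index $n
        $StrArray[0] -= 1           ; keep the stored line count in sync
    Else
        $n += 1                     ; only advance when nothing was deleted here
    EndIf
WEnd

Counting down, as in the next reply, avoids that bookkeeping altogether.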

Fortunately, smashly's way is much easier :)



Hello archies,

Try this...

#include <Array.au3>
#include <File.au3>

Dim $StrArray[1]
_FileReadToArray( "c:\test.txt", $StrArray )

For $n = $StrArray[0] To 1 Step -1
    If StringLen( $StrArray[$n] ) = 0 Then
        _ArrayDelete( $StrArray, $n )
    EndIf
Next

_ArrayDisplay($StrArray, "the new text")

The problem was that the loop's upper bound was set to the total number of elements before any were removed. When elements were deleted, that total shrank, so counting up means we eventually reference an element that no longer exists. To get around this, iterate over the elements in reverse.

Zach...

EDIT: enaiman's reply describes it better.

Edited by zfisherdrums

That works great, and is simpler to code than the string assembly/StringSplit() method. The only problem is that it is darn slow. Every pass through _ArrayDelete() has to copy every element after the deleted one, one at a time, and then ReDim the array.
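Roughly what that cost looks like (a sketch only, not the actual Array.au3 source, assuming a 1-D array):

Func _ArrayDeleteSketch(ByRef $aArray, $iIndex)
    For $i = $iIndex To UBound($aArray) - 2
        $aArray[$i] = $aArray[$i + 1] ; copy every following element down one slot
    Next
    ReDim $aArray[UBound($aArray) - 1] ; then shrink the array by one element
EndFunc

Do that once per blank line and the copying adds up quickly on a big file.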

:P


OMG you guys are good! :P

So basically I can go with zfisherdrums' "clean code" method. Or I can go with smashly's "faster processing" method...

Well, I've tested them both and they both do exactly what I need, and both are pretty fast too. I can't tell the difference in speed, really. It's faster than the blink of an eye (though I am only testing with smaller files at the moment). It would be interesting to see how they handle large files, but the question is how large they would have to be before a speed difference is even noticeable ;)

Thanks guys!!!


Shoot. You're right about _ArrayDelete() being slow. I guess it depends on the context, really. If the script were parsing large files in succession over an extended period of time... yeah, mine is definitely crossing the finish line last. But if it's just fired off every once in a while, it may not be as noticeable.

Perhaps we could just solve the issue at the source and not import blank lines? :P

#include <Array.au3>

Func _FileReadToArray_NoBlanksBecauseThatIsJustAWasteOfTimeAndSpace( $path )
    If Not FileExists( $path ) then Return -1
    Local $strLine = "", $strLines = ""
    Local $hFile = FileOpen( $path, 0 )
    While 1
        $strline = FileReadLine($hFile)
        If @error = -1 Then ExitLoop
        If StringStripWS( $strline , 8) <> "" then $strLines &= $strline & Chr(1)
    Wend    
    FileClose( $hFile )

    Return StringSplit( StringTrimRight( $strLines, 1 ), Chr(1) )
EndFunc

$aTest = _FileReadToArray_NoBlanksBecauseThatIsJustAWasteOfTimeAndSpace( @ScriptDir & "\ReadMe.txt" )
_ArrayDisplay( $aTest, "the new text")

Edit: Updated code to remove unnecessary includes and to utilize PsaltyDS's suggestions in post #3.

Edited by zfisherdrums


Be advised that PsaltyDS's point about speed of execution may be a valid concern in your environment. His is definitely faster.

Glad we could help!

Zach...


Not so, m8 .. your 2nd posted version of the function is the faster one..

If you put TimerInit()/TimerDiff() around the various methods, then yours wins hands down for speed..

Probably because you're not calling an #include UDF function to read the file.
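For reference, the TimerInit()/TimerDiff() pattern looks like this; _OneOfTheMethods() is only a placeholder name for whichever snippet from this thread you want to measure:

Local $hTimer = TimerInit()
; _OneOfTheMethods(@ScriptDir & "\test.txt") ; <- paste the variant under test here
Local $fElapsed = TimerDiff($hTimer) ; elapsed time in milliseconds
ConsoleWrite("Elapsed: " & Round($fElapsed, 2) & " ms" & @CRLF)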

Nice work :P

Edited by smashly
