WaitingForZion

Alexi 1.0


The AutoItObject team has brilliantly made object-oriented programming available in AutoIt. Not long after they began working on it, I tried to implement a lexical analyzer in AutoIt, but the code became unmanageable due to a lack of decomposition, among other reasons, so I gave up and just released the broken code. With the addition of object-oriented programming to AutoIt, however, I was able to implement my library successfully. This new object-oriented library for tokenizing strings is called Alexi. Alexi does away with the need to implement the first step in processing a structured language yourself: it lets you define token types and split a string into tokens of those types. Since this is only the first version, there are certain limitations, so not every structure can be tokenized, but for me at least, this is a big step in that direction.
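To give a feel for what "define token types and split a string into tokens of those types" means, here is a minimal sketch in Python (the library itself is AutoIt, and the names `tokenize` and `defs` here are illustrative, not Alexi's actual API):

```python
def tokenize(text, defs):
    """Split text into (type, lexeme) pairs.

    defs maps a token-type name to the set of characters that may
    appear in a token of that type; a maximal run of characters
    from one set becomes one token. Characters belonging to no
    definition are silently skipped.
    """
    tokens = []
    i = 0
    while i < len(text):
        for name, chars in defs.items():
            if text[i] in chars:
                j = i
                while j < len(text) and text[j] in chars:
                    j += 1
                tokens.append((name, text[i:j]))
                i = j
                break
        else:
            i += 1  # no definition matched: ignore the character
    return tokens

defs = {
    "number": set("0123456789"),
    "word":   set("abcdefghijklmnopqrstuvwxyz"),
}
print(tokenize("foo 42 bar", defs))
# [('word', 'foo'), ('number', '42'), ('word', 'bar')]
```

In Alexi the definitions and the tokenizer are objects rather than a dictionary and a function, but the basic mechanic, grouping maximal runs of characters by their declared class, is what the description above suggests.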

The library along with documentation and an example is in the zip file attached to this post.

Alexi.zip

Feedback is appreciated, and I kindly ask that you provide it.

Update: Scroll down to other post.


"This then is the message which we have heard of him, and declare unto you, that God is light, and in him is no darkness at all. If we say that we have fellowship with him, and walk in darkness, we lie, and do not the truth: But if we walk in the light, as he is in the light, we have fellowship one with another, and the blood of Jesus Christ his Son cleanseth us from all sin. If we say that we have no sin, we deceive ourselves, and the truth is not in us. If we confess our sins, he is faithful and just to forgive us our sins, and to cleanse us from all unrighteousness. If we say that we have not sinned, we make him a liar, and his word is not in us." (I John 1:5-10)

 




Great!

I like it.




It's totally over my head as to what this actually does, but I've been looking forward to seeing something done with AutoItObject to study.




Nice to see something already written in AutoItObject.

Now go write that parser! I've heard Recursive Descent is a good approach.

If you wish to keep your code even more object-oriented, try replacing your arrays with a LinkedList object (available in the AutoItObject download).
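For anyone wondering what the suggested next step looks like: a recursive-descent parser over a token list can be very small. This Python sketch is purely illustrative (not tied to Alexi's API) and evaluates additive expressions from (type, lexeme) pairs like those a lexer produces:

```python
def parse_expr(tokens):
    """Tiny recursive-descent evaluator for the grammar:
        expr := number (('+'|'-') number)*
    tokens is a list of (type, lexeme) pairs, e.g. from a lexer."""
    def number(i):
        kind, lex = tokens[i]
        if kind != "number":
            raise SyntaxError(f"expected number, got {lex!r}")
        return int(lex), i + 1

    def expr(i):
        value, i = number(i)
        # left-associative chain of + and -
        while i < len(tokens) and tokens[i][1] in ("+", "-"):
            op = tokens[i][1]
            rhs, i = number(i + 1)
            value = value + rhs if op == "+" else value - rhs
        return value, i

    value, i = expr(0)
    if i != len(tokens):
        raise SyntaxError("trailing tokens")
    return value

toks = [("number", "1"), ("op", "+"), ("number", "2"), ("op", "-"), ("number", "4")]
print(parse_expr(toks))  # -1
```

Each grammar rule becomes one function that consumes tokens and returns a result plus the next index, which is the whole idea behind recursive descent.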




This past week I was not in a place where I could work on my lexer, but for the first few days I was filled with ideas on how to improve it. Sad to say, I have only made one minor update to the library. In the earlier version, the _StringTokenize function would simply ignore all occurrences of characters not associated with a token definition. I understand that in many cases it may not be desirable to allow these occurrences. To remedy this problem, I have merely added a parameter and a few more lines of code allowing you to choose whether you want the function to terminate with an error when this happens.
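The new behavior can be sketched in a few lines. The code below is Python rather than AutoIt, and `tokenize` and `strict` are illustrative names, not the actual signature of `_StringTokenize`:

```python
def tokenize(text, defs, strict=False):
    """Split text into (type, lexeme) pairs.

    With strict=True, a character belonging to no token definition
    raises an error instead of being skipped -- mirroring the new
    optional parameter described above.
    """
    tokens = []
    i = 0
    while i < len(text):
        for name, chars in defs.items():
            if text[i] in chars:
                j = i
                while j < len(text) and text[j] in chars:
                    j += 1
                tokens.append((name, text[i:j]))
                i = j
                break
        else:
            if strict:
                raise ValueError(f"unrecognized character {text[i]!r} at position {i}")
            i += 1  # lenient mode: ignore the character, as before
    return tokens

defs = {"digit": set("0123456789")}
print(tokenize("12;34", defs))           # lenient: [('digit', '12'), ('digit', '34')]
# tokenize("12;34", defs, strict=True)   # raises ValueError on ';'
```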

It should be pretty easy to break up all kinds of language structures. Limitations still remain, but they can be worked around.

The only severe limitation I can think of, besides the speed limitation that comes with being implemented in a scripting language, is the inability to recognize tokens made of characters whose types vary according to order, and tokens made of differing literal characters. There is a simple remedy for the latter: simply include all possible characters in the token definition, and then, after the tokens have been recognized, process them further by filtering.

So instead of defining a separate token type for each operator when you have operators represented by more than one character, simply define one token type for all operator characters. After you have your list of tokens, merely filter those tokens identified as the given token type.
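The workaround might look like this in Python (illustrative only; `KNOWN_OPS` and `split_ops` are hypothetical names, not part of Alexi). A run of operator characters lexed as one catch-all token is refined into individual operators in a second pass:

```python
# Second-pass filter for a catch-all "operator" token type.
KNOWN_OPS = ["<=", ">=", "==", "<", ">", "=", "+", "-"]  # longest first

def split_ops(lexeme):
    """Greedily split a run of operator characters into known operators."""
    ops = []
    while lexeme:
        for op in KNOWN_OPS:
            if lexeme.startswith(op):
                ops.append(op)
                lexeme = lexeme[len(op):]
                break
        else:
            raise ValueError(f"unknown operator in {lexeme!r}")
    return ops

# A run like "<=+" recognized as one operator token is split afterwards:
print(split_ops("<=+"))   # ['<=', '+']
print(split_ops("==-"))   # ['==', '-']
```

Listing multi-character operators before their one-character prefixes is what keeps the greedy split from breaking `<=` into `<` and `=`.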

I'm really sorry for this inconvenience. To make the necessary improvements, I will have to make another implementation taking these things into mind.

Update: Alexi 1.0.1.zip

Now, does anybody have any ideas or suggestions? Or does anyone have feedback?



 


Line 104 in Alexi.au3 throws an error because line 102 is missing "... UBound($aTokenDefs)-1".

Because of the missing comments I can't debug any further.

Try to tokenize a complete AutoIt script, not just one line of pseudo-code. Maybe then you will find the bugs.



