Math in Hex() function


I ran into a strange issue today while updating one of my scripts. When I tried to do some simple division on a variable and then convert it to hex, I did not get the expected value back.

An example:

```
$Size = 128*2
ConsoleWrite('128*2 = ' & $Size & @CRLF)
ConsoleWrite('Hex() returns: ' & Hex($Size) & @CRLF)

$Size = 256
ConsoleWrite('256 = ' & $Size & @CRLF)
ConsoleWrite('Hex() returns: ' & Hex($Size) & @CRLF)

$Size = 512/2
ConsoleWrite('512/2 = ' & $Size & @CRLF)
ConsoleWrite('Hex() returns: ' & Hex($Size) & @CRLF)
```

Returns:

```
128*2 = 256
Hex() returns: 00000100
256 = 256
Hex() returns: 00000100
512/2 = 256
Hex() returns: 4070000000000000
```

I am running version: 3.3.8.1

While I can work around this issue in the script, I would like to understand better what is going on.

Thank you,

Kerros

How to learn scripting: Figure out enough to be dangerous, then ask for assistance.


There have been changes to Hex().

It now detects doubles internally, so my guess would be that the result of a division is classed as a double, even when it has no fractional part, and Hex() then returns the 64-bit floating-point representation instead of a 32-bit integer.
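The lengths of the returned strings support this guess: 00000100 is four bytes (a 32-bit integer), while 4070000000000000 is eight bytes, and 0x4070000000000000 is exactly the IEEE 754 double-precision bit pattern for 256.0. As a sketch outside AutoIt, this can be checked in Python with the standard struct module:

```python
import struct

def double_bits_hex(value):
    # Raw IEEE 754 (big-endian) bytes of a float, rendered as uppercase hex,
    # mirroring what Hex() prints when handed a double.
    return struct.pack('>d', value).hex().upper()

print(double_bits_hex(512 / 2))   # 4070000000000000, same as Hex(512/2)
print(double_bits_hex(52.5))      # 404A400000000000, same as Hex(52.5)
```

Python's `/` operator always produces a float, so it reproduces the situation directly.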


Monkey's are, like, natures humans.


Thanks John, that was the insight I needed. While I do error checking earlier in the script to verify that the value is an Int as I expect, I was not doing so right before passing it to the Hex() function.

I tried the Hex() function on a double to verify your thought, and I did get the expected result:

```
ConsoleWrite('Hex on a double: ' & Hex(52.5) & @CRLF)
```

Output:

```
Hex on a double: 404A400000000000
```

Here is an example of the solution:

```
$Size = Int(512/2)
ConsoleWrite('Int(512/2) = ' & $Size & @CRLF)
ConsoleWrite('Hex() returns: ' & Hex($Size) & @CRLF)
```

Output:

```
Int(512/2) = 256
Hex() returns: 00000100
```
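The same guard works in any language: whenever the arithmetic may have produced a float, coerce the value back to an integer before formatting it as hex. A minimal sketch of the pattern in Python, whose `/` also always yields a float:

```python
def to_hex32(value):
    # Coerce to int first so we format the number itself,
    # not its floating-point bit pattern.
    return format(int(value), '08X')

print(to_hex32(512 / 2))   # 00000100
```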

Kerros



• Similar Content

• Hi AutoIt programmers, excuse me for bothering you with multiple topics.

In AutoIt we can use the Number() function to convert a hex string to a number, but its output is different from the C# output, and I want to make the C# output match the AutoIt code.

For example, I use this in AutoIt:

```
Local $dBinary = Binary("Hello") ; Create binary data from a string.
Local $dExtract = Number(BinaryMid($dBinary, 1, 5))
ConsoleWrite($dExtract & @CRLF)
```

And I use this for C#:
```
using System;
using System.Text;

// Namespace is the project name
namespace TEST
{
    class Program
    {
        public static void Main(string[] args)
        {
            // Declare a variable and assign a hex value
            string dd = ToHex("Hello", Encoding.ASCII);
            decimal d = Int64.Parse(dd, System.Globalization.NumberStyles.HexNumber);
            Console.WriteLine("Result: " + d);
            // Hit ENTER to exit
            Console.ReadLine();
        }

        public static string ToHex(string sInput, Encoding oEncoding, bool b0x_Prefix = false)
        {
            byte[] a_binaryOutput = oEncoding.GetBytes(sInput);
            string sOutput = BitConverter.ToString(a_binaryOutput).Replace("-", "");
            if (b0x_Prefix == false)
            {
                return sOutput;
            }
            else
            {
                return "0x" + sOutput;
            }
        }
    }
}
```
Once again, excuse me for creating a new topic; in fact I'm making a library for GAuthOTP from a topic on this forum.
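One likely source of the mismatch (an assumption, since the thread does not show the two outputs side by side) is byte order: the C# code parses the hex string "48656C6C6F" as a big-endian number, while the numeric conversion on the AutoIt side may read the same five bytes in the opposite order. A small Python sketch showing both interpretations of the bytes of "Hello":

```python
data = b"Hello"  # bytes 48 65 6C 6C 6F

# The C# route: the hex string is parsed as a big-endian number.
big = int.from_bytes(data, "big")

# The opposite byte order, for comparison.
little = int.from_bytes(data, "little")

print(big)     # 310939249775  (0x48656C6C6F)
print(little)  # 478560413000  (0x6F6C6C6548)
```

If the two programs print these two different numbers, forcing one side to use the other's byte order should reconcile them.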
• By jmor
For some reason, I can't convert a decimal color to hex once it has been calculated.
For example, this works:
```
Local $iColor = 3310546
ConsoleWrite("$iColor (dec): " & $iColor & ", $iColor (hex): " & Hex($iColor, 6) & @CRLF)
```

giving the expected output:

```
$iColor (dec): 3310546, $iColor (hex): 3283D2
```
But if I want to take an average of several colors, I have a problem.  For example this doesn't work:
```
$iColor = (3310546 + 3310546) / 2
ConsoleWrite("$iColor (dec): " & $iColor & ", $iColor (hex): " & Hex($iColor, 6) & @CRLF)
```

giving the wrong output:

```
$iColor (dec): 3310546, $iColor (hex): 000000
```
I observe this behavior after any processing of a decimal color.
Is this a bug?
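This looks like the Hex()-on-a-double behaviour discussed at the top of this thread rather than a bug: dividing by 2 turns the value into a double, and Hex() then no longer formats it as an integer. Converting back to an integer before formatting restores the expected result; a sketch of the fix in Python:

```python
colors = [0x3283D2, 0x3283D2]

# Averaging produces a float; coerce back to int before hex-formatting,
# the equivalent of wrapping the AutoIt expression in Int().
avg = int(sum(colors) / len(colors))

print(format(avg, '06X'))   # 3283D2
```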

• Hey, all.
I've been looking for a way to change the cursor colour but not the cursor itself. I've been looking for a couple of hours now and can't find anything. I don't even know where to start; if anyone has any tips or examples, please comment them.
Thanks
• By lbsl

I'm trying to figure out a method to translate one or more USHORT values into one large bitfield, and I am slowly getting a headache working out an effective way to perform this.

In general
The bitfield tells my routine which column entries are *not* present in a record of the database structure I am reading (the so-called NULL values).
This also means that the byte block belonging to that field is not present within the file. This is a very simple form of low-level database compression.
I have several different database files, and they contain different amounts of columns with various mixed data-type forms.

The logic of the bit range is that for each group of columns one bitfield is stuffed into a USHORT; once the boundaries of the lower group are reached, the bitfield continues in the next USHORT value.
The USHORT blocks are stored in low-to-high order inside the file and have to be swapped to get the correct binary representation.

Below snippet displays the details of the structure, starting with the (in this case) three USHORT bitfields.
The next range of figures is the binary representation of these bitfields *when* shifted (i have done this all manually) and combined in the proper order.
Some of the bit ranges are between parentheses: when a column represents a char range, the database structure has two types of bit definitions: odd (01) for numeric fields and even (10) for char fields.

```
bit1 bit2 bit3   bit-range representation of bitfields 3-1       Col-Title column-type
0100 0000 0000   00000000000000000000000000000000000000000001    Column1   (DWORD)
0800 0000 0000  (00000000000000000000000000000000000000001000)   Column2   (CHAR)
2000 0000 0000  (00000000000000000000000000000000000000100000)   Column3   (CHAR)
8000 0000 0000  (00000000000000000000000000000000000010000000)   Column4   (CHAR)
0002 0000 0000  (00000000000000000000000000000000001000000000)   Column5   (CHAR)
0004 0000 0000   00000000000000000000000000000000010000000000    Column6   (DWORD)
0010 0000 0000   00000000000000000000000000000001000000000000    Column7   (DWORD)
0040 0000 0000   00000000000000000000000000000100000000000000    Column8   (DWORD)
0000 0100 0000   00000000000000000000000000010000000000000000    Column9   (DWORD)
0000 0400 0000   00000000000000000000000001000000000000000000    Columna   (DWORD)
0000 1000 0000   00000000000000000000000100000000000000000000    Columnb   (BOOLEAN)
0000 4000 0000   00000000000000000000010000000000000000000000    Columnc   (BOOLEAN)
0000 0001 0000   00000000000000000001000000000000000000000000    Columnd   (BOOLEAN)
0000 0004 0000   00000000000000000100000000000000000000000000    Columne   (BOOLEAN)
0000 0010 0000   00000000000000010000000000000000000000000000    Columnf   (BOOLEAN)
0000 0040 0000   00000000000001000000000000000000000000000000    Column10  (DWORD)
0000 0000 0100   00000000000100000000000000000000000000000000    Column11  (DWORD)
0000 0000 0800  (00000000100000000000000000000000000000000000)   Column12  (CHAR)
0000 0000 1000   00000001000000000000000000000000000000000000    Column13  (DWORD)
0000 0000 8000  (00001000000000000000000000000000000000000000)   Column14  (CHAR)
0000 0000 0002   00100000000000000000000000000000000000000000    Column15  (CHAR)
```

Each database does have a fixed column definition setting, but the order and field types (char/numeric) within that table can vary per database file.

So, to figure out which field is or isn't present without necessarily knowing whether it is a char or numeric type, I am using a BitAND() construction with bit pairs of "11", i.e. masks in the order 0x03, 0x0C, 0x30, 0xC0, 0x300, 0xC00, etc.
Regardless of the field type, I would then always get a difference if the field is marked as "null".
My issue
I do know my way around reading 16-bit binaries and getting past the "no consecutive 32-bit datablocks" issue when using WINAPI_ReadFile(), but as for transforming multiple USHORT values into one large bitfield using bit-shifting methods, I could use a crash course here, since depending on the column number I have to construct either a USHORT, a ULONG, or even a 64-bit hex value...
I know WINAPI_ReadFile() performs the endian correction for the USHORT value, but when I have multiple of these pairs I would need to perform multiplications (with the risk of crashing the program by exceeding a variable type's value limit) to add these numbers up, and I have the feeling that bit-shifting works faster and more effectively, but...
What is the most effective way to combine these USHORT values into one ULONG/SYSTEMLong value?
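One way to sketch this is in Python rather than AutoIt, since Python integers are arbitrary-precision, so there is no 64-bit overflow to worry about: read the USHORTs as little-endian 16-bit values, OR each one into the combined bitfield shifted by 16 bits per group, then test null columns with the paired "11" masks described above. The byte values used here are made-up examples, not the real database layout:

```python
import struct

def combine_ushorts(raw):
    # raw: bytes holding little-endian USHORTs, lowest group first.
    ushorts = struct.unpack('<%dH' % (len(raw) // 2), raw)
    combined = 0
    for i, u in enumerate(ushorts):
        combined |= u << (16 * i)   # group n occupies bits 16n..16n+15
    return combined

def column_is_null(bitfield, column_index):
    # Each column owns a pair of bits; a "11" mask matches the pair
    # whether the column is char (10) or numeric (01).
    mask = 0x3 << (2 * column_index)
    return (bitfield & mask) != 0

# Three USHORT groups as they might appear in the file (low to high).
raw = b'\x00\x04' + b'\x01\x00' + b'\x00\x02'
field = combine_ushorts(raw)
print(hex(field))   # 0x20000010400
```

Whether the per-USHORT byte swap is needed depends on how the file was written; `struct.unpack` lets you pick either order with `<` or `>`.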

• By rootx
I need help to read a file in hex mode, seek to an offset, and then read the value. Thanks!
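This is a plain binary read plus a seek; a minimal sketch in Python, where the file name, contents, and offset are placeholder values:

```python
# Create a small demo file so the example is self-contained.
with open("example.bin", "wb") as f:
    f.write(bytes(range(16)))

# Open in binary mode, jump to the offset, and read one byte.
with open("example.bin", "rb") as f:
    f.seek(4)
    value = f.read(1)[0]

print(format(value, '02X'))   # 04
```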
