Jon

Need a few text files with unicode characters

12 posts in this topic

Hi,

I'm just adding support for UTF-8 files with no BOM, but I need some example UTF-8 files that contain characters outside the ASCII range (0-127); otherwise the code just assumes they are ASCII. If someone has a few text files with those sorts of characters in them, save them as UTF-8 with BOM and attach them here.

For reasons I'm not quite sure of, it pleases me to be able to work with all the different sorts of text files and encodings and to have unicode support. Weird ;)


> For reasons I'm not quite sure of, it pleases me to be able to work with all the different sorts of text files and encodings and to have unicode support. Weird ;)

You're certainly getting ambitious with the code, introducing auto-detection logic into it.

> You're certainly getting ambitious with the code, introducing auto-detection logic into it.

rawr.

Just doing detection for UTF-8 with no BOM. Every other editor I've looked at expects a BOM to indicate the UTF-16 types, so that's what I'm doing as well. Apparently I could try using IsTextUnicode() to detect BOM-less UTF-16 formats, but it's meant to be an epic-fail function anyway, and that's a little-used file format.
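For what it's worth, BOM-less UTF-8 detection usually amounts to validating the byte stream: if every multi-byte sequence is well-formed UTF-8 and at least one byte is above 0x7F, the file is almost certainly UTF-8, because random single-byte text rarely forms valid continuation sequences. A minimal sketch (not the editor's actual code; the function name is invented for illustration):

```cpp
#include <cstddef>

// Returns true if buf[0..len) is well-formed UTF-8 (pure ASCII also passes).
// Simplified: does not reject every overlong 3-/4-byte form or surrogate
// code point, but it is enough to tell UTF-8 apart from Latin-1 style text.
bool IsValidUtf8(const unsigned char* buf, size_t len)
{
    size_t i = 0;
    while (i < len) {
        unsigned char c = buf[i];
        size_t extra;                               // continuation bytes expected
        if (c < 0x80)      { i++; continue; }       // ASCII byte
        else if (c < 0xC2) { return false; }        // stray continuation or overlong lead
        else if (c < 0xE0) { extra = 1; }           // 2-byte sequence
        else if (c < 0xF0) { extra = 2; }           // 3-byte sequence
        else if (c < 0xF5) { extra = 3; }           // 4-byte sequence
        else               { return false; }        // would encode past U+10FFFF
        if (i + extra >= len) { return false; }     // truncated sequence at end of buffer
        for (size_t j = 1; j <= extra; j++)
            if ((buf[i + j] & 0xC0) != 0x80) return false;  // not a continuation byte
        i += extra + 1;
    }
    return true;
}
```

Text that passes this check and contains at least one non-ASCII byte can be treated as UTF-8; pure ASCII also passes, which matches the "assume ASCII" fallback above.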

> For reasons I'm not quite sure of, it pleases me to be able to work with all the different sorts of text files and encodings and to have unicode support. Weird ;)

That's kind of like an encoding fetish, Jon.

> Just doing detection for UTF-8 with no BOM. Every other editor I've looked at expects a BOM to indicate the UTF-16 types, so that's what I'm doing as well. Apparently I could try using IsTextUnicode() to detect BOM-less UTF-16 formats, but it's meant to be an epic-fail function anyway, and that's a little-used file format.

So they suggest using a function that will do what it's supposed to do, but also fails epically?

#10 ·  Posted (edited)

I know this topic is 4 years old, but I was playing with C++ and I wanted to add that AutoIt does not interpret Unicode files without a BOM correctly. For an example text file ("a最", binary = 61 00 00 67), I think the correct use of IsTextUnicode should be:

#include <windows.h>

const int ANSI = 0, UTF_8 = 1, UTF16_LE = 2, UTF16_BE = 3, UTF32_LE = 4, UTF32_BE = 5, SCSU = 6, UNKNOWN_ENCODING = 7;

int GetFileEncodingEx(LPWSTR pzSourceFile) {
    BYTE pBuffer[4]; DWORD dwBytesRead;
    //  Open the input file for reading, existing file only.
    HANDLE hFile = CreateFileW(pzSourceFile, GENERIC_READ, FILE_SHARE_READ, NULL, OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);
    if (hFile == INVALID_HANDLE_VALUE) {
        if (GetLastError() != ERROR_SHARING_VIOLATION) { return 0; }
        //  Retry with FILE_SHARE_READ | FILE_SHARE_WRITE to avoid the ERROR_SHARING_VIOLATION.
        hFile = CreateFileW(pzSourceFile, GENERIC_READ, FILE_SHARE_READ | FILE_SHARE_WRITE, NULL, OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);
        if (hFile == INVALID_HANDLE_VALUE) { return 0; }
    }
    //  Check the first four bytes for a byte-order mark.
    if (!ReadFile(hFile, pBuffer, 4, &dwBytesRead, NULL)) { CloseHandle(hFile); return 0; }
    if (dwBytesRead < 2) { CloseHandle(hFile); return ANSI; }
    else if (*pBuffer == 0xFF) {
        if (pBuffer[1] == 0xFE) {
            if (dwBytesRead == 4 && pBuffer[2] == 0x00 && pBuffer[3] == 0x00) { CloseHandle(hFile); return UTF32_LE; }
            CloseHandle(hFile); return UTF16_LE;
        }
    }
    else if (*pBuffer == 0xFE) {
        if (pBuffer[1] == 0xFF) { CloseHandle(hFile); return UTF16_BE; }
    }
    else if (dwBytesRead > 2) {
        if (*pBuffer == 0xEF) {
            if (pBuffer[1] == 0xBB && pBuffer[2] == 0xBF) { CloseHandle(hFile); return UTF_8; }
        }
        else if (dwBytesRead == 4 && *pBuffer == 0x00 && pBuffer[1] == 0x00 && pBuffer[2] == 0xFE && pBuffer[3] == 0xFF) { CloseHandle(hFile); return UTF32_BE; }
    }
    //  No BOM found: let IsTextUnicode() guess from the file contents.
    LARGE_INTEGER liFileSize; INT iTextUnicode = IS_TEXT_UNICODE_UNICODE_MASK | IS_TEXT_UNICODE_REVERSE_MASK | IS_TEXT_UNICODE_NOT_ASCII_MASK;
    if (!GetFileSizeEx(hFile, &liFileSize) || liFileSize.QuadPart > 0xFFFFFFFF) { CloseHandle(hFile); return ANSI; /*ERROR*/ }
    SetFilePointer(hFile, 0, NULL, FILE_BEGIN);  //  Rewind so the whole file is analysed, not just the bytes after the BOM probe.
    LPBYTE lpBuffer = new BYTE[liFileSize.LowPart];
    ReadFile(hFile, lpBuffer, liFileSize.LowPart, &dwBytesRead, NULL);
    CloseHandle(hFile);
    IsTextUnicode(lpBuffer, dwBytesRead, &iTextUnicode);
    delete[] lpBuffer;
    if (!iTextUnicode) { return ANSI; }
    if ((iTextUnicode & IS_TEXT_UNICODE_REVERSE_MASK) == 0) { return UTF16_LE; }
    return UTF16_BE;
}

// Or, given an already-open file handle (restores the caller's file position):

int GetFileEncoding(HANDLE hFile) {
    BYTE pBuffer[4]; LONG dwHighPart = 0;
    DWORD dwBytesRead, dwLowPart = SetFilePointer(hFile, 0, &dwHighPart, FILE_CURRENT);
    SetFilePointer(hFile, 0, NULL, FILE_BEGIN);
    if (!ReadFile(hFile, pBuffer, 4, &dwBytesRead, NULL)) { SetFilePointer(hFile, dwLowPart, &dwHighPart, FILE_BEGIN); return ANSI; }
    SetFilePointer(hFile, dwLowPart, &dwHighPart, FILE_BEGIN);
    if (dwBytesRead > 1) {
        if (*pBuffer == 0xFF) {
            if (pBuffer[1] == 0xFE) {
                if (dwBytesRead == 4 && pBuffer[2] == 0x00 && pBuffer[3] == 0x00) { return UTF32_LE; }
                return UTF16_LE;
            }
        }
        else if (*pBuffer == 0xFE) {
            if (pBuffer[1] == 0xFF) { return UTF16_BE; }
        }
        else if (dwBytesRead > 2) {
            if (*pBuffer == 0xEF) {
                if (pBuffer[1] == 0xBB && pBuffer[2] == 0xBF) { return UTF_8; }
            }
            else if (dwBytesRead == 4 && *pBuffer == 0x00 && pBuffer[1] == 0x00 && pBuffer[2] == 0xFE && pBuffer[3] == 0xFF) {
                return UTF32_BE;
            }
        }
        LARGE_INTEGER liFileSize; INT iTextUnicode = IS_TEXT_UNICODE_UNICODE_MASK | IS_TEXT_UNICODE_REVERSE_MASK | IS_TEXT_UNICODE_NOT_ASCII_MASK;
        if (!GetFileSizeEx(hFile, &liFileSize) || liFileSize.QuadPart > 0xFFFFFFFF) { return ANSI; /*ERROR*/ }
        LPBYTE lpBuffer = new BYTE[liFileSize.LowPart];
        ReadFile(hFile, lpBuffer, liFileSize.LowPart, &dwBytesRead, NULL);
        IsTextUnicode(lpBuffer, dwBytesRead, &iTextUnicode);
        delete[] lpBuffer; SetFilePointer(hFile, dwLowPart, &dwHighPart, FILE_BEGIN);
        if (iTextUnicode) {
            if ((iTextUnicode & IS_TEXT_UNICODE_REVERSE_MASK) == 0) { return UTF16_LE; }
            return UTF16_BE;
        }
    }
    return ANSI;
}
Ciao.

Edited by DXRW4E


#11 ·  Posted (edited)

 

> AutoIt does not interpret Unicode files without a BOM correctly

How would you differentiate between "a最" - binary = 61 00 00 67 in UTF16-LE and "愀g" - binary = 61 00 00 67 in UTF16-BE?
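Nothing in the bytes themselves resolves this; both readings are structurally valid UTF-16. A toy helper (the name is invented here, purely for illustration) makes the ambiguity concrete:

```cpp
#include <cstdint>

// Combine two consecutive bytes from a file into one UTF-16 code unit,
// under either byte-order assumption.
uint16_t Utf16Unit(uint8_t first, uint8_t second, bool bigEndian)
{
    return bigEndian ? (uint16_t)((first << 8) | second)   // UTF16-BE
                     : (uint16_t)((second << 8) | first);  // UTF16-LE
}
```

For the bytes 61 00 00 67, little-endian yields U+0061 "a" and U+6700 "最", while big-endian yields U+6100 "愀" and U+0067 "g". Both are plausible text, which is exactly why a content heuristic like IsTextUnicode can only guess.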

I think that's what Jon referred to when he said:

 

> but it's meant to be an epic fail function anyway

Edited by jchd


#12 ·  Posted (edited)

> How would you differentiate between "a最" - binary = 61 00 00 67 in UTF16-LE and "愀g" - binary = 61 00 00 67 in UTF16-BE?

I don't; Microsoft does.

That's the most that can be done in this regard. IsTextUnicode is an official Microsoft function, and it is the best available even if you can read many complaints about it on the web. The notepads and text editors all refer to and use IsTextUnicode, and everything is well explained on its official documentation page. Microsoft itself admits that beyond a certain point nothing is 100% sure, but since the notepads and text editors all use IsTextUnicode in the same way, in general there is some compatibility.

However, nobody uses UTF16-BE, whether on Windows, in C++, or elsewhere (wide characters are UTF16-LE), so in general "Unicode" means UTF16-LE, and UTF16-LE takes precedence:

Local $hFileOpen = FileOpen(@DesktopDir & "\text.txt", 26)
If $hFileOpen = -1 Then
    ;Error
Else
    FileWrite($hFileOpen, Binary("0x61000067"))
    FileClose($hFileOpen)
EndIf
Then open text.txt with notepad2.exe or Windows Notepad.

Ciao.

Edited by DXRW4E


