Add back character sets that had characters outside 16 bit plane #1964
base: master
Conversation
Update title line
Based on @hjellinek's suggestion, I did some timing tests comparing the original table format with formats that keep the same top-level array but use either a hash table or a digital search for the second level. The hash and the digital search were both better than what I had before, so I simplified the code to the hash. (I might eventually go to the digital search, but I would first have to move my MULTI-ALIST macros over to Lispusers.)

I also added a new format, :UTF-8-SLUG, just like LUTF-8 except that its OUTCHARFN produces the Unicode slug for codes whose mappings are not found in the table files. And there are new functions XTOUCODE? and UTOXCODE? that return the corresponding mapping for codes in the table files, NIL otherwise. If multiple XCCS codes map to the same Unicode, the normal UNICODE.TRANSLATE (and XTOUCODE) will return the lowest Unicode; XTOUCODE? will return the list, and the caller has to decide what to do. Alternatives in the inverse direction behave in the same way.

Note that callers of UNICODE.TRANSLATE must be recompiled.

Please test this functionality/interface. I also hope that the previously reported performance issues have been fixed.
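The two-level layout described here (a top-level array branch into hash tables of no more than 128 entries, with the plain lookup returning the lowest alternative and the `?` variant returning them all) can be sketched outside of Lisp. The Python below is an illustration of the data structure only, not the Medley implementation; the class and method names are invented.

```python
# Illustrative sketch (not Medley's code) of the two-level table:
# a top-level array indexed by the high bits of a 16-bit code, whose
# entries are small hash tables holding at most 128 codes each.

BUCKET_BITS = 7  # each second-level table covers 128 codes, as described


class CodeMap:
    def __init__(self):
        # 512 top-level slots for the 16-bit code space
        self.buckets = [None] * (1 << (16 - BUCKET_BITS))

    def add(self, code, target):
        b = self.buckets[code >> BUCKET_BITS]
        if b is None:
            b = self.buckets[code >> BUCKET_BITS] = {}
        b.setdefault(code & ((1 << BUCKET_BITS) - 1), []).append(target)

    def lookup_all(self, code):
        """Like XTOUCODE?/UTOXCODE?: every known mapping, or None."""
        b = self.buckets[code >> BUCKET_BITS]
        if b is None:
            return None
        return b.get(code & ((1 << BUCKET_BITS) - 1))

    def lookup(self, code):
        """Like plain XTOUCODE: the lowest target when several exist."""
        targets = self.lookup_all(code)
        return min(targets) if targets else None
```

For example, if two XCCS codes were registered for the same Unicode value in the inverse direction, `lookup_all` would return both while `lookup` would pick the lowest.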
I did some timing using only a single global hash array for all the characters; that is at least as fast, maybe faster, than doing an initial array branch into smaller hash arrays. And simpler still.
Thanks, @rmkaplan, for the new functionality. The increased speed is a bonus. I'm glad my suggestion worked out so well.
I did some more careful speed testing with mapping tables that contained all of the X-to-U pairs, not just the common ones, and with lookups of all of the possible codes, not just charset 0. The single hash was a significant loser with the much larger mappings, by a factor of 6. So I reverted to a top-level branch to hash arrays that contain no more than 128 characters. The multi-alist is slightly better than the multi-hash for a 512-way branching array, and significantly better (~25%) with a 1024-way branch. But I'll stick with the hash for now.
I reworked the UNICODE.TRANSLATE macro so that it could be shared by XTOUCODE and XTOUCODE? etc. It should not be called directly by functions outside of UNICODE, to avoid dependencies on internal structures. Use the XTOUCODE etc. function interface.
I'm testing it now. I did a spot test with Runic. XCCS defines characters in several Runic variants, and, as I just learned with the help of the new APIs, Unicode seems to define characters in only a single Runic script.

I guessed that there's an invariant such that, given an open output stream STREAM with format set to :UTF-8-SLUG, for all X such that (XTOUCODE? X) returns NIL, (\OUTCHAR STREAM X) should write the Unicode slug, REPLACEMENT CHARACTER U+FFFD (�), to STREAM. However, instead of U+FFFD I see U+E000, which is the initial codepoint of the Unicode private use area. Does this mean that the :UTF-8-SLUG format is acting like the :UTF-8 format, adding to the unmapped character table instead of outputting slugs?

Here's a screenshot from Chrome:
Is this the right logic? If an XCODE doesn't have a true (unfaked) mapping, then call the user OUTCHARFN giving it the slug code, but forcing the RAW flag to T. That suppresses the call to UNICODE.TRANSLATE:
(LET ((UCODE (XTOUCODE? XCCSCODE)))
(CL:IF UCODE
(UTF8.OUTCHARFN STREAM UCODE RAW)
(UTF8.OUTCHARFN STREAM (CONSTANT (HEXNUM? "FFFD")) T)))
(Do you also want a separate raw-slug format, where the caller passes RAW=T? That would just convert the given code to utf-8 bytes without ever trying to map it.)
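The slug rule in the Lisp snippet above (translate when a real mapping exists, otherwise emit REPLACEMENT CHARACTER U+FFFD) can be restated as a tiny sketch. The Python below is an analogy only, not the Medley implementation; the function name and the dict standing in for XTOUCODE? are invented.

```python
# Hedged sketch of the :UTF-8-SLUG output rule discussed above:
# codes with a real mapping are translated; anything unmapped is
# written as REPLACEMENT CHARACTER U+FFFD, never as a private-use fake.

REPLACEMENT = 0xFFFD


def slug_outchar(xccs_to_unicode, xcode):
    """Return the UTF-8 bytes the slug format should emit for xcode."""
    ucode = xccs_to_unicode.get(xcode)  # analogue of XTOUCODE?
    if ucode is None:
        ucode = REPLACEMENT             # the slug path, RAW forced
    return chr(ucode).encode("utf-8")
```

An unmapped code thus produces the three bytes EF BF BD, the UTF-8 encoding of U+FFFD, which is what the invariant above expects to see on the stream.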
Yes, that's the right logic. (I didn't know about HEXNUM?, which could make some of my code a lot easier to read.)

Hmm, a separate raw-slug format is tempting. What would that look like from the client point of view? At the moment OPENHTMLSTREAM opens an underlying output stream (BACKING) with FORMAT :UTF-8-SLUG, and \HTML.OUTCHARFN calls plain old \OUTCHAR to write to BACKING.

How would I need to change my code to work as you describe? Would I open the BACKING stream with a different FORMAT and have my OUTCHARFN call some alternative to \OUTCHAR, like the UTF8.OUTCHARFN you mentioned above?
I now remember why I set up the raw formats: I was anticipating an improbable future in which we have switched the internal encoding from XCCS to Unicode. There would then be no code translation either in or out, just conversion to and from the proper sequence of file bytes. On that view, there should probably also be a raw slug format.
I noticed something funny when testing XCCS charset 0xEB, General and Technical Symbols.

My test code writes a single XCCS character to the UTF-8 backing stream and expects to see its Unicode equivalent or the REPLACEMENT CHARACTER, 0xFFFD. And that's the case with all of my other tests until this one.

Writing any of the characters in the range 0xEB21 - 0xEB2B outputs a duplicate Unicode character, except for 0xEB2A:

Code 0xEB21 = ℙℙ
Code 0xEB22 = ℋℋ
Code 0xEB23 = ℐℐ
Code 0xEB24 = ≋≋
Code 0xEB25 = ⊜⊜
Code 0xEB26 = ℇℇ
Code 0xEB27 = ̲̲
Code 0xEB28 = ‽‽
Code 0xEB29 = ⌘⌘
Code 0xEB2A = �
Code 0xEB2B = ℌℌ

Interestingly, XTOUCODE? returns duplicate results for all of these but 0xEB2A:

_ (FOR X FROM #xEB21 TO #xEB2B DO (CL:FORMAT T "XCCS 0x~4,,'0x = ~A~%" X (XTOUCODE? X)))
XCCS 0xEB21 = (8473 8473)
XCCS 0xEB22 = (8459 8459)
XCCS 0xEB23 = (8464 8464)
XCCS 0xEB24 = (8779 8779)
XCCS 0xEB25 = (8860 8860)
XCCS 0xEB26 = (8455 8455)
XCCS 0xEB27 = (818 818)
XCCS 0xEB28 = (8253 8253)
XCCS 0xEB29 = (8984 8984)
XCCS 0xEB2A = NIL
XCCS 0xEB2B = (8460 8460)
I noticed the same thing partway through a sample of charset 238:
I also discovered this last night: there was a missing remove-duplicates on the file names when it was building the UNICODE.MAPPINGS file. I'm putting a duplicate check in the actual code-insert loop.
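The fix described here, deduplicating the mapping-file list and guarding the insert loop, can be sketched as follows. This Python is an illustration under assumed inputs, not the Medley code; the file names and the (XCCS, Unicode) pair format are invented for the example.

```python
# Sketch of the two guards described above: skip a mapping file that
# has already been processed (the missing remove-duplicates), and skip
# a pair that is already in the table (the check in the insert loop).


def build_table(files_with_pairs):
    """files_with_pairs: list of (filename, [(xcode, ucode), ...])."""
    table = {}
    seen_files = set()
    for fname, pairs in files_with_pairs:
        if fname in seen_files:          # same file listed twice
            continue
        seen_files.add(fname)
        for xcode, ucode in pairs:
            targets = table.setdefault(xcode, [])
            if ucode not in targets:     # duplicate-pair check
                targets.append(ucode)
    return table
```

Without either guard, processing the same file twice would yield exactly the doubled entries like (8473 8473) reported above.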
> In the mapping tables there is a correspondence between X code x2336 (= 9014) and U code x0306 (= 774). Maybe there is a confusion between hex and octal?

Numbers are numbers internally. My code is just displaying in octal; radix doesn't matter.

I just rebuilt the loadups again for rmk55. I entered (UTOXCODE? 198) and got 225. Likewise for 186 I got 235. Then (in the XCL Exec) I ran:

(PROGN (DRIBBLE "CSets2-rmk55.txt") (SETQ CSETS2 (U-TO-XCHARSET2 T)) (DRIBBLE))

After that I repeated (UTOXCODE? 198) and got (774); (UTOXCODE? 186) got (8221). Notice that this time the singleton values were returned in lists (of 1 item)!

So it appears that the Unicode-to-XCCS table is getting corrupted pretty early when probing all 16-bit Unicode values. (Or the mapping files have errors that somehow clobber the initial state of the table.)
In a simple test (collect all the values for ucodes from 0 to 255 twice and compare the differences), it looks like a small number of values are showing up as (CONS X) instead of just X the second time. But the actual code numbers are the same.
That's not what I'm seeing.

(DEFUN U-TO-XCHARSET3 (OUTFILEPATH &AUX XCODE)     (IL:* IL:\; "Edited 25-Jan-2025 20:04 by mth")
   (WITH-OPEN-STREAM (OUT (OPEN OUTFILEPATH :DIRECTION :OUTPUT :IF-EXISTS :NEW-VERSION))
      (LOOP :FOR UC :FROM 0 :TO 255 :NCONC
            (UNLESS (NULL (SETQ XCODE (IL:UTOXCODE? UC)))
               (SETQ XCODE (IL:MKLIST XCODE))
               (LOOP :FOR XC :IN XCODE :COLLECT
                     (PROGN (FORMAT OUT "Unicode: U+~4,'0X (~D) => #x~4,'0X (~D)~%" UC UC XC XC)
                            (CONS UC XC)))))))

Called as:
So, I wrote this to find which Unicode value passed to UTOXCODE? makes the test start failing:

(DEFUN U-TO-XCHARSET4 (TEST-PAIRS &AUX XCODE FAILED)     (IL:* IL:\; "Edited 25-Jan-2025 21:02 by mth")
   (LOOP :FOR UC :FROM 0 :TO 255 :DO
         (UNLESS (NULL (SETQ XCODE (IL:UTOXCODE? UC)))
            (UNLESS (OR FAILED
                        (LOOP :FOR TP :IN TEST-PAIRS :ALWAYS
                              (EQUAL (IL:UTOXCODE? (CAR TP)) (CDR TP))))
               (FORMAT T "Test fails after probing Unicode: U+~4,'0X (~D)~%" UC UC)
               (SETQ FAILED T))
            (SETQ XCODE (IL:MKLIST XCODE))
            (LOOP :FOR XC :IN XCODE :COLLECT
                  (PROGN (FORMAT T "Unicode: U+~4,'0X (~D) => #x~4,'0X (~D)~%" UC UC XC XC)
                         (CONS UC XC))))))

And called it as:
Could we be seeing unhandled/unexpected hash table collisions?
Below is my simple test function. It returns a list of 132 mismatches, basically one each for the ASCII codes plus a few others. Most of the discrepancies have the same values, except that a CONS is returned on the second pass. But for a few, a value got added during the first pass that showed up on the second.

I have an inkling of some of what's going on, but I have to look further. The tables are initialized with a default collection of XCCS character sets with their mappings to Unicode. So in character set 0, XCCS code x0063 (= 99 = c) maps to 99, as would be expected for ASCII. And Unicode 99 maps back to XCCS 99, if only character set 0 is involved. However, XCCS character xE2D6 (= 58072, in character set 343, IPA) also has x0063 as its corresponding Unicode. But XCCS character set 343 isn't loaded in the initial set, and it's only when you have later asked for characters in 343 that that mapping gets installed. After that, when you ask for the XCCS codes corresponding to Unicode x63 (99), you get both 99 and 58072.

So I think that the problem of seeing one value first and two values later is because the initial inverted mapping is still not correct. I don't yet know why the other ASCII entries get an extra CONS the second time.

(LAMBDA NIL
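The on-demand behavior described in that paragraph, where the inverse table gains entries as further character sets load, can be sketched in a few lines. The Python below is an analogy, not the Medley code; the function names are invented, and the IPA pair mirrors the x0063/xE2D6 example above.

```python
# Sketch (not Medley's code) of the incremental inverse table described
# above: the Unicode -> XCCS mapping grows as character sets are loaded
# on demand, so the same probe can answer differently before and after.

inverse = {}


def install_charset(mappings):
    """Install (XCCS, Unicode) pairs, extending the inverse table."""
    for xcode, ucode in mappings:
        inverse.setdefault(ucode, []).append(xcode)


def utoxcode_all(ucode):
    """Every XCCS code currently known for ucode, or None."""
    return inverse.get(ucode)
```

After installing only charset 0, a probe of Unicode x63 sees one XCCS code; once the IPA set is installed, the same probe sees two, which is exactly the one-value-then-two-values symptom being chased here.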
Last week I wrote a short function that goes in the other direction, calling UTOXCODE? on every 16-bit value, and noticed some oddities in its results. I thought nothing of it at the time - I just changed my code to adapt - but reading @MattHeffron's findings makes me wonder whether I was seeing the same table corruption.
It should now be the case that XTOUCODE and UTOXCODE always return SMALLP characters (possibly faked), and XTOUCODE? and UTOXCODE? return SMALLPs for singletons, lists for alternatives, and NIL if nonexistent.

I still haven't worked out the back-and-forth logic for keeping tables in both directions complete and consistent under incremental on-demand updates. So this version creates the tables on loadup for all possible character sets (including Japanese) instead of the much smaller number of default sets. Instead of hash arrays of size about 1.5K, they are about 12K, most of which would never be used. But I hope this now gives the behavior you expect.
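The return convention stated here (a bare SMALLP for a unique mapping, a list for alternatives, NIL when absent) is easy to restate as a sketch. The Python below is an illustration of the contract only, not the Medley code; `query` and the dict-of-lists table are invented stand-ins.

```python
# Sketch of the stated contract for XTOUCODE?/UTOXCODE?: a bare value
# for a singleton (never a one-element list), a list when there are
# alternatives, and None when no mapping exists.


def query(table, code):
    targets = table.get(code)
    if targets is None:
        return None            # NIL: no mapping
    if len(targets) == 1:
        return targets[0]      # singleton: bare value, never (X)
    return list(targets)       # alternatives: caller decides
```

The earlier symptom of getting (774) instead of 774 was precisely a singleton leaking out as a one-element list, which this convention forbids.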
I pulled the latest changes to this PR and built a new loadup. I performed two tests.

(1) I wrote a quickie function that applies XTOUCODE? to every possible 16-bit integer and records the charsets of the valid XCCS codes. It returns a list of 105 character sets:

(0 33 34 35 36 37 38 39 40 41 42 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71
72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106
107 108 109 110 111 112 113 114 115 116 117 118 224 225 226 227 228 229 230 231 232 235 236 237 238 239 240 241
242 243 244 245 253)

(2) I regenerated my static JavaScript XCCS-to-Unicode mapping table. It's byte-for-byte identical to the one I created prior to pulling the latest changes.
105 is correct. I had an extra file name because I didn't ask for only the highest version. I'll fix that and update again.
A minor correction to my previous comment:

(2) I regenerated my static JavaScript XCCS-to-Unicode mapping table and found a small block of changes in the Unicode characters that correspond to the XCCS codes from 0x7521 through 0x7526. (t is a table that maps XCCS codes to Unicode.)

Previous:

// t[XCCS] = Unicode;
t[0x7521] = 0x5B57;
t[0x7522] = 0x69C7;
t[0x7523] = 0x9059;
t[0x7524] = 0x7464;
t[0x7525] = 0x8655;
t[0x7526] = 0x76F8;

Now:

// t[XCCS] = Unicode;
t[0x7521] = 0x582F;
t[0x7522] = 0x600E;
t[0x7523] = 0x5FEB;
t[0x7524] = 0x5E2B;
t[0x7525] = 0x51DC;
t[0x7526] = 0x7199;
Interesting.
It turns out that those XCCS codes appear in two Japanese character sets, 164 and 165, with different corresponding Unicodes.

Hard to say whether that is an error in the tables (in which case, which is correct?) or whether the claim that the tables in the X-to-U direction are functional is false (in which case that assumption should be removed from the code, so the lookup would give you a list of two alternative Unicodes).

We would need more Japanese expertise to figure this out. In the meantime, I think whatever mapping the current code picks out is good enough.
"I think whatever mapping the current code picks out is good enough." I agree.

It would be cool to be able to ask the creators of XCCS questions like these. The evolution of XCCS into Unicode is a chunk of CS history that may be in danger of being lost.
I poked around a bit more. I think that the table we have for charset 164 is goofy, because it includes mappings whose XCCS codes are actually in 165. That can’t possibly be correct.
I don’t remember the provenance of all the Japanese tables, Peter Cravens filled in a lot in the last round.
I can probably clean up some of this by going back and forth between the images in the XCCS document and the images in my big Unicode book. Maybe there was just a wholesale translation. But for sure the 165 mappings don’t belong in 164.
This goes back to the non-SMALLP Gothic characters that kicked off these issues. Previously I had suppressed the READ-LINE error of trying to deal with large characters by commenting out those lines, so that they were skipped without reading.

I now realize that the reading problem was coming from the fact that the UTF-8 encodings of the actual Unicode characters appeared as comments on the mapping lines. This was done to make the characters interpretable as their glyphs in an out-of-Medley Unicode editor. But there is no reason for Medley to interpret the encoding of those characters when building the mapping tables, and all of the other mapping information is in simple ASCII. So I changed it so that the mapping files are opened with external format :THROUGH instead of :UTF-8, and the character bytes are simply ignored.

I have uncommented the lines that I previously commented out, so the tables are complete again. Codes bigger than 65535 are now ignored when the internal tables are built, just as the combining-character mappings are ignored at that point.

(Note a remaining separate problem: the mappings in Japanese character sets 164 and 165 only partially match what appears in our XCCS standard document.)
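The loading rule described here, reading the file as raw bytes, parsing only the ASCII fields, ignoring the UTF-8 glyph comment, and dropping codes outside the 16-bit plane, can be sketched as follows. This Python is an analogy for the :THROUGH idea, not the Medley reader; the "XCCS UNICODE ; glyph" line layout is an assumption for illustration (the #x29E1 to U+10330 Gothic pair is taken from the discussion below).

```python
# Sketch of the mapping-file loading rule described above: treat each
# line as raw bytes, use only the ASCII mapping fields, ignore the
# UTF-8-encoded glyph comment, and drop codes above the 16-bit plane.


def parse_mapping_line(raw: bytes):
    """Return (xcode, ucode) for a usable line, else None."""
    ascii_part = raw.split(b";")[0]   # glyph comment bytes are ignored
    fields = ascii_part.split()
    if len(fields) < 2:
        return None                   # blank or comment-only line
    xcode, ucode = (int(f, 16) for f in fields[:2])
    if ucode > 0xFFFF:                # outside the 16-bit plane
        return None
    return (xcode, ucode)
```

A Gothic line such as `29E1 10330 ; <glyph bytes>` is thus read without error but contributes nothing to the internal tables, while ordinary 16-bit mappings pass through.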
I pulled these changes and regenerated my static XCCS → Unicode table. Puzzlingly, git diff doesn't see any difference between the latest table and the previous version. I could have screwed something up, but I can't think what.
Look at the mapping file for character set 51 (Rune/Gothic). There should be no commented lines at the bottom now, but there were before.
OK, I see them there. It's possible my code only handles 16-bit Unicode values. I'll have to investigate.
What does XTOUCODE? return when you run it?

The table says (XTOUCODE? #x29E1) should return U+10330, but it returns NIL for me. Likewise 0x29FC returns NIL. I didn't test the values in between.
On this round, the non-smallp codes are being suppressed further along in the pipeline. The input data is now complete again (including the UNICODE and INVERSE-UNICODE mapping files), but the non-smallp codes are not being inserted into the internal tables. So code at your level won’t see them.
I'm a little timid about this: something might break if there is an attempt to create a string or do EQ testing on these large Unicode codes.
But I could be more aggressive and remove that check, leaving it to the future to discover and fix any problems.
This is the current behavior—as noted, I read those mappings from the files but then suppress them. Should I throw them back in?
No, it's not necessary for what I'm doing, and it would cause me additional downstream work as well: for instance, I have some CL:FORMAT calls that assume some values will fit in 4 hex digits. I just wanted to make sure your commit got tested.

I have some horrendous font metrics regression I need to figure out before I do anything else.
On reflection, the EQ testing would (mostly) work, since the same FIXP would be retrieved on each call.
If I relax this, then we could create mapping tables for emojis, basically allocating an unused part of the XCCS space. They would come in and go out, and even display if we had a font for them.
This is x1F600:
😀
If one knows it might be FIXP but not SMALLP, shouldn't one be using EQP instead of EQ?
Well, the problem is that one didn't know that, and there is code all over the place that assumes you can check NTHCHARCODE with EQ and MEMB. Existing code probably wasn't meant to work on emojis or Gothic, so maybe no one would notice.
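The EQ-versus-EQP pitfall being discussed has a close analogue in other languages with boxed numbers. The Python sketch below uses CPython's small-integer interning to mirror it; the function names are invented, and this is an analogy rather than anything in Medley.

```python
# Python analogue of the EQ-vs-EQP distinction above: identity
# comparison is only reliable for small interned integers (like
# SMALLPs); large boxed values need value equality (like EQP).


def chars_identical_unsafe(a, b):
    return a is b   # like EQ: can fail for large code points


def chars_identical(a, b):
    return a == b   # like EQP: compares the numeric value
```

In CPython, small integers are interned, so `65 is 65` holds, but two separately constructed copies of a large code point such as 0x1F600 are distinct objects, which is exactly why EQ-style checks on non-SMALLP character codes are fragile.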
In its present state, this is working correctly for me (i.e., with the full table loaded).
Some of the mappings had Unicodes outside of 16 bits; those character sets had been excluded before.
Now the character sets are included, but those particular lines in the mapping file (e.g. the Gothic characters in the Runic-Gothic character set) have been commented out, so that the other characters can be included.