When I attempt to interrogate the following image, I get a `'charmap' codec can't encode characters in position 102-104: character maps to <undefined>` error.
ComfyUI Error Report
Error Details
Node ID: 3
Node Type: ShowTextForGPT
Exception Type: UnicodeEncodeError
Exception Message: 'charmap' codec can't encode characters in position 102-104: character maps to <undefined>
Stack Trace
File "E:\AI\ComfyUI\execution.py", line 327, in execute
output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
File "E:\AI\ComfyUI\execution.py", line 202, in get_output_data
return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
File "E:\AI\ComfyUI\execution.py", line 168, in _map_node_over_list
process_inputs(input_data_all, 0, input_is_list=input_is_list)
File "E:\AI\ComfyUI\execution.py", line 163, in process_inputs
results.append(getattr(obj, func)(**inputs))
File "E:\AI\ComfyUI\custom_nodes\comfyui-mixlab-nodes\nodes\ChatGPT.py", line 696, in run
file.write(t)
File "C:\Users\Alex\AppData\Local\Programs\Python\Python310\lib\encodings\cp1252.py", line 19, in encode
return codecs.charmap_encode(input,self.errors,encoding_table)[0]
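For context, the failing `file.write(t)` in `ChatGPT.py` goes through Windows' locale default codec (cp1252 here), which cannot represent many characters that interrogation output contains. A minimal sketch of the problem and the usual fix — the sample text and file name are placeholders, not taken from the node's source:

```python
# On Windows, open() defaults to the locale codec (cp1252), which cannot
# encode CJK characters, emoji, and many other code points.
text = "prompt with characters outside cp1252: \u4f60\u597d \U0001f389"

try:
    text.encode("cp1252")          # what file.write(t) effectively does
except UnicodeEncodeError as e:
    print("fails under cp1252:", e)

# Fix: pass an explicit encoding when opening the output file.
with open("output.txt", "w", encoding="utf-8") as file:
    file.write(text)               # succeeds regardless of locale
```

Alternatively, launching ComfyUI with `PYTHONUTF8=1` (or `python -X utf8`) makes UTF-8 the default for all file I/O without patching the node.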
2025-02-02T16:23:54.633730 - TripoSR.available
2025-02-02T16:23:54.643371 - MiniCPMNode.available
2025-02-02T16:23:54.696151 - Scenedetect.available
2025-02-02T16:23:54.725796 - FishSpeech.available False
2025-02-02T16:23:54.772044 - SenseVoice.available
2025-02-02T16:23:55.019823 - Whisper.available False
2025-02-02T16:23:55.032090 - Installing fal-client...
2025-02-02T16:23:59.010054 - FalVideo.available
2025-02-02T16:23:59.011070 -  --------------
2025-02-02T16:23:59.034767 - ------------------------------------------
2025-02-02T16:23:59.035795 - Comfyroll Studio v1.76 : 175 Nodes Loaded
2025-02-02T16:23:59.035795 - ------------------------------------------
2025-02-02T16:23:59.035795 - ** For changes, please see patch notes at https://github.com/Suzie1/ComfyUI_Comfyroll_CustomNodes/blob/main/Patch_Notes.md
2025-02-02T16:23:59.035795 - ** For help, please see the wiki at https://github.com/Suzie1/ComfyUI_Comfyroll_CustomNodes/wiki
2025-02-02T16:23:59.035795 - ------------------------------------------
2025-02-02T16:23:59.041880 - [comfyui_controlnet_aux] | INFO -> Using ckpts path: E:\AI\ComfyUI\custom_nodes\comfyui_controlnet_aux\ckpts
2025-02-02T16:23:59.041880 - [comfyui_controlnet_aux] | INFO -> Using symlinks: False
2025-02-02T16:23:59.042892 - [comfyui_controlnet_aux] | INFO -> Using ort providers: ['CUDAExecutionProvider', 'DirectMLExecutionProvider', 'OpenVINOExecutionProvider', 'ROCMExecutionProvider', 'CPUExecutionProvider', 'CoreMLExecutionProvider']
2025-02-02T16:23:59.069371 - E:\AI\ComfyUI\custom_nodes\comfyui_controlnet_aux\node_wrappers\dwpose.py:26: UserWarning: DWPose: Onnxruntime not found or doesn't come with acceleration providers, switch to OpenCV with CPU device. DWPose might run very slowly
warnings.warn("DWPose: Onnxruntime not found or doesn't come with acceleration providers, switch to OpenCV with CPU device. DWPose might run very slowly")
2025-02-02T16:23:59.109155 - Traceback (most recent call last):
File "E:\AI\ComfyUI\nodes.py", line 2110, in load_custom_node
module_spec.loader.exec_module(module)
File "<frozen importlib._bootstrap_external>", line 883, in exec_module
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "E:\AI\ComfyUI\custom_nodes\comfyui_essentials\__init__.py", line 2, in <module>
from .image import IMAGE_CLASS_MAPPINGS, IMAGE_NAME_MAPPINGS
File "E:\AI\ComfyUI\custom_nodes\comfyui_essentials\image.py", line 8, in <module>
import kornia
File "C:\Users\Alex\AppData\Local\Programs\Python\Python310\lib\site-packages\kornia\__init__.py", line 8, in <module>
from . import augmentation, color, contrib, core, enhance, feature, io, losses, metrics, morphology, tracking, utils, x
File "C:\Users\Alex\AppData\Local\Programs\Python\Python310\lib\site-packages\kornia\augmentation\__init__.py", line 3, in <module>
from kornia.augmentation._2d import (
File "C:\Users\Alex\AppData\Local\Programs\Python\Python310\lib\site-packages\kornia\augmentation\_2d\__init__.py", line 2, in <module>
from kornia.augmentation._2d.intensity import *
File "C:\Users\Alex\AppData\Local\Programs\Python\Python310\lib\site-packages\kornia\augmentation\_2d\intensity\__init__.py", line 22, in <module>
from kornia.augmentation._2d.intensity.plasma import RandomPlasmaBrightness, RandomPlasmaContrast, RandomPlasmaShadow
File "C:\Users\Alex\AppData\Local\Programs\Python\Python310\lib\site-packages\kornia\augmentation\_2d\intensity\plasma.py", line 5, in <module>
from kornia.contrib import diamond_square
File "C:\Users\Alex\AppData\Local\Programs\Python\Python310\lib\site-packages\kornia\contrib\__init__.py", line 15, in <module>
from .image_stitching import ImageStitcher
File "C:\Users\Alex\AppData\Local\Programs\Python\Python310\lib\site-packages\kornia\contrib\image_stitching.py", line 7, in <module>
from kornia.feature import LocalFeatureMatcher, LoFTR
File "C:\Users\Alex\AppData\Local\Programs\Python\Python310\lib\site-packages\kornia\feature\__init__.py", line 6, in <module>
from .integrated import (
File "C:\Users\Alex\AppData\Local\Programs\Python\Python310\lib\site-packages\kornia\feature\integrated.py", line 16, in <module>
from .lightglue import LightGlue
File "C:\Users\Alex\AppData\Local\Programs\Python\Python310\lib\site-packages\kornia\feature\lightglue.py", line 30, in <module>
from flash_attn.modules.mha import FlashCrossAttention
File "C:\Users\Alex\AppData\Local\Programs\Python\Python310\lib\site-packages\flash_attn\__init__.py", line 3, in <module>
from flash_attn.flash_attn_interface import (
File "C:\Users\Alex\AppData\Local\Programs\Python\Python310\lib\site-packages\flash_attn\flash_attn_interface.py", line 15, in <module>
import flash_attn_2_cuda as flash_attn_gpu
ImportError: DLL load failed while importing flash_attn_2_cuda: The specified procedure could not be found.
2025-02-02T16:23:59.109155 - Cannot import E:\AI\ComfyUI\custom_nodes\comfyui_essentials module for custom nodes: DLL load failed while importing flash_attn_2_cuda: The specified procedure could not be found.
2025-02-02T16:23:59.134450 - Traceback (most recent call last):
File "C:\Users\Alex\AppData\Local\Programs\Python\Python310\lib\site-packages\transformers\utils\import_utils.py", line 1778, in _get_module
return importlib.import_module("." + module_name, self.__name__)
File "C:\Users\Alex\AppData\Local\Programs\Python\Python310\lib\importlib\__init__.py", line 126, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1050, in _gcd_import
File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
File "<frozen importlib._bootstrap>", line 1006, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 688, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 883, in exec_module
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "C:\Users\Alex\AppData\Local\Programs\Python\Python310\lib\site-packages\transformers\models\clip\modeling_clip.py", line 45, in <module>
from ...modeling_flash_attention_utils import _flash_attention_forward
File "C:\Users\Alex\AppData\Local\Programs\Python\Python310\lib\site-packages\transformers\modeling_flash_attention_utils.py", line 27, in <module>
from flash_attn.bert_padding import index_first_axis, pad_input, unpad_input # noqa
File "C:\Users\Alex\AppData\Local\Programs\Python\Python310\lib\site-packages\flash_attn\__init__.py", line 3, in <module>
from flash_attn.flash_attn_interface import (
File "C:\Users\Alex\AppData\Local\Programs\Python\Python310\lib\site-packages\flash_attn\flash_attn_interface.py", line 15, in <module>
import flash_attn_2_cuda as flash_attn_gpu
ImportError: DLL load failed while importing flash_attn_2_cuda: The specified procedure could not be found.
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "E:\AI\ComfyUI\nodes.py", line 2110, in load_custom_node
module_spec.loader.exec_module(module)
File "<frozen importlib._bootstrap_external>", line 883, in exec_module
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "E:\AI\ComfyUI\custom_nodes\comfyui_smznodes\__init__.py", line 61, in <module>
from .nodes import NODE_CLASS_MAPPINGS, NODE_DISPLAY_NAME_MAPPINGS
File "E:\AI\ComfyUI\custom_nodes\comfyui_smznodes\nodes.py", line 9, in <module>
from .smZNodes import HijackClip, HijackClipComfy, get_learned_conditioning
File "E:\AI\ComfyUI\custom_nodes\comfyui_smznodes\smZNodes.py", line 20, in <module>
from .modules.text_processing import prompt_parser
File "E:\AI\ComfyUI\custom_nodes\comfyui_smznodes\modules\text_processing\prompt_parser.py", line 5, in <module>
from compel import Compel
File "C:\Users\Alex\AppData\Local\Programs\Python\Python310\lib\site-packages\compel\__init__.py", line 1, in <module>
from .compel import *
File "C:\Users\Alex\AppData\Local\Programs\Python\Python310\lib\site-packages\compel\compel.py", line 6, in <module>
from transformers import CLIPTokenizer, CLIPTextModel
File "<frozen importlib._bootstrap>", line 1075, in _handle_fromlist
File "C:\Users\Alex\AppData\Local\Programs\Python\Python310\lib\site-packages\transformers\utils\import_utils.py", line 1767, in __getattr__
value = getattr(module, name)
File "C:\Users\Alex\AppData\Local\Programs\Python\Python310\lib\site-packages\transformers\utils\import_utils.py", line 1766, in __getattr__
module = self._get_module(self._class_to_module[name])
File "C:\Users\Alex\AppData\Local\Programs\Python\Python310\lib\site-packages\transformers\utils\import_utils.py", line 1780, in _get_module
raise RuntimeError(
RuntimeError: Failed to import transformers.models.clip.modeling_clip because of the following error (look up to see its traceback):
DLL load failed while importing flash_attn_2_cuda: The specified procedure could not be found.
2025-02-02T16:23:59.134450 - Cannot import E:\AI\ComfyUI\custom_nodes\comfyui_smznodes module for custom nodes: Failed to import transformers.models.clip.modeling_clip because of the following error (look up to see its traceback):
DLL load failed while importing flash_attn_2_cuda: The specified procedure could not be found.
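Both IMPORT FAILED entries above (comfyui_essentials and comfyui_smznodes) trace back to the same root cause: the compiled `flash_attn_2_cuda` extension does not load, typically because flash-attn was built against a different torch/CUDA version than the one installed. A small probe to confirm this — the suggested workaround (`pip uninstall flash-attn`, since kornia and transformers only use it opportunistically) is an assumption based on the tracebacks, not an official ComfyUI fix:

```python
# Probe whether flash-attn's compiled CUDA extension matches the installed
# torch build. If this returns False while flash-attn is installed, the
# extension was compiled against a different torch/CUDA ABI.
def flash_attn_usable() -> bool:
    try:
        import flash_attn_2_cuda  # noqa: F401 -- the compiled extension itself
        return True
    except ImportError:  # covers both "not installed" and DLL/ABI mismatch
        return False

print("flash_attn usable:", flash_attn_usable())
```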
2025-02-02T16:23:59.231484 - FETCH ComfyRegistry Data: 5/32
2025-02-02T16:23:59.719803 - WAS Node Suite: OpenCV Python FFMPEG support is enabled
2025-02-02T16:23:59.720816 - WAS Node Suite Warning: `ffmpeg_bin_path` is not set in `E:\AI\ComfyUI\custom_nodes\pr-was-node-suite-comfyui-47064894\was_suite_config.json` config file. Will attempt to use system ffmpeg binaries if available.
2025-02-02T16:24:00.288062 - WAS Node Suite: Finished. Loaded 218 nodes successfully.
2025-02-02T16:24:00.288062 -
"The journey of a thousand miles begins with one step." - Lao Tzu
2025-02-02T16:24:00.288062 -
2025-02-02T16:24:00.312304 -
2025-02-02T16:24:00.312304 - [rgthree-comfy] Loaded 42 extraordinary nodes. 🎉
2025-02-02T16:24:00.313318 -
2025-02-02T16:24:00.313318 -
Import times for custom nodes:
2025-02-02T16:24:00.314336 - 0.0 seconds: E:\AI\ComfyUI\custom_nodes\websocket_image_save.py
2025-02-02T16:24:00.314336 - 0.0 seconds: E:\AI\ComfyUI\custom_nodes\cg-use-everywhere
2025-02-02T16:24:00.314336 - 0.0 seconds: E:\AI\ComfyUI\custom_nodes\comfyui_ipadapter_plus
2025-02-02T16:24:00.314336 - 0.0 seconds: E:\AI\ComfyUI\custom_nodes\comfyui_ultimatesdupscale
2025-02-02T16:24:00.314336 - 0.0 seconds: E:\AI\ComfyUI\custom_nodes\ComfyUI-Fluxtapoz
2025-02-02T16:24:00.314336 - 0.0 seconds: E:\AI\ComfyUI\custom_nodes\comfyui-jakeupgrade
2025-02-02T16:24:00.314336 - 0.0 seconds: E:\AI\ComfyUI\custom_nodes\rgthree-comfy
2025-02-02T16:24:00.314336 - 0.0 seconds (IMPORT FAILED): E:\AI\ComfyUI\custom_nodes\comfyui_essentials
2025-02-02T16:24:00.314336 - 0.0 seconds (IMPORT FAILED): E:\AI\ComfyUI\custom_nodes\comfyui_smznodes
2025-02-02T16:24:00.314336 - 0.0 seconds: E:\AI\ComfyUI\custom_nodes\ComfyUI-KJNodes
2025-02-02T16:24:00.314336 - 0.0 seconds: E:\AI\ComfyUI\custom_nodes\ComfyUI_Comfyroll_CustomNodes
2025-02-02T16:24:00.314336 - 0.0 seconds: E:\AI\ComfyUI\custom_nodes\comfyui_controlnet_aux
2025-02-02T16:24:00.314336 - 0.1 seconds: E:\AI\ComfyUI\custom_nodes\comfyui-impact-pack
2025-02-02T16:24:00.314336 - 0.4 seconds: E:\AI\ComfyUI\custom_nodes\comfyui-impact-subpack
2025-02-02T16:24:00.314336 - 0.5 seconds: E:\AI\ComfyUI\custom_nodes\ComfyUI-Manager
2025-02-02T16:24:00.315352 - 1.2 seconds: E:\AI\ComfyUI\custom_nodes\pr-was-node-suite-comfyui-47064894
2025-02-02T16:24:00.315352 - 10.1 seconds: E:\AI\ComfyUI\custom_nodes\comfyui-mixlab-nodes
2025-02-02T16:24:00.315352 -
2025-02-02T16:24:00.315352 - WARNING: some comfy_extras/ nodes did not import correctly. This may be because they are missing some dependencies.
2025-02-02T16:24:00.315352 - IMPORT FAILED: nodes_canny.py
2025-02-02T16:24:00.315352 - IMPORT FAILED: nodes_morphology.py
2025-02-02T16:24:00.315352 -
This issue might be caused by new missing dependencies added the last time you updated ComfyUI.
2025-02-02T16:24:00.315352 - Please do a: pip install -r requirements.txt
2025-02-02T16:24:00.315352 -
2025-02-02T16:24:00.326556 - Starting server
2025-02-02T16:24:00.326556 - To see the GUI go to: http://127.0.0.1:8188
2025-02-02T16:24:07.892980 - E:\AI\ComfyUI\custom_nodes\comfyui-mixlab-nodes\webApp\lib/photoswipe-lightbox.esm.min.js
2025-02-02T16:24:07.921797 - E:\AI\ComfyUI\custom_nodes\comfyui-mixlab-nodes\webApp\lib/photoswipe.min.css
2025-02-02T16:24:07.930968 - FETCH DATA from: E:\AI\ComfyUI\custom_nodes\ComfyUI-Manager\extension-node-map.json [DONE]
2025-02-02T16:24:07.950291 - E:\AI\ComfyUI\custom_nodes\comfyui-mixlab-nodes\webApp\lib/pickr.min.js
2025-02-02T16:24:07.995993 - E:\AI\ComfyUI\custom_nodes\comfyui-mixlab-nodes\webApp\lib/model-viewer.min.js
2025-02-02T16:24:08.029833 - E:\AI\ComfyUI\custom_nodes\comfyui-mixlab-nodes\webApp\lib/classic.min.css
2025-02-02T16:24:08.044017 - E:\AI\ComfyUI\custom_nodes\comfyui-mixlab-nodes\webApp\lib/juxtapose.min.js
2025-02-02T16:24:08.065348 - E:\AI\ComfyUI\custom_nodes\comfyui-mixlab-nodes\webApp\lib/juxtapose.css
2025-02-02T16:24:09.231108 - FETCH ComfyRegistry Data: 10/32
2025-02-02T16:24:19.788996 - FETCH ComfyRegistry Data: 15/32
2025-02-02T16:24:29.543707 - FETCH ComfyRegistry Data: 20/32
2025-02-02T16:24:39.922407 - FETCH ComfyRegistry Data: 25/32
2025-02-02T16:24:50.969933 - FETCH ComfyRegistry Data: 30/32
2025-02-02T16:24:55.523722 - FETCH ComfyRegistry Data [DONE]
2025-02-02T16:24:55.556188 - [ComfyUI-Manager] default cache updated: https://api.comfy.org/nodes
2025-02-02T16:24:55.568610 - nightly_channel: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/remote
2025-02-02T16:24:55.568610 - FETCH DATA from: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/custom-node-list.json [DONE]
2025-02-02T16:25:42.637923 - got prompt
2025-02-02T16:25:44.157192 - C:\Users\Alex\AppData\Local\Programs\Python\Python310\lib\site-packages\huggingface_hub\file_download.py:159: UserWarning: `huggingface_hub` cache-system uses symlinks by default to efficiently store duplicated files but your machine does not support them in C:\Users\Alex\.cache\huggingface\hub\models--Salesforce--blip-image-captioning-base. Caching files will still work but in a degraded version that might require more space on your disk. This warning can be disabled by setting the `HF_HUB_DISABLE_SYMLINKS_WARNING` environment variable. For more details, see https://huggingface.co/docs/huggingface_hub/how-to-cache#limitations.
To support symlinks on Windows, you either need to activate Developer Mode or to run Python as an administrator. In order to see activate developer mode, see this article: https://docs.microsoft.com/en-us/windows/apps/get-started/enable-your-device-for-development
warnings.warn(message)
2025-02-02T16:26:31.802681 - Loading CLIP model ViT-L-14/openai...
2025-02-02T16:26:31.802681 - Loading pretrained ViT-L-14 from OpenAI.
100%|██████████| 933M/933M [00:24<00:00, 37.9MiB/s]
ViT-L-14_openai_artists.safetensors: 100%|██████████| 16.2M/16.2M [00:01<00:00, 8.71MB/s]
ViT-L-14_openai_flavors.safetensors: 100%|██████████| 155M/155M [00:08<00:00, 18.9MB/s]
ViT-L-14_openai_mediums.safetensors: 100%|██████████| 146k/146k [00:00<00:00, 228kB/s]
ViT-L-14_openai_movements.safetensors: 100%|██████████| 307k/307k [00:00<00:00, 380kB/s]
ViT-L-14_openai_trendings.safetensors: 100%|██████████| 111k/111k [00:00<00:00, 198kB/s]
ViT-L-14_openai_negative.safetensors: 100%|██████████| 63.2k/63.2k [00:00<00:00, 178kB/s]
2025-02-02T16:27:26.306813 - Loaded CLIP model and data in 54.50 seconds.
2025-02-02T16:27:28.158664 - C:\Users\Alex\AppData\Local\Programs\Python\Python310\lib\site-packages\clip_interrogator\clip_interrogator.py:200: FutureWarning: `torch.cuda.amp.autocast(args...)` is deprecated. Please use `torch.amp.autocast('cuda', args...)` instead.
with torch.no_grad(), torch.cuda.amp.autocast():
2025-02-02T16:27:28.218137 - C:\Users\Alex\AppData\Local\Programs\Python\Python310\lib\site-packages\clip_interrogator\clip_interrogator.py:376: FutureWarning: `torch.cuda.amp.autocast(args...)` is deprecated. Please use `torch.amp.autocast('cuda', args...)` instead.
  with torch.cuda.amp.autocast():
100%|██████████| 55/55 [00:00<00:00, 319.60it/s]
2025-02-02T16:27:28.530793 - Prompt executed in 105.89 seconds
2025-02-02T16:27:44.486614 - got prompt
100%|██████████| 11/11 [00:00<00:00, 447.65it/s]
100%|██████████| 99/99 [00:00<00:00, 547.78it/s]
2025-02-02T16:27:45.158134 - C:\Users\Alex\AppData\Local\Programs\Python\Python310\lib\site-packages\clip_interrogator\clip_interrogator.py:280: FutureWarning: `torch.cuda.amp.autocast(args...)` is deprecated. Please use `torch.amp.autocast('cuda', args...)` instead.
  with torch.no_grad(), torch.cuda.amp.autocast():
100%|██████████| 55/55 [00:00<00:00, 305.64it/s]
2025-02-02T16:27:46.258035 - Prompt executed in 1.77 seconds
2025-02-02T16:27:58.385726 - got prompt
100%|██████████| 11/11 [00:00<00:00, 417.80it/s]
100%|██████████| 99/99 [00:00<00:00, 531.13it/s]
100%|██████████| 11/11 [00:00<00:00, 542.77it/s]
100%|██████████| 99/99 [00:00<00:00, 682.20it/s]
2025-02-02T16:28:00.243953 - Prompt executed in 1.86 seconds
2025-02-02T16:28:17.668680 - got prompt
100%|██████████| 11/11 [00:00<00:00, 471.75it/s]
100%|██████████| 99/99 [00:00<00:00, 576.27it/s]
100%|██████████| 55/55 [00:00<00:00, 316.74it/s]
2025-02-02T16:28:19.242244 - C:\Users\Alex\AppData\Local\Programs\Python\Python310\lib\site-packages\clip_interrogator\clip_interrogator.py:271: FutureWarning: `torch.cuda.amp.autocast(args...)` is deprecated. Please use `torch.amp.autocast('cuda', args...)` instead.
  with torch.no_grad(), torch.cuda.amp.autocast():
2025-02-02T16:28:19.376053 - C:\Users\Alex\AppData\Local\Programs\Python\Python310\lib\site-packages\clip_interrogator\clip_interrogator.py:260: FutureWarning: `torch.cuda.amp.autocast(args...)` is deprecated. Please use `torch.amp.autocast('cuda', args...)` instead.
  with torch.no_grad(), torch.cuda.amp.autocast():
Flavor chain:  41%|████      | 13/32 [00:09<00:14, 1.34it/s]
100%|██████████| 55/55 [00:00<00:00, 262.94it/s]
100%|██████████| 11/11 [00:00<00:00, 602.62it/s]
100%|██████████| 99/99 [00:00<00:00, 703.86it/s]
2025-02-02T16:28:29.630845 - !!! Exception during processing !!! 'charmap' codec can't encode characters in position 102-104: character maps to <undefined>
2025-02-02T16:28:29.664746 - Traceback (most recent call last):
File "E:\AI\ComfyUI\execution.py", line 327, in execute
output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
File "E:\AI\ComfyUI\execution.py", line 202, in get_output_data
return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
File "E:\AI\ComfyUI\execution.py", line 168, in _map_node_over_list
process_inputs(input_data_all, 0, input_is_list=input_is_list)
File "E:\AI\ComfyUI\execution.py", line 163, in process_inputs
results.append(getattr(obj, func)(**inputs))
File "E:\AI\ComfyUI\custom_nodes\comfyui-mixlab-nodes\nodes\ChatGPT.py", line 696, in run
file.write(t)
File "C:\Users\Alex\AppData\Local\Programs\Python\Python310\lib\encodings\cp1252.py", line 19, in encode
return codecs.charmap_encode(input,self.errors,encoding_table)[0]
UnicodeEncodeError: 'charmap' codec can't encode characters in position 102-104: character maps to <undefined>
2025-02-02T16:28:29.665770 - Prompt executed in 11.99 seconds
2025-02-02T16:28:52.009726 - got prompt
100%|██████████| 11/11 [00:00<00:00, 491.68it/s]
100%|██████████| 99/99 [00:00<00:00, 653.94it/s]
100%|██████████| 11/11 [00:00<00:00, 515.58it/s]
100%|██████████| 99/99 [00:00<00:00, 660.54it/s]
2025-02-02T16:28:53.690629 - Prompt executed in 1.68 seconds
2025-02-02T16:28:58.687390 - got prompt
100%|██████████| 11/11 [00:00<00:00, 427.27it/s]
100%|██████████| 99/99 [00:00<00:00, 617.90it/s]
100%|██████████| 55/55 [00:00<00:00, 329.70it/s]
2025-02-02T16:29:00.339933 - Prompt executed in 1.65 seconds
2025-02-02T16:29:32.033425 - got prompt
2025-02-02T16:29:32.513523 - C:\Users\Alex\AppData\Local\Programs\Python\Python310\lib\site-packages\clip_interrogator\clip_interrogator.py:200: FutureWarning: `torch.cuda.amp.autocast(args...)` is deprecated. Please use `torch.amp.autocast('cuda', args...)` instead.
with torch.no_grad(), torch.cuda.amp.autocast():
2025-02-02T16:29:32.532974 - C:\Users\Alex\AppData\Local\Programs\Python\Python310\lib\site-packages\clip_interrogator\clip_interrogator.py:376: FutureWarning: `torch.cuda.amp.autocast(args...)` is deprecated. Please use `torch.amp.autocast('cuda', args...)` instead.
with torch.cuda.amp.autocast():
100%|██████████| 11/11 [00:00<00:00, 414.21it/s]
100%|██████████| 99/99 [00:00<00:00, 601.07it/s]
2025-02-02T16:29:32.746656 - C:\Users\Alex\AppData\Local\Programs\Python\Python310\lib\site-packages\clip_interrogator\clip_interrogator.py:280: FutureWarning: `torch.cuda.amp.autocast(args...)` is deprecated. Please use `torch.amp.autocast('cuda', args...)` instead.
  with torch.no_grad(), torch.cuda.amp.autocast():
100%|██████████| 55/55 [00:00<00:00, 319.35it/s]
2025-02-02T16:29:33.745703 - Prompt executed in 1.71 seconds
2025-02-02T16:29:49.356374 - got prompt
100%|██████████| 11/11 [00:00<00:00, 523.82it/s]
100%|██████████| 99/99 [00:00<00:00, 701.05it/s]
100%|██████████| 11/11 [00:00<00:00, 499.99it/s]
100%|██████████| 99/99 [00:00<00:00, 626.59it/s]
2025-02-02T16:29:50.853300 - Prompt executed in 1.50 seconds
2025-02-02T16:30:11.846604 - got prompt
100%|█████████████████████████████████████████████████████████████████████████████████| 11/11 [00:00<00:00, 478.27it/s]
100%|█████████████████████████████████████████████████████████████████████████████████| 99/99 [00:00<00:00, 605.46it/s]
100%|█████████████████████████████████████████████████████████████████████████████████| 55/55 [00:00<00:00, 308.99it/s]
2025-02-02T16:30:13.388616 - C:\Users\Alex\AppData\Local\Programs\Python\Python310\lib\site-packages\clip_interrogator\clip_interrogator.py:271: FutureWarning: `torch.cuda.amp.autocast(args...)` is deprecated. Please use `torch.amp.autocast('cuda', args...)` instead.
  with torch.no_grad(), torch.cuda.amp.autocast():
2025-02-02T16:30:13.480132 - C:\Users\Alex\AppData\Local\Programs\Python\Python310\lib\site-packages\clip_interrogator\clip_interrogator.py:260: FutureWarning: `torch.cuda.amp.autocast(args...)` is deprecated. Please use `torch.amp.autocast('cuda', args...)` instead.
  with torch.no_grad(), torch.cuda.amp.autocast():
Flavor chain: 44%|█████████████████████████████▊ | 14/32 [00:09<00:11, 1.54it/s]
100%|█████████████████████████████████████████████████████████████████████████████████| 55/55 [00:00<00:00, 240.12it/s]
100%|█████████████████████████████████████████████████████████████████████████████████| 11/11 [00:00<00:00, 488.88it/s]
100%|█████████████████████████████████████████████████████████████████████████████████| 99/99 [00:00<00:00, 634.61it/s]
2025-02-02T16:30:23.234914 - Prompt executed in 11.39 seconds
2025-02-02T16:30:36.457586 - got prompt
100%|█████████████████████████████████████████████████████████████████████████████████| 11/11 [00:00<00:00, 301.36it/s]
100%|█████████████████████████████████████████████████████████████████████████████████| 99/99 [00:00<00:00, 625.65it/s]
100%|█████████████████████████████████████████████████████████████████████████████████| 55/55 [00:00<00:00, 277.07it/s]
Flavor chain: 41%|███████████████████████████▋ | 13/32 [00:09<00:14, 1.36it/s]
100%|█████████████████████████████████████████████████████████████████████████████████| 55/55 [00:00<00:00, 253.72it/s]
100%|█████████████████████████████████████████████████████████████████████████████████| 11/11 [00:00<00:00, 594.60it/s]
100%|█████████████████████████████████████████████████████████████████████████████████| 99/99 [00:00<00:00, 702.13it/s]
2025-02-02T16:30:48.222895 - !!! Exception during processing !!! 'charmap' codec can't encode characters in position 102-104: character maps to <undefined>
2025-02-02T16:30:48.223396 - Traceback (most recent call last):
File "E:\AI\ComfyUI\execution.py", line 327, in execute
output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
File "E:\AI\ComfyUI\execution.py", line 202, in get_output_data
return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
File "E:\AI\ComfyUI\execution.py", line 168, in _map_node_over_list
process_inputs(input_data_all, 0, input_is_list=input_is_list)
File "E:\AI\ComfyUI\execution.py", line 163, in process_inputs
results.append(getattr(obj, func)(**inputs))
File "E:\AI\ComfyUI\custom_nodes\comfyui-mixlab-nodes\nodes\ChatGPT.py", line 696, in run
file.write(t)
File "C:\Users\Alex\AppData\Local\Programs\Python\Python310\lib\encodings\cp1252.py", line 19, in encode
return codecs.charmap_encode(input,self.errors,encoding_table)[0]
UnicodeEncodeError: 'charmap' codec can't encode characters in position 102-104: character maps to <undefined>
2025-02-02T16:30:48.224394 - Prompt executed in 11.76 seconds
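The traceback pins the failure to `file.write(t)` in `comfyui-mixlab-nodes/nodes/ChatGPT.py` (line 696): the output file was opened without an explicit encoding, so Python fell back to the Windows locale codec (cp1252), which cannot represent the Japanese characters in the interrogator output. A minimal sketch reproducing the codec failure outside ComfyUI (the sample string is taken from the attached workflow's widget values):

```python
# cp1252 cannot encode "アニメ", so encoding this string raises the
# same UnicodeEncodeError reported by ShowTextForGPT.
text = "very short hair, アニメ, new song"

try:
    text.encode("cp1252")
    failed = False
except UnicodeEncodeError:
    failed = True

print(failed)  # → True
```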
Attached Workflow
Please make sure that workflow does not contain any sensitive information such as API keys or passwords.
{"last_node_id":3,"last_link_id":2,"nodes":[{"id":3,"type":"ShowTextForGPT","pos":[2761.382080078125,678.9935913085938],"size":[400,200],"flags":{},"order":2,"mode":0,"inputs":[{"name":"text","type":"STRING","link":2,"widget":{"name":"text"}},{"name":"output_dir","type":"STRING","link":null,"widget":{"name":"output_dir"},"shape":7}],"outputs":[{"name":"STRING","type":"STRING","links":null,"shape":6}],"properties":{"Node name for S&R":"ShowTextForGPT"},"widgets_values":["","","a girl sitting on the ground with a rose, detailed anime character art, cute casual streetwear, pink short hair, adidas painting, front back view and side view, with vivid blue eyes, very sleepy and shy, long bob hair, maze, from overlord, pvc poseable, 165 cm tall, in the anime film, sam"]},{"id":1,"type":"ClipInterrogator","pos":[2292.80615234375,656.4696044921875],"size":[327.5999755859375,244],"flags":{},"order":1,"mode":0,"inputs":[{"name":"image","type":"IMAGE","link":1}],"outputs":[{"name":"prompt","type":"STRING","links":[2],"shape":6,"slot_index":0},{"name":"random_samples","type":"STRING","links":null,"shape":6}],"properties":{"Node name for S&R":"ClipInterrogator"},"widgets_values":["best","on","a girl with pink hair and a white shirt sitting on a flower, character album cover, oversized hoodie, アニメ, very short hair, new song, (dark shorter curly hair), one ice cube, inspired by Emil Bisttram, promotional, a few roses, high detailes, moe, marble","[\n {\n \"medium_ranks\": {\n \"an album cover\": 0.2008056640625,\n \"a character portrait\": 0.1815185546875,\n \"concept art\": 0.1768798828125,\n \"poster art\": 0.173095703125,\n \"a picture\": 0.1697998046875\n },\n \"artist_ranks\": {\n \"by Nagasawa Rosetsu\": 0.234619140625,\n \"by Takeuchi Seihō\": 0.21533203125,\n \"by Tetsugoro Yorozu\": 0.2098388671875,\n \"by Kaburagi Kiyokata\": 0.2095947265625,\n \"by Riusuke Fukahori\": 0.206787109375\n },\n \"movement_ranks\": {\n \"rayonism\": 0.184326171875,\n \"sots art\": 0.182861328125,\n \"aestheticism\": 0.1827392578125,\n \"neo-romanticism\": 0.1795654296875,\n \"romanticism\": 0.1751708984375\n },\n \"trending_ranks\": {\n \"featured on pixiv\": 0.2135009765625,\n \"trending on pixiv\": 0.210205078125,\n \"pixiv contest winner\": 0.2034912109375,\n \"pixiv\": 0.197509765625,\n \"cgsociety\": 0.1881103515625\n },\n \"flavor_ranks\": {\n \"wearing a hoodie and flowers\": 0.26611328125,\n \"character album cover\": 0.262451171875,\n \"official artwork\": 0.260498046875,\n \"official art\": 0.25439453125,\n \"she is wearing streetwear\": 0.250732421875\n }\n }\n]","an album cover,by Takeuchi Seihō,rayonism,pixiv contest winner,official art\n\nan album cover,by Takeuchi Seihō,aestheticism,cgsociety,character album cover\n\nan album cover,by Riusuke Fukahori,romanticism,pixiv contest winner,official artwork\n\nposter art,by Takeuchi Seihō,rayonism,trending on pixiv,she is wearing streetwear\n\na picture,by Kaburagi Kiyokata,romanticism,pixiv contest winner,wearing a hoodie and flowers"]},{"id":2,"type":"LoadImage","pos":[1883.3680419921875,719.0912475585938],"size":[315,314],"flags":{},"order":0,"mode":0,"inputs":[],"outputs":[{"name":"IMAGE","type":"IMAGE","links":[1],"slot_index":0},{"name":"MASK","type":"MASK","links":null}],"properties":{"Node name for S&R":"LoadImage"},"widgets_values":["03.png","image"]}],"links":[[1,2,0,1,0,"IMAGE"],[2,1,0,3,0,"STRING"]],"groups":[],"config":{},"extra":{"ds":{"scale":1.4122927695244523,"offset":[-2052.2626422811404,-350.21046837571237]},"node_versions":{"comfyui-mixlab-nodes":"d835aff0cb3e4def03e26e70a2a03368cae01693","comfy-core":"0.3.12"},"ue_links":[]},"version":0.4}
Additional Context
(Please add any additional context or steps to reproduce the error here)
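A likely workaround (an assumption — I have not checked how `ChatGPT.py` actually opens its output file) is to pass an explicit `encoding="utf-8"` to `open()`, so the write no longer depends on the Windows locale default. A minimal sketch, with hypothetical path and text values standing in for the node's real inputs:

```python
import os
import tempfile

# Hypothetical stand-ins for ShowTextForGPT's output path and text;
# the real open() call lives in comfyui-mixlab-nodes/nodes/ChatGPT.py.
text = "a girl with pink hair, アニメ, moe"
path = os.path.join(tempfile.mkdtemp(), "result.txt")

# An explicit encoding sidesteps the cp1252 locale default, which
# cannot encode these characters.
with open(path, "w", encoding="utf-8") as f:
    f.write(text)

with open(path, encoding="utf-8") as f:
    roundtrip = f.read()

assert roundtrip == text
```

Alternatively, setting the environment variable `PYTHONUTF8=1` before launching ComfyUI enables Python's UTF-8 mode process-wide (PEP 540), without touching the node's code.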