zyddnys / manga-image-translator
Translate manga/image (one-click translation of text in all kinds of images)
Home Page: https://cotrans.touhou.ai/
License: GNU General Public License v3.0
I want to use the inpainter on its own, so that it blanks all the bubbles in my manga and I can translate them manually.
But the codebase is so big and complex. Can you guide me, or tell me how to use only the inpainting step?
Thanks in advance.
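I can't confirm from this thread whether translate_demo.py exposes an inpainting-only flag, but the effect being asked for (blank bubbles ready for manual lettering) can be sketched independently of the project's code: given a binary text mask, simply paint the masked pixels white. All names below are illustrative, not the repo's API.

```python
import numpy as np

def blank_regions(img, mask, fill=255):
    # Copy the page and paint every masked pixel with the fill colour.
    out = img.copy()
    out[mask.astype(bool)] = fill
    return out

# Tiny demo: a 4x4 black "page" with a 2x2 "bubble" marked for erasure.
page = np.zeros((4, 4, 3), dtype=np.uint8)
bubble = np.zeros((4, 4), dtype=bool)
bubble[1:3, 1:3] = True
cleaned = blank_regions(page, bubble)
```

With the repo's detector producing the mask, skipping the translation and rendering stages and saving this output would leave clean bubbles for manual work.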
Please consider adding batch translation and Southeast Asian languages.
Hello there, I'd like to do some model compression, such as quantization, to make the models smaller and faster for CPU use. Would you be able to release the training scripts and dataset so I can contribute on that end? Thanks!
Hi, I ran the command line on Windows to translate an entire folder and noticed a problem in the log:
Inpainting resolution: 1360x1920
-- Translating
oh no.
fail to initialize deepl :
auth_key must not be empty
switch to google translator
-- Rendering translated text
It seems my chosen translator was detected as deepl, but what I actually ran was:
python translate_demo.py --verbose --mode batch --use-inpainting --use-cuda --translator=baidu --target-lang=CHS --image D:/translate/1234
Clearly I selected Baidu translation, yet it switched to the Google translator. How can I fix this? Thanks!
Hi,
I can't create an API key with DeepL because they won't accept my credit card.
Sugoi Translator's translation quality is currently on par with DeepL.
I hope you can add offline Sugoi Translator integration.
Thanks for your tool, great work!
I am trying to use this and keep getting this error: ModuleNotFoundError: No module named 'pydensecrf'. I have tried installing Cython and the package itself, but nothing seems to work.
Thanks for your great support.
Traceback (most recent call last):
  File "/Users/tony/IPANDALAB Dropbox/Oh Tony/project/Python/manga-image-translator/translate_demo.py", line 647, in <module>
    main()
  File "/Users/tony/IPANDALAB Dropbox/Oh Tony/project/Python/manga-image-translator/translate_demo.py", line 525, in main
    boxes, scores = det({'shape':[(img_resized.shape[0], img_resized.shape[1])]}, db)
  File "/Users/tony/IPANDALAB Dropbox/Oh Tony/project/Python/manga-image-translator/dbnet_utils.py", line 38, in __call__
    boxes, scores = self.boxes_from_bitmap(pred[batch_index], segmentation[batch_index], width, height)
  File "/Users/tony/IPANDALAB Dropbox/Oh Tony/project/Python/manga-image-translator/dbnet_utils.py", line 105, in boxes_from_bitmap
    contours, _ = cv2.findContours((bitmap * 255).astype(np.uint8), cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
ValueError: too many values to unpack (expected 2)
I've hit this error.
Can you explain what I can do?
Thanks
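This unpacking error is a classic OpenCV version mismatch: `cv2.findContours` returns three values on OpenCV 3.x but two on 2.x and 4.x, and the code above expects the two-value form, so the environment most likely has OpenCV 3.x installed. Upgrading opencv-python to 4.x should fix it; alternatively, the widely used `[-2]` idiom works on either version. A sketch with plain tuples standing in for the cv2 return values:

```python
def unpack_contours(result):
    # cv2.findContours returns (image, contours, hierarchy) on OpenCV 3.x
    # but (contours, hierarchy) on 2.x/4.x; index [-2] is the contours
    # list in both shapes.
    return result[-2]

three = ("img", ["c1", "c2"], "hierarchy")  # OpenCV 3.x style
two = (["c1", "c2"], "hierarchy")           # OpenCV 2.x/4.x style
```

Whether the repo wants to support OpenCV 3.x at all is a maintainer decision; pinning opencv-python>=4 in requirements is the simpler route.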
Do you think this would be a good inpainting alternative: https://github.com/msxie92/MangaInpainting ?
Based on the sample images it does a tremendous job...
When I try to have something translated, I always get the error stated in the issue title. I simply took the commands from the usage section and adjusted the path to the translate_demo.py file and, of course, the images. I get the error in both batch mode and single-image mode, so it isn't limited to one mode. Here is the full error message I get:
Traceback (most recent call last):
  File "D:\manga-image-translator-main\translate_demo.py", line 334, in <module>
    loop.run_until_complete(main(args.mode))
  File "C:\Users\Strah\AppData\Local\Programs\Python\Python310\lib\asyncio\base_events.py", line 646, in run_until_complete
    return future.result()
  File "D:\manga-image-translator-main\translate_demo.py", line 237, in main
    text_render.prepare_renderer()
  File "D:\manga-image-translator-main\text_rendering\text_render.py", line 481, in prepare_renderer
    CACHED_FONT_FACE.append(freetype.Face(font_filename))
  File "C:\Users\Strah\AppData\Local\Programs\Python\Python310\lib\site-packages\freetype\__init__.py", line 1101, in __init__
    raise FT_Exception(error)
freetype.ft_errors.FT_Exception: FT_Exception: (cannot open resource)
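`FT_Exception: (cannot open resource)` is what freetype raises when the font file it was handed cannot be opened, which here almost certainly means the expected font file was never downloaded into the repo directory. A pre-flight check gives a clearer error than letting freetype.Face fail; the font filenames below are hypothetical placeholders, not the repo's actual assets.

```python
import os

def find_font(candidates):
    # freetype.Face raises FT_Exception('cannot open resource') when the
    # path does not exist, so check up front and report a readable error.
    for path in candidates:
        if os.path.isfile(path):
            return path
    return None

# Demo with a throwaway file standing in for a real font.
open("dummy.ttf", "w").close()
found = find_font(["missing-font.ttf", "dummy.ttf"])
os.remove("dummy.ttf")
```

If `find_font` returns None, downloading the font listed in the project's README into the repo root is the usual fix.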
It's giving an error; here is the cmd log:
\manga-image-translator>python translate_demo.py --mode web --use-inpainting --verbose --translator=google --target-lang=ENG
Namespace(mode='web', image='', image_dst='', size=1536, use_inpainting=True, use_cuda=False, force_horizontal=False, inpainting_size=2048, unclip_ratio=2.3, box_threshold=0.7, text_threshold=0.5, text_mag_ratio=1, translator='google', target_lang='ENG', use_ctd=False, verbose=True)
-- Loading models
-- Running in web service mode
-- Waiting for translation tasks
fail to initialize deepl :
auth_key must not be empty
switch to google translator
Serving up app on 127.0.0.1:5003
-- Processing task eaab3f4452a83debc847ca8cb754d493e94717ba9052c4b194b4e974b4b9c136-M-google-ENG-default-horizontal
-- Detection resolution 1536
-- Detector using default
-- Render text direction is h
Task state eaab3f4452a83debc847ca8cb754d493e94717ba9052c4b194b4e974b4b9c136-M-google-ENG-default-horizontal to detection
-- Running text detection
Detection resolution: 1280x1536
Task state eaab3f4452a83debc847ca8cb754d493e94717ba9052c4b194b4e974b4b9c136-M-google-ENG-default-horizontal to ocr
-- Running OCR
0.8592585325241089 石伸,来自 fg: (38, 38, 41) bg: (38, 38, 41)
0.9998165369033813 莫非这留痕 fg: (46, 49, 55) bg: (45, 50, 55)
0.9551085233688354 远古的星空? fg: (66, 61, 68) bg: (67, 65, 69)
0.9972514510154724 邀游,可这黑色 fg: (68, 64, 73) bg: (68, 68, 76)
0.9320383667945862 么?即便是武帝 fg: (79, 73, 80) bg: (79, 73, 80)
0.9866024255752563 强者,也不敢说 fg: (67, 60, 69) bg: (66, 60, 69)
0.9399910569190979 石碑,竟然是来 fg: (63, 59, 66) bg: (63, 59, 66)
0.9985544681549072 宇宙星空中有什 fg: (78, 74, 81) bg: (77, 73, 78)
0.9888063073158264 自那么一个地方。 fg: (79, 72, 83) bg: (79, 72, 81)
-- spliting {0, 1, 2}
to split [0, 1, 2]
edge_weights [25.019992006393608, 23.0]
std: 1.0099960031968038, mean: 24.009996003196804
-- spliting {3, 4, 5, 6, 7, 8}
to split [3, 4, 5, 6, 7, 8]
edge_weights [26.1725046566048, 25.0, 24.0, 23.0, 22.0]
std: 1.463818630272171, mean: 24.03450093132096
region_indices [{0, 1, 2}, {3, 4, 5, 6, 7, 8}]
-- Generating text mask
Task state eaab3f4452a83debc847ca8cb754d493e94717ba9052c4b194b4e974b4b9c136-M-google-ENG-default-horizontal to mask_generation
100%|███████████████████████████████████████████████████████████████████████████████████| 9/9 [00:00<00:00, 287.99it/s]
-- Translating
Task state eaab3f4452a83debc847ca8cb754d493e94717ba9052c4b194b4e974b4b9c136-M-google-ENG-default-horizontal to translating
translator google
target_language ENG
-- Running inpainting
Task state eaab3f4452a83debc847ca8cb754d493e94717ba9052c4b194b4e974b4b9c136-M-google-ENG-default-horizontal to inpainting
Inpainting resolution: 800x1136
_GatheringFuture exception was never retrieved
future: <_GatheringFuture finished exception=error("OpenCV(4.5.5) :-1: error: (-5:Bad argument) in function 'cvtColor'\n> Overload resolution failed:\n> - src is not a numerical tuple\n> - Expected Ptr<cv::UMat> for argument 'src'\n")>
Traceback (most recent call last):
File "F:\xampp\htdocs\manga-rock\manga-image-translator\translate_demo.py", line 148, in infer
cv2.imwrite(f'result/{task_id}/inpainted.png', cv2.cvtColor(img_inpainted, cv2.COLOR_RGB2BGR))
cv2.error: OpenCV(4.5.5) :-1: error: (-5:Bad argument) in function 'cvtColor'
> Overload resolution failed:
> - src is not a numerical tuple
> - Expected Ptr<cv::UMat> for argument 'src'
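The "src is not a numerical tuple" overload failure is exactly what cv2.cvtColor produces when handed None instead of an ndarray, so `img_inpainted` was most likely never produced by the inpainting step. A defensive sketch, with a numpy channel reversal standing in for cv2.COLOR_RGB2BGR (names are illustrative, not the repo's):

```python
import numpy as np

def rgb_to_bgr_safe(img):
    # cv2.cvtColor rejects None with this overload error, so fail early
    # with a readable message; reversing the last axis is equivalent to
    # an RGB->BGR swap for an HxWx3 array.
    if img is None:
        raise ValueError("inpainting produced no image; nothing to convert")
    return np.ascontiguousarray(img[..., ::-1])

rgb = np.zeros((2, 2, 3), dtype=np.uint8)
rgb[..., 0] = 255           # pure red in RGB
bgr = rgb_to_bgr_safe(rgb)  # red ends up in channel index 2
```

The real fix is upstream (why did the inpainter return nothing?), but the guard turns a cryptic OpenCV error into a pointed one.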
I changed the default inpainter to LaMa and downloaded its model, but I got errors, so I was wondering whether it is actually implemented?
Traceback (most recent call last):
  File "G:\GitHub\image-translator\manga-image-translator\translate_demo.py", line 15, in <module>
    from text_mask import dispatch as dispatch_mask_refinement
  File "G:\GitHub\image-translator\manga-image-translator\text_mask\__init__.py", line 8, in <module>
    from .text_mask_utils import complete_mask_fill, filter_masks, complete_mask
  File "G:\GitHub\image-translator\manga-image-translator\text_mask\text_mask_utils.py", line 94, in <module>
    from pydensecrf.utils import compute_unary, unary_from_softmax
ModuleNotFoundError: No module named 'pydensecrf'
resolved
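pydensecrf ships C extensions that frequently fail to build from PyPI; the commonly reported fix is installing Cython first and then building from the upstream repo (`pip install git+https://github.com/lucasb-eyer/pydensecrf.git`). If the CRF-based mask refinement is optional in your pipeline, a guarded import lets everything else run without it; the pattern below is generic, not the repo's code.

```python
import importlib

def optional_import(name):
    # Import a module if available, else return None so callers can
    # skip the refinement step instead of crashing at import time.
    try:
        return importlib.import_module(name)
    except ImportError:
        return None

pydensecrf = optional_import("pydensecrf")  # None unless the wheel built
math_mod = optional_import("math")          # stdlib module, always present
```

Call sites would then branch on `pydensecrf is None` and fall back to the unrefined mask.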
Trying this tool on Ubuntu 20.04.
I got the latest version from git.
I'm translating from Japanese to English.
Error:
switch to google translator
-- Rendering translated text
すごいな…
It's amazing ...
137 609 37 180
Traceback (most recent call last):
File "translate_demo.py", line 354, in <module>
loop.run_until_complete(main(args.mode))
File "/usr/lib/python3.8/asyncio/base_events.py", line 616, in run_until_complete
return future.result()
File "translate_demo.py", line 270, in main
await infer(img, mode, '', alpha_ch = alpha_ch)
File "translate_demo.py", line 204, in infer
output = await dispatch_rendering(np.copy(img_inpainted), args.text_mag_ratio, translated_sentences, textlines, text_regions, render_text_direction_overwrite)
File "/home/maks/manga-image-translator/text_rendering/__init__.py", line 53, in dispatch
img_canvas = render(img_canvas, font_size, text_mag_ratio, trans_text, region, majority_dir, fg, bg, False)
File "/home/maks/manga-image-translator/text_rendering/__init__.py", line 83, in render
font_size_enlarged = findNextPowerOf2(font_size) * text_mag_ratio
File "/home/maks/manga-image-translator/utils.py", line 454, in findNextPowerOf2
n = n >> 1
TypeError: ufunc 'right_shift' not supported for the input types, and the inputs could not be safely coerced to any supported types according to the casting rule ''safe''
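The `right_shift` TypeError means `findNextPowerOf2` received a numpy floating-point scalar (a computed font size) rather than a Python int; `>>` is undefined for floats. Coercing with `int()` before shifting fixes it. Below is a sketch of a coerced replacement; I'm assuming the intended semantics are "smallest power of two >= n", and utils.py's loop-based version may differ in edge cases.

```python
def find_next_power_of_2(n) -> int:
    # Coercing to int first avoids the "right_shift not supported"
    # TypeError raised when a numpy float reaches a bit-shift.
    n = int(n)
    if n <= 1:
        return 1
    # (n - 1).bit_length() gives the exponent of the next power of two.
    return 1 << (n - 1).bit_length()
```

Alternatively, casting at the call site (`findNextPowerOf2(int(font_size))`) is a one-character-class fix that leaves utils.py untouched.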
If it's convenient, could you add me on WeChat? My ID is alexcoop
Is this caused by rendering?
Or can color images perhaps not be processed through translate_demo.py?
It works fine on https://touhou.ai/imgtrans/ ..
Single mode works fine, but I get this error with batch mode after the update:
F:\Offline\Visual Novel\[MTL] Machine Translation tool\manga-image-translator-main>python translate_demo.py --mode batch --image F:\lime_message\ --use-inpainting --verbose --translator=google --target-lang=ENG
Namespace(mode='batch', image='F:\\lime_message\\', image_dst='', size=1536, use_inpainting=True, use_cuda=False, force_horizontal=False, inpainting_size=2048, unclip_ratio=2.3, box_threshold=0.7, text_threshold=0.5, text_mag_ratio=1, translator='google', target_lang='ENG', use_ctd=False, verbose=True)
-- Loading models
Processing image in source directory
Processing F:\lime_message\desktop.ini -> F:\lime_message-translated\desktop.ini
Traceback (most recent call last):
File "F:\Offline\Visual Novel\[MTL] Machine Translation tool\manga-image-translator-main\translate_demo.py", line 273, in main
await infer(img, 'demo', '', dst_image_name = dst_filename, alpha_ch = alpha_ch)
UnboundLocalError: local variable 'img' referenced before assignment
Processing F:\lime_message\im28a.png -> F:\lime_message-translated\im28a.png
Traceback (most recent call last):
File "F:\Offline\Visual Novel\[MTL] Machine Translation tool\manga-image-translator-main\translate_demo.py", line 273, in main
await infer(img, 'demo', '', dst_image_name = dst_filename, alpha_ch = alpha_ch)
UnboundLocalError: local variable 'img' referenced before assignment
(the same Processing line and UnboundLocalError traceback repeat for every remaining image in the folder)
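One visible trigger is that desktop.ini is not an image, so whatever load step precedes `infer` leaves `img` unassigned; that the PNGs fail the same way suggests the batch loader itself leaves `img` unset on any load failure, and the deeper fix is to assign or re-raise inside that loop. Still, pre-filtering by extension removes the desktop.ini class of failures. The extension set below is illustrative:

```python
import os
import tempfile

IMAGE_EXTS = {".png", ".jpg", ".jpeg", ".webp", ".bmp"}

def iter_image_files(folder):
    # Yield only paths that look like images, so stray files such as
    # desktop.ini never reach the decoder stage at all.
    for name in sorted(os.listdir(folder)):
        if os.path.splitext(name)[1].lower() in IMAGE_EXTS:
            yield os.path.join(folder, name)

# Demo on a throwaway directory.
d = tempfile.mkdtemp()
open(os.path.join(d, "desktop.ini"), "w").close()
open(os.path.join(d, "page1.png"), "w").close()
found = [os.path.basename(p) for p in iter_image_files(d)]
```

Inside the batch loop, a `continue` when the decode returns None would then cover corrupt images as well.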
The translation quality is very good; I'd like to chat further. My WeChat is alexcoop
It would be helpful if you added your documentation in English too!
Hi, I'd forgotten about this project and have been making some PRs of my own.
There is still blurring of the translated text, which hurts the quality... For some images I tested, JPEG-like artifacts were visible even in a PNG file.
I believe it's because the translated text is first drawn at a limited font size (e.g. 32px when it should start from 50px), then relocated by cv2.warpAffine. The warpAffine function reduces quality by itself, but the size difference between source and destination is another big factor in the quality loss.
Why does the font size need to be restricted to a power of 2? Does the freetype package only work with fixed font sizes? Does the cache get too large without the restriction?
Meanwhile, my PR will fix the horizontal-mode rendering, which is not going to use freetype, so this might not matter much. I was just curious.
I'm using the Google translator.
The command line shows the text to be translated has already been captured.
I have a working proxy and can ping both www.google.com and translate.google.com.
The error is below. I want to confirm whether this is still a firewall issue, since the later part of the trace reports a connection error.
  File "translate_demo.py", line 317, in main
    await infer(img, 'demo', '', dst_image_name = dst_filename, alpha_ch = alpha_ch)
  File "translate_demo.py", line 160, in infer
    translated_sentences = await run_translation(args.translator, 'auto', args.target_lang, [r.text for r in text_regions])
  File "E:\Microsoft6477\manga-image-translator\translators\__init__.py", line 176, in dispatch
    result = await GOOGLE_CLIENT.translate(concat_texts, tgt_lang, src_lang, *args, **kwargs)
  File "E:\Microsoft6477\manga-image-translator\translators\google.py", line 194, in translate
    data, response = await self._translate(text, dest, src)
  File "E:\Microsoft6477\manga-image-translator\translators\google.py", line 120, in _translate
    r = await self.client.post(url, params=params, data=data)
  File "C:\Users\Micro\AppData\Local\Programs\Python\Python38\lib\site-packages\httpx\_client.py", line 1374, in post
    return await self.request(
  File "C:\Users\Micro\AppData\Local\Programs\Python\Python38\lib\site-packages\httpx\_client.py", line 1147, in request
    response = await self.send(
  File "C:\Users\Micro\AppData\Local\Programs\Python\Python38\lib\site-packages\httpx\_client.py", line 1168, in send
    response = await self.send_handling_redirects(
  File "C:\Users\Micro\AppData\Local\Programs\Python\Python38\lib\site-packages\httpx\_client.py", line 1195, in send_handling_redirects
    response = await self.send_handling_auth(
  File "C:\Users\Micro\AppData\Local\Programs\Python\Python38\lib\site-packages\httpx\_client.py", line 1232, in send_handling_auth
    response = await self.send_single_request(request, timeout)
  File "C:\Users\Micro\AppData\Local\Programs\Python\Python38\lib\site-packages\httpx\_client.py", line 1264, in send_single_request
    ) = await transport.request(
  File "C:\Users\Micro\AppData\Local\Programs\Python\Python38\lib\site-packages\httpcore\_async\http_proxy.py", line 110, in request
    return await self._tunnel_request(
  File "C:\Users\Micro\AppData\Local\Programs\Python\Python38\lib\site-packages\httpcore\_async\http_proxy.py", line 191, in _tunnel_request
    proxy_response = await proxy_connection.request(
  File "C:\Users\Micro\AppData\Local\Programs\Python\Python38\lib\site-packages\httpcore\_async\connection.py", line 65, in request
    self.socket = await self._open_socket(timeout)
  File "C:\Users\Micro\AppData\Local\Programs\Python\Python38\lib\site-packages\httpcore\_async\connection.py", line 85, in _open_socket
    return await self.backend.open_tcp_stream(
  File "C:\Users\Micro\AppData\Local\Programs\Python\Python38\lib\site-packages\httpcore\_backends\auto.py", line 38, in open_tcp_stream
    return await self.backend.open_tcp_stream(hostname, port, ssl_context, timeout)
  File "C:\Users\Micro\AppData\Local\Programs\Python\Python38\lib\site-packages\httpcore\_backends\asyncio.py", line 233, in open_tcp_stream
    return SocketStream(
  File "C:\Users\Micro\AppData\Local\Programs\Python\Python38\lib\contextlib.py", line 131, in __exit__
    self.gen.throw(type, value, traceback)
  File "C:\Users\Micro\AppData\Local\Programs\Python\Python38\lib\site-packages\httpcore\_exceptions.py", line 12, in map_exceptions
    raise to_exc(exc) from None
httpcore._exceptions.ConnectError
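The trace dies inside httpcore's HTTP-proxy tunnel setup, which suggests the request never left the machine: a ConnectError at `_tunnel_request` typically means the configured local proxy endpoint itself is unreachable (wrong port, proxy not running), not that Google blocked the request. Transient drops can be smoothed with a retry, though a persistently dead proxy obviously can't. A generic retry helper, not from the repo, demonstrated against a stand-in flaky call:

```python
import time

def retry(fn, attempts=3, delay=0.0, exceptions=(ConnectionError,)):
    # Retry a flaky call a few times before surfacing the last error.
    last = None
    for _ in range(attempts):
        try:
            return fn()
        except exceptions as exc:
            last = exc
            time.sleep(delay)
    raise last

calls = {"n": 0}
def flaky():
    # Stand-in for the translator's network call; fails twice, then works.
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("proxy tunnel failed")
    return "ok"

result = retry(flaky)
```

Checking that the HTTP_PROXY/HTTPS_PROXY environment variables point at a live port is the first thing to verify here.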
I cloned the repo and installed all the required modules, but I get an upload-failed error in web mode.
F:\Offline\Manga\[MTL] Machine Translation tool\manga-image-translator-main>python translate_demo.py --mode web --use-inpainting --verbose --translator=google --target-lang=ENG
Namespace(mode='web', image='', image_dst='', size=1536, use_inpainting=True, use_cuda=False, force_horizontal=False, inpainting_size=2048, unclip_ratio=2.3, box_threshold=0.7, text_threshold=0.5, text_mag_ratio=1, translator='google', target_lang='ENG', use_ctd=False, verbose=True)
-- Loading models
-- Running in web service mode
-- Waiting for translation tasks
fail to initialize deepl :
auth_key must not be empty
switch to google translator
Serving up app on 127.0.0.1:5003
Also, when I try to run batch translation, I get an access-denied error like this:
F:\Offline\Manga\[MTL] Machine Translation tool\manga-image-translator-main>python translate_demo.py --image <F:\Offline\Manga\Sousaku Kanojo\lime message> --use-inpainting --verbose --translator=google --target-lang=ENG
Access is denied.
Does anyone know the solution, or is something wrong with my command?
Is there an option for me to just add 5 or 10 to the font size in the code so that it's readable?
Example:
It decides that the font should be 32, so make it 42 instead.
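I can't point at an existing flag for this, but as a purely illustrative patch idea, a tiny helper applied wherever the renderer computes its size would do it. Every name here is hypothetical, not a function from translate_demo.py:

```python
def bump_font_size(computed: int, offset: int = 10, cap: int = 96) -> int:
    # Add a fixed offset to the renderer's computed font size, capped so
    # oversized text cannot overflow its bubble.
    return min(computed + offset, cap)
```

So a computed size of 32 becomes 42, matching the example above, while sizes near the cap are clamped.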
I'm sorry, but some fonts can't be found by name; I think the font-name list isn't sufficient. Sorry to bother you, but could you share a compressed archive of the font files?
I opened a new issue because I thought this was different from the existing dataset issue.
C:\Users\Smile\Desktop\manga-image-translator-beta-0.2.0>translate_demo.py --mode web --use-inpainting --use-cuda
Namespace(box_threshold=0.7, image='', inpainting_size=2048, mode='web', size=2048, text_threshold=0.5, unclip_ratio=2.2, use_cuda=True, use_inpainting=True)
-- Loading models
Traceback (most recent call last):
  File "C:\Users\Smile\Desktop\manga-image-translator-beta-0.2.0\translate_demo.py", line 784, in <module>
    asyncio.run(main(args.mode))
  File "C:\Users\Smile\AppData\Local\Programs\Python\Python38\lib\asyncio\runners.py", line 43, in run
    return loop.run_until_complete(main)
  File "C:\Users\Smile\AppData\Local\Programs\Python\Python38\lib\asyncio\base_events.py", line 616, in run_until_complete
    return future.result()
  File "C:\Users\Smile\Desktop\manga-image-translator-beta-0.2.0\translate_demo.py", line 749, in main
    dictionary, model_ocr = load_ocr_model()
  File "C:\Users\Smile\Desktop\manga-image-translator-beta-0.2.0\translate_demo.py", line 481, in load_ocr_model
    model.load_state_dict(torch.load('ocr.ckpt', map_location='cpu'), strict=False)
  File "C:\Users\Smile\AppData\Roaming\Python\Python38\site-packages\torch\nn\modules\module.py", line 1406, in load_state_dict
    raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
RuntimeError: Error(s) in loading state_dict for OCR:
size mismatch for backbone.ConvNet.conv0_1.weight: copying a param with shape torch.Size([40, 3, 3, 3]) from checkpoint, the shape in current model is torch.Size([32, 3, 3, 3]).
size mismatch for backbone.ConvNet.bn0_1.weight: copying a param with shape torch.Size([40]) from checkpoint, the shape in current model is torch.Size([32]).
size mismatch for backbone.ConvNet.bn0_1.bias: copying a param with shape torch.Size([40]) from checkpoint, the shape in current model is torch.Size([32]).
size mismatch for backbone.ConvNet.conv0_2.weight: copying a param with shape torch.Size([40, 40, 3, 3]) from checkpoint, the shape in current model is torch.Size([64, 32, 3, 3]).
size mismatch for backbone.ConvNet.layer1.0.conv1.weight: copying a param with shape torch.Size([80, 40, 3, 3]) from checkpoint, the shape in current model is torch.Size([128, 64, 3, 3]).
(…followed by analogous size-mismatch lines for every remaining conv, bn, and downsample parameter in backbone.ConvNet layers 1 through 4: checkpoint shapes with 40/80/160/320 channels where the current model expects 32/64/128/256/512.)
size mismatch for backbone.ConvNet.layer4.2.conv1.weight: copying a param with shape torch.Size([320, 320, 3, 3]) from checkpoint, the shape in current model is torch.Size([512, 512, 3, 3]).
size mismatch for backbone.ConvNet.layer4.2.bn1.weight: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for backbone.ConvNet.layer4.2.bn1.bias: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for backbone.ConvNet.layer4.2.bn1.running_mean: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for backbone.ConvNet.layer4.2.bn1.running_var: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for backbone.ConvNet.layer4.2.conv2.weight: copying a param with shape torch.Size([320, 320, 3, 3]) from checkpoint, the shape in current model is torch.Size([512, 512, 3, 3]).
size mismatch for backbone.ConvNet.layer4.2.bn2.weight: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for backbone.ConvNet.layer4.2.bn2.bias: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for backbone.ConvNet.layer4.2.bn2.running_mean: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for backbone.ConvNet.layer4.2.bn2.running_var: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for backbone.ConvNet.layer4.3.conv1.weight: copying a param with shape torch.Size([320, 320, 3, 3]) from checkpoint, the shape in current model is torch.Size([512, 512, 3, 3]).
size mismatch for backbone.ConvNet.layer4.3.bn1.weight: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for backbone.ConvNet.layer4.3.bn1.bias: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for backbone.ConvNet.layer4.3.bn1.running_mean: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for backbone.ConvNet.layer4.3.bn1.running_var: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for backbone.ConvNet.layer4.3.conv2.weight: copying a param with shape torch.Size([320, 320, 3, 3]) from checkpoint, the shape in current model is torch.Size([512, 512, 3, 3]).
size mismatch for backbone.ConvNet.layer4.3.bn2.weight: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for backbone.ConvNet.layer4.3.bn2.bias: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for backbone.ConvNet.layer4.3.bn2.running_mean: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for backbone.ConvNet.layer4.3.bn2.running_var: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for backbone.ConvNet.layer4.4.conv1.weight: copying a param with shape torch.Size([320, 320, 3, 3]) from checkpoint, the shape in current model is torch.Size([512, 512, 3, 3]).
size mismatch for backbone.ConvNet.layer4.4.bn1.weight: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for backbone.ConvNet.layer4.4.bn1.bias: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for backbone.ConvNet.layer4.4.bn1.running_mean: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for backbone.ConvNet.layer4.4.bn1.running_var: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for backbone.ConvNet.layer4.4.conv2.weight: copying a param with shape torch.Size([320, 320, 3, 3]) from checkpoint, the shape in current model is torch.Size([512, 512, 3, 3]).
size mismatch for backbone.ConvNet.layer4.4.bn2.weight: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for backbone.ConvNet.layer4.4.bn2.bias: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for backbone.ConvNet.layer4.4.bn2.running_mean: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for backbone.ConvNet.layer4.4.bn2.running_var: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for backbone.ConvNet.conv4_1.weight: copying a param with shape torch.Size([320, 320, 2, 2]) from checkpoint, the shape in current model is torch.Size([512, 512, 2, 2]).
size mismatch for backbone.ConvNet.bn4_1.weight: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for backbone.ConvNet.bn4_1.bias: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for backbone.ConvNet.bn4_1.running_mean: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for backbone.ConvNet.bn4_1.running_var: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for backbone.ConvNet.conv4_2.weight: copying a param with shape torch.Size([320, 320, 2, 2]) from checkpoint, the shape in current model is torch.Size([512, 512, 2, 2]).
size mismatch for backbone.ConvNet.bn4_2.weight: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for backbone.ConvNet.bn4_2.bias: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for backbone.ConvNet.bn4_2.running_mean: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for backbone.ConvNet.bn4_2.running_var: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for encoders.layers.0.self_attn.in_proj_weight: copying a param with shape torch.Size([960, 320]) from checkpoint, the shape in current model is torch.Size([1536, 512]).
size mismatch for encoders.layers.0.self_attn.in_proj_bias: copying a param with shape torch.Size([960]) from checkpoint, the shape in current model is torch.Size([1536]).
size mismatch for encoders.layers.0.self_attn.out_proj.weight: copying a param with shape torch.Size([320, 320]) from checkpoint, the shape in current model is torch.Size([512, 512]).
size mismatch for encoders.layers.0.self_attn.out_proj.bias: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for encoders.layers.0.linear1.weight: copying a param with shape torch.Size([2048, 320]) from checkpoint, the shape in current model is torch.Size([2048, 512]).
size mismatch for encoders.layers.0.linear2.weight: copying a param with shape torch.Size([320, 2048]) from checkpoint, the shape in current model is torch.Size([512, 2048]).
size mismatch for encoders.layers.0.linear2.bias: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for encoders.layers.0.norm1.weight: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for encoders.layers.0.norm1.bias: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for encoders.layers.0.norm2.weight: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for encoders.layers.0.norm2.bias: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for encoders.layers.1.self_attn.in_proj_weight: copying a param with shape torch.Size([960, 320]) from checkpoint, the shape in current model is torch.Size([1536, 512]).
size mismatch for encoders.layers.1.self_attn.in_proj_bias: copying a param with shape torch.Size([960]) from checkpoint, the shape in current model is torch.Size([1536]).
size mismatch for encoders.layers.1.self_attn.out_proj.weight: copying a param with shape torch.Size([320, 320]) from checkpoint, the shape in current model is torch.Size([512, 512]).
size mismatch for encoders.layers.1.self_attn.out_proj.bias: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for encoders.layers.1.linear1.weight: copying a param with shape torch.Size([2048, 320]) from checkpoint, the shape in current model is torch.Size([2048, 512]).
size mismatch for encoders.layers.1.linear2.weight: copying a param with shape torch.Size([320, 2048]) from checkpoint, the shape in current model is torch.Size([512, 2048]).
size mismatch for encoders.layers.1.linear2.bias: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for encoders.layers.1.norm1.weight: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for encoders.layers.1.norm1.bias: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for encoders.layers.1.norm2.weight: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for encoders.layers.1.norm2.bias: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for decoders.layers.0.self_attn.in_proj_weight: copying a param with shape torch.Size([960, 320]) from checkpoint, the shape in current model is torch.Size([1536, 512]).
size mismatch for decoders.layers.0.self_attn.in_proj_bias: copying a param with shape torch.Size([960]) from checkpoint, the shape in current model is torch.Size([1536]).
size mismatch for decoders.layers.0.self_attn.out_proj.weight: copying a param with shape torch.Size([320, 320]) from checkpoint, the shape in current model is torch.Size([512, 512]).
size mismatch for decoders.layers.0.self_attn.out_proj.bias: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for decoders.layers.0.multihead_attn.in_proj_weight: copying a param with shape torch.Size([960, 320]) from checkpoint, the shape in current model is torch.Size([1536, 512]).
size mismatch for decoders.layers.0.multihead_attn.in_proj_bias: copying a param with shape torch.Size([960]) from checkpoint, the shape in current model is torch.Size([1536]).
size mismatch for decoders.layers.0.multihead_attn.out_proj.weight: copying a param with shape torch.Size([320, 320]) from checkpoint, the shape in current model is torch.Size([512, 512]).
size mismatch for decoders.layers.0.multihead_attn.out_proj.bias: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for decoders.layers.0.linear1.weight: copying a param with shape torch.Size([2048, 320]) from checkpoint, the shape in current model is torch.Size([2048, 512]).
size mismatch for decoders.layers.0.linear2.weight: copying a param with shape torch.Size([320, 2048]) from checkpoint, the shape in current model is torch.Size([512, 2048]).
size mismatch for decoders.layers.0.linear2.bias: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for decoders.layers.0.norm1.weight: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for decoders.layers.0.norm1.bias: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for decoders.layers.0.norm2.weight: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for decoders.layers.0.norm2.bias: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for decoders.layers.0.norm3.weight: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for decoders.layers.0.norm3.bias: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for decoders.layers.1.self_attn.in_proj_weight: copying a param with shape torch.Size([960, 320]) from checkpoint, the shape in current model is torch.Size([1536, 512]).
size mismatch for decoders.layers.1.self_attn.in_proj_bias: copying a param with shape torch.Size([960]) from checkpoint, the shape in current model is torch.Size([1536]).
size mismatch for decoders.layers.1.self_attn.out_proj.weight: copying a param with shape torch.Size([320, 320]) from checkpoint, the shape in current model is torch.Size([512, 512]).
size mismatch for decoders.layers.1.self_attn.out_proj.bias: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for decoders.layers.1.multihead_attn.in_proj_weight: copying a param with shape torch.Size([960, 320]) from checkpoint, the shape in current model is torch.Size([1536, 512]).
size mismatch for decoders.layers.1.multihead_attn.in_proj_bias: copying a param with shape torch.Size([960]) from checkpoint, the shape in current model is torch.Size([1536]).
size mismatch for decoders.layers.1.multihead_attn.out_proj.weight: copying a param with shape torch.Size([320, 320]) from checkpoint, the shape in current model is torch.Size([512, 512]).
size mismatch for decoders.layers.1.multihead_attn.out_proj.bias: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for decoders.layers.1.linear1.weight: copying a param with shape torch.Size([2048, 320]) from checkpoint, the shape in current model is torch.Size([2048, 512]).
size mismatch for decoders.layers.1.linear2.weight: copying a param with shape torch.Size([320, 2048]) from checkpoint, the shape in current model is torch.Size([512, 2048]).
size mismatch for decoders.layers.1.linear2.bias: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for decoders.layers.1.norm1.weight: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for decoders.layers.1.norm1.bias: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for decoders.layers.1.norm2.weight: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for decoders.layers.1.norm2.bias: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for decoders.layers.1.norm3.weight: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for decoders.layers.1.norm3.bias: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for pe.pe: copying a param with shape torch.Size([768, 1, 320]) from checkpoint, the shape in current model is torch.Size([768, 1, 512]).
size mismatch for embd.weight: copying a param with shape torch.Size([19264, 320]) from checkpoint, the shape in current model is torch.Size([19264, 512]).
size mismatch for color_pred1.0.weight: copying a param with shape torch.Size([64, 320]) from checkpoint, the shape in current model is torch.Size([64, 512]).
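A log like this means the checkpoint was trained with model width 320 while the current code builds the model at width 512 — the checkpoint and the code version don't match, and re-downloading the model files that correspond to your checked-out version is the usual fix. To see which keys disagree before loading, here is a small sketch (`shape_mismatches` is an illustrative helper, not part of the repo):

```python
import torch.nn as nn

def shape_mismatches(model_state, ckpt_state):
    """List keys present in both state dicts whose tensor shapes differ."""
    return [k for k, v in ckpt_state.items()
            if k in model_state and tuple(model_state[k].shape) != tuple(v.shape)]

# Example: a 512-wide layer vs. a checkpoint saved from a 320-wide one.
big, small = nn.Linear(512, 512), nn.Linear(320, 320)
bad = shape_mismatches(big.state_dict(), small.state_dict())
# Both 'weight' and 'bias' differ in shape here.
```

Running this against the real model and checkpoint before `load_state_dict` pinpoints the divergence without triggering the long RuntimeError above.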
Hi, I want to translate an image (a screenshot of a game) from Korean to Chinese. Here is the original image URL: https://imgur.com/vBAjVVF and the corresponding result image URL: https://imgur.com/kBipYEC. The Korean words are not segmented well at all; some words are not detected and therefore not translated. The command-line arguments used: python translate_demo.py --verbose --translator=baidu --target-lang=CHS --image ./demo/test2.jpg.
If bad arguments are the cause, I'd be very happy to know the right ones, thanks!
In beta 0.2 and earlier it seemed to run on 4 GB of VRAM, but beta 0.3 needs 6 GB.
I just want to extract the Japanese dialogue from a manga into a txt file — how do I do that?
After running it, I get:
usage: translate_demo.py [-h] [--mode MODE] [--image IMAGE]
[--image-dst IMAGE_DST] [--size SIZE]
[--use-inpainting] [--use-cuda] [--force-horizontal]
[--inpainting-size INPAINTING_SIZE]
[--unclip-ratio UNCLIP_RATIO]
[--box-threshold BOX_THRESHOLD]
[--text-threshold TEXT_THRESHOLD]
[--text-mag-ratio TEXT_MAG_RATIO]
[--translator TRANSLATOR] [--target-lang TARGET_LANG]
[--verbose]
translate_demo.py: error: unrecognized arguments: [--verbose] [--translator=google] [--target-lang=CHS]
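The square brackets in the usage banner just mark arguments as optional; they are not part of the flags themselves. Typing them literally makes argparse treat each bracketed token as unrecognized, which is exactly the error shown. A small sketch of the behavior, using a hypothetical minimal parser mirroring a few of the script's flags:

```python
import argparse

# Brackets in a usage banner mean "optional"; do not type them.
parser = argparse.ArgumentParser()
parser.add_argument('--verbose', action='store_true')
parser.add_argument('--translator')
parser.add_argument('--target-lang')

# Correct: flags without brackets are recognized.
ok, unknown = parser.parse_known_args(
    ['--verbose', '--translator=google', '--target-lang=CHS'])

# Typed with literal brackets, argparse treats every token as unrecognized.
bad, unknown_bad = parser.parse_known_args(
    ['[--verbose]', '[--translator=google]', '[--target-lang=CHS]'])
```

So the fix is simply `python translate_demo.py --verbose --translator=google --target-lang=CHS --image yourimage.png`, with no brackets.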
Because reading text vertically is awkward, I modified text_render.py to always render text horizontally.
But since the original text area is too narrow, the result is still hard to read.
So I'd like to know whether (and how) the code can be modified to use a wider area during text rendering.
Sorry for my poor English, and thanks in advance.
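One straightforward approach is to widen each detected text box horizontally before layout, so horizontal text has room to wrap. A hedged sketch (`widen_box` is a hypothetical helper, not an existing function in text_render.py):

```python
def widen_box(x, y, w, h, img_w, img_h, ratio=1.5):
    """Widen a text box horizontally by `ratio`, centered on the original
    box and clamped to the image bounds."""
    new_w = int(w * ratio)
    new_x = max(0, x - (new_w - w) // 2)
    new_w = min(new_w, img_w - new_x)
    return new_x, y, new_w, h
```

Applying this to each region's box before computing line breaks would give the horizontal renderer more room; too large a ratio risks overlapping neighboring bubbles, so it's worth tuning per page.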
The trained model is producing incorrect colors; I will release it once I manage to fix this issue.
I've seen that comictextdetector.pt has been released in ONNX format.
Would it be possible to release the other models (OCR, detection, inpainting) in ONNX format as well?
Thanks
I'm not extremely tech-savvy, so I don't know how to set it up. Could you please post a more detailed description of how to do it?
I'm attaching the two pictures that I used when this error occurred, at the URLs below.
http://comiciwate.jp/comic/images/profile_title.gif
http://comiciwate.jp/img/common/bnr_iwateken.jpg