
zyddnys / manga-image-translator

Translate manga/image: one-click translation of text in all kinds of images. https://cotrans.touhou.ai/

Home Page: https://cotrans.touhou.ai/

License: GNU General Public License v3.0

Python 91.85% HTML 1.69% Jupyter Notebook 0.07% Dockerfile 0.05% Makefile 0.02% C++ 2.03% Cuda 4.28% Batchfile 0.01% Shell 0.01%
anime auto-translation chinese-translation deep-learning image-processing inpainting japanese-translations machine-translation manga neural-network ocr pytorch-implementation text-detection text-detection-recognition transformer

manga-image-translator's People

Contributors

1439707509, archeb, bigemperor26, boapps, dmmaze, dumoedss, earmour, eidenz, eltociear, jaric, jianchang512, justfrederik, kagenihisomi, kdrkdrkdr, kolyada, lawbyte, my12123, nikitalita, pidanshourouzhouxd, qiront, qwopqwop200, rspreet92, shirokurakana, singersbalm, thatdudo, thecolorman, tr7zw, vaibhavb02, xazzafrazz, zyddnys


manga-image-translator's Issues

how to use the inpainter?

I want to use the inpainter on its own to blank out all the speech bubbles in my manga so I can translate them manually.
The codebase is quite large and complex; could you guide me or tell me how to use just the inpainting?

Thanks in advance.
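In the meantime, a generic sketch of the idea with plain OpenCV (this is not the repo's inpainting model; the mask is assumed to come from the text-detection step):

import cv2

img = cv2.imread('page.png')                               # hypothetical input page
mask = cv2.imread('text_mask.png', cv2.IMREAD_GRAYSCALE)   # white where text was detected
blank = cv2.inpaint(img, mask, 3, cv2.INPAINT_TELEA)       # fill masked regions from surroundings
cv2.imwrite('page_blank.png', blank)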

training script and data for model compression

Hello there, I'd like to do some model compression, such as quantization, to make the models smaller and faster for CPU applications. Would you be able to release the training scripts and datasets so I can contribute on that end? Thanks!
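For context, a minimal dynamic-quantization sketch of the kind of change I mean, assuming a plain PyTorch nn.Module (the model below is a stand-in, not this repo's OCR or detection network):

import torch
from torch import nn

model = nn.Sequential(nn.Linear(256, 512), nn.ReLU(), nn.Linear(512, 256))  # stand-in module
qmodel = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)
# Linear weights are stored as int8 and dequantized on the fly; activations stay
# float, which is why dynamic quantization mainly targets CPU inference.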

Translator auto-switching problem

Hi, I ran a command-line translation of a whole folder on Windows and noticed a problem:
Inpainting resolution: 1360x1920
-- Translating
oh no.
fail to initialize deepl :
auth_key must not be empty
switch to google translator
-- Rendering translated text
It seems to have treated my selected translator as deepl, while what I actually ran was:
python translate_demo.py --verbose --mode batch --use-inpainting --use-cuda --translator=baidu --target-lang=CHS --image D:/translate/1234
Clearly I selected Baidu translation, yet it switched over to the Google translator. How can I fix this? Thanks!

TypeError?

(screenshot attached)

This problem was solved with '--size 1024'.
The recognition rate is low, and the same error occurs with some other images.
Is there a way to avoid this error without the "--size 1024" option?
I attached the image I used.

proseka

Add sugoi translator offline support please!

Hi,

I can't create an API key with DeepL because they won't accept my credit card.
Sugoi Translator's translation quality is currently on par with DeepL.
I hope you can add offline Sugoi Translator integration.

Thanks for your tool, great work!
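For what it's worth, the Sugoi offline toolkit exposes a small local HTTP server; a sketch of calling it (the port and payload shape below are assumptions based on the toolkit's typical setup, so verify them against your install):

import requests

resp = requests.post(
    "http://127.0.0.1:14366",  # assumed default port of the Sugoi offline server
    json={"content": "すごいな…", "message": "translate sentences"},  # assumed payload format
)
print(resp.json())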

Issue running, 'pydensecrf'

I am trying to use this and keep getting this error: ModuleNotFoundError: No module named 'pydensecrf'. I have tried installing Cython and the package itself, but nothing seems to work.
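For reference, the PyPI release of pydensecrf is often stale; the commonly suggested install (an assumption about your environment, not official guidance from this repo) is building from the Git source after installing Cython:

pip install cython
pip install git+https://github.com/lucasb-eyer/pydensecrf.git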

Asking for suggestions on improving size-chart images

Hello.

I tried translating a few size-chart images (Simplified to Traditional Chinese), but the results still aren't good enough; in particular, text inside the tables easily gets laid out wrong. Do you have any suggestions?

If the fix is to strengthen the model's recognition ability, could you point me to documentation? I can collect a training set, train the model, and contribute the results back to this project.

Thanks

Original image:

Processed result:

Error when running

Thanks for your great support.

Traceback (most recent call last):
File "/Users/tony/IPANDALAB Dropbox/Oh Tony/project/Python/manga-image-translator/translate_demo.py", line 647, in
main()
File "/Users/tony/IPANDALAB Dropbox/Oh Tony/project/Python/manga-image-translator/translate_demo.py", line 525, in main
boxes, scores = det({'shape':[(img_resized.shape[0], img_resized.shape[1])]}, db)
File "/Users/tony/IPANDALAB Dropbox/Oh Tony/project/Python/manga-image-translator/dbnet_utils.py", line 38, in call
boxes, scores = self.boxes_from_bitmap(pred[batch_index], segmentation[batch_index], width, height)
File "/Users/tony/IPANDALAB Dropbox/Oh Tony/project/Python/manga-image-translator/dbnet_utils.py", line 105, in boxes_from_bitmap
contours, _ = cv2.findContours((bitmap * 255).astype(np.uint8), cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
ValueError: too many values to unpack (expected 2)

I've run into this error.

Can you explain what I can do?

Thanks
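The unpack failure usually means an OpenCV 3.x is installed, where findContours returns three values instead of the two that OpenCV 4.x returns; a version-agnostic sketch:

import cv2
import numpy as np

bitmap_u8 = (bitmap * 255).astype(np.uint8)  # 'bitmap' as in dbnet_utils.py
ret = cv2.findContours(bitmap_u8, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
contours = ret[-2]  # the second-to-last element is the contour list in both 3.x and 4.x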

Wrong clean region

First, thanks for the English optimizations; I hope this tool keeps getting better.
Anyway, I found that the tool redrew the wrong region when cleaning the text:
(screenshot attached)
This happened to me when I used --size 3000

freetype.ft_errors.FT_Exception: FT_Exception: (cannot open resource)

When I try to have something translated, I always get the error stated in the issue title. I simply took the commands from the usage section and adjusted the path to translate_demo.py and, of course, the images. I get the error in both batch mode and single-image mode, so it's not limited to one mode. Here is the full error message I get:
Traceback (most recent call last):
File "D:\manga-image-translator-main\translate_demo.py", line 334, in
loop.run_until_complete(main(args.mode))
File "C:\Users\Strah\AppData\Local\Programs\Python\Python310\lib\asyncio\base_events.py", line 646, in run_until_complete
return future.result()
File "D:\manga-image-translator-main\translate_demo.py", line 237, in main
text_render.prepare_renderer()
File "D:\manga-image-translator-main\text_rendering\text_render.py", line 481, in prepare_renderer
CACHED_FONT_FACE.append(freetype.Face(font_filename))
File "C:\Users\Strah\AppData\Local\Programs\Python\Python310\lib\site-packages\freetype_init
.py", line 1101, in init
raise FT_Exception(error)
freetype.ft_errors.FT_Exception: FT_Exception: (cannot open resource)_
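"cannot open resource" is freetype failing to open the font file itself; a quick check, assuming freetype-py is installed (the path below is a hypothetical font, so substitute whatever prepare_renderer tries to load):

import freetype

face = freetype.Face('fonts/some-font.ttf')  # hypothetical path; the same FT_Exception
print(face.family_name)                      # here means the file is missing or unreadable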

src is not a numerical tuple

It's giving an error; here is the cmd log:

\manga-image-translator>python translate_demo.py --mode web --use-inpainting --verbose --translator=google --target-lang=ENG
Namespace(mode='web', image='', image_dst='', size=1536, use_inpainting=True, use_cuda=False, force_horizontal=False, inpainting_size=2048, unclip_ratio=2.3, box_threshold=0.7, text_threshold=0.5, text_mag_ratio=1, translator='google', target_lang='ENG', use_ctd=False, verbose=True)
 -- Loading models
 -- Running in web service mode
 -- Waiting for translation tasks
fail to initialize deepl :
auth_key must not be empty
switch to google translator
Serving up app on 127.0.0.1:5003
 -- Processing task eaab3f4452a83debc847ca8cb754d493e94717ba9052c4b194b4e974b4b9c136-M-google-ENG-default-horizontal
 -- Detection resolution 1536
 -- Detector using default
 -- Render text direction is h
Task state eaab3f4452a83debc847ca8cb754d493e94717ba9052c4b194b4e974b4b9c136-M-google-ENG-default-horizontal to detection
 -- Running text detection
Detection resolution: 1280x1536
Task state eaab3f4452a83debc847ca8cb754d493e94717ba9052c4b194b4e974b4b9c136-M-google-ENG-default-horizontal to ocr
 -- Running OCR
0.8592585325241089 石伸,来自 fg: (38, 38, 41) bg: (38, 38, 41)
0.9998165369033813 莫非这留痕 fg: (46, 49, 55) bg: (45, 50, 55)
0.9551085233688354 远古的星空? fg: (66, 61, 68) bg: (67, 65, 69)
0.9972514510154724 邀游,可这黑色 fg: (68, 64, 73) bg: (68, 68, 76)
0.9320383667945862 么?即便是武帝 fg: (79, 73, 80) bg: (79, 73, 80)
0.9866024255752563 强者,也不敢说 fg: (67, 60, 69) bg: (66, 60, 69)
0.9399910569190979 石碑,竟然是来 fg: (63, 59, 66) bg: (63, 59, 66)
0.9985544681549072 宇宙星空中有什 fg: (78, 74, 81) bg: (77, 73, 78)
0.9888063073158264 自那么一个地方。 fg: (79, 72, 83) bg: (79, 72, 81)
 -- spliting {0, 1, 2}
to split [0, 1, 2]
edge_weights [25.019992006393608, 23.0]
std: 1.0099960031968038, mean: 24.009996003196804
 -- spliting {3, 4, 5, 6, 7, 8}
to split [3, 4, 5, 6, 7, 8]
edge_weights [26.1725046566048, 25.0, 24.0, 23.0, 22.0]
std: 1.463818630272171, mean: 24.03450093132096
region_indices [{0, 1, 2}, {3, 4, 5, 6, 7, 8}]
 -- Generating text mask
Task state eaab3f4452a83debc847ca8cb754d493e94717ba9052c4b194b4e974b4b9c136-M-google-ENG-default-horizontal to mask_generation
100%|███████████████████████████████████████████████████████████████████████████████████| 9/9 [00:00<00:00, 287.99it/s]
 -- Translating
Task state eaab3f4452a83debc847ca8cb754d493e94717ba9052c4b194b4e974b4b9c136-M-google-ENG-default-horizontal to translating
translator google
target_language ENG
 -- Running inpainting
Task state eaab3f4452a83debc847ca8cb754d493e94717ba9052c4b194b4e974b4b9c136-M-google-ENG-default-horizontal to inpainting
Inpainting resolution: 800x1136
_GatheringFuture exception was never retrieved
future: <_GatheringFuture finished exception=error("OpenCV(4.5.5) :-1: error: (-5:Bad argument) in function 'cvtColor'\n> Overload resolution failed:\n>  - src is not a numerical tuple\n>  - Expected Ptr<cv::UMat> for argument 'src'\n")>
Traceback (most recent call last):
  File "F:\xampp\htdocs\manga-rock\manga-image-translator\translate_demo.py", line 148, in infer
    cv2.imwrite(f'result/{task_id}/inpainted.png', cv2.cvtColor(img_inpainted, cv2.COLOR_RGB2BGR))
cv2.error: OpenCV(4.5.5) :-1: error: (-5:Bad argument) in function 'cvtColor'
> Overload resolution failed:
>  - src is not a numerical tuple
>  - Expected Ptr<cv::UMat> for argument 'src'
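The cvtColor overload failure suggests img_inpainted was not an ndarray when the debug dump ran; a defensive sketch around the line the traceback points at (my guard, not repo code):

import numpy as np

if isinstance(img_inpainted, np.ndarray):
    cv2.imwrite(f'result/{task_id}/inpainted.png', cv2.cvtColor(img_inpainted, cv2.COLOR_RGB2BGR))
else:
    print('inpainting returned no image; skipping debug dump')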


What's going on?

Traceback (most recent call last):
File "G:\GitHub\image-translator\manga-image-translator\translate_demo.py", line 15, in <module>
from text_mask import dispatch as dispatch_mask_refinement
File "G:\GitHub\image-translator\manga-image-translator\text_mask\__init__.py", line 8, in <module>
from .text_mask_utils import complete_mask_fill, filter_masks, complete_mask
File "G:\GitHub\image-translator\manga-image-translator\text_mask\text_mask_utils.py", line 94, in <module>
from pydensecrf.utils import compute_unary, unary_from_softmax
ModuleNotFoundError: No module named 'pydensecrf'

ufunc 'right_shift' not supported for the input types

Trying this tool on Ubuntu 20.04 with the latest version from Git.
I'm translating from Japanese to English.

Error:


switch to google translator
 -- Rendering translated text
すごいな…
It's amazing ...
137 609 37 180
Traceback (most recent call last):
  File "translate_demo.py", line 354, in <module>
    loop.run_until_complete(main(args.mode))
  File "/usr/lib/python3.8/asyncio/base_events.py", line 616, in run_until_complete
    return future.result()
  File "translate_demo.py", line 270, in main
    await infer(img, mode, '', alpha_ch = alpha_ch)
  File "translate_demo.py", line 204, in infer
    output = await dispatch_rendering(np.copy(img_inpainted), args.text_mag_ratio, translated_sentences, textlines, text_regions, render_text_direction_overwrite)
  File "/home/maks/manga-image-translator/text_rendering/__init__.py", line 53, in dispatch
    img_canvas = render(img_canvas, font_size, text_mag_ratio, trans_text, region, majority_dir, fg, bg, False)
  File "/home/maks/manga-image-translator/text_rendering/__init__.py", line 83, in render
    font_size_enlarged = findNextPowerOf2(font_size) * text_mag_ratio
  File "/home/maks/manga-image-translator/utils.py", line 454, in findNextPowerOf2
    n = n >> 1
TypeError: ufunc 'right_shift' not supported for the input types, and the inputs could not be safely coerced to any supported types according to the casting rule ''safe''
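The right_shift failure means n reached findNextPowerOf2 as a NumPy float, which does not support >>; a minimal sketch of the fix (my own reimplementation, not the repo's utils.py, and it assumes truncating the size to an int is acceptable):

def findNextPowerOf2(n):
    n = int(n)  # NumPy floats do not support >>; cast before shifting
    k = 1
    while k < n:
        k <<= 1
    return k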

Batch Mode Error after latest update

Single mode works fine, but I get this error in batch mode after the update:

F:\Offline\Visual Novel\[MTL] Machine Translation tool\manga-image-translator-main>python translate_demo.py --mode batch --image F:\lime_message\ --use-inpainting --verbose --translator=google --target-lang=ENG

Namespace(mode='batch', image='F:\\lime_message\\', image_dst='', size=1536, use_inpainting=True, use_cuda=False, force_horizontal=False, inpainting_size=2048, unclip_ratio=2.3, box_threshold=0.7, text_threshold=0.5, text_mag_ratio=1, translator='google', target_lang='ENG', use_ctd=False, verbose=True)

 -- Loading models
Processing image in source directory
Processing F:\lime_message\desktop.ini -> F:\lime_message-translated\desktop.ini

Traceback (most recent call last):
  File "F:\Offline\Visual Novel\[MTL] Machine Translation tool\manga-image-translator-main\translate_demo.py", line 273, in main
    await infer(img, 'demo', '', dst_image_name = dst_filename, alpha_ch = alpha_ch)
UnboundLocalError: local variable 'img' referenced before assignment
(The same UnboundLocalError traceback repeats for every remaining file in the folder, from im28a.png through lime02_6_01.png, where the log cuts off.)

English

It would be helpful if you added your documentation in English too!

Is there any reason why the font size should be restricted to a power of 2?

Hi, I'd forgotten about this project and have been making some PRs of my own.
There is still blurring of the translated text, which hurts the quality; for some images I tested, JPEG-like artifacts were visible even in a PNG output.
I believe it's because the translated text is first drawn at a limited font size (e.g. 32px when it should start from 50px), then relocated by cv2.warpAffine. The warpAffine call reduces quality by itself, but the size difference between source and destination is another big factor in the loss.
Why does the font size need to be restricted to a power of 2? Does the freetype package only work with fixed font sizes? Does the cache get too large without the restriction?

Meanwhile, my PR will fix the horizontal-mode render, which is not going to use freetype, so this might not matter that much. I was just curious.
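For illustration, a sketch of the gap that power-of-two rounding introduces (my own reconstruction of the described behaviour, not the repo's utils.py):

def next_power_of_2(n: int) -> int:
    k = 1
    while k < n:
        k <<= 1
    return k

for size in (20, 33, 50):
    print(size, '->', next_power_of_2(size))
# 20 -> 32, 33 -> 64, 50 -> 64: the size the glyphs are rasterized at can sit
# nearly 2x away from the requested size, and warpAffine then has to bridge
# that gap by rescaling.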

Error when running locally; want to confirm whether it's the firewall

Using Google Translate.
The command-line output shows the text to be translated has already been captured.
I'm already using a proxy and can ping www.google.com and translate.google.com.

The error is below. I want to confirm whether this is still the firewall, because the end of the trace reports a connection error.

File "translate_demo.py", line 317, in main
await infer(img, 'demo', '', dst_image_name = dst_filename, alpha_ch = alpha_ch)
File "translate_demo.py", line 160, in infer
translated_sentences = await run_translation(args.translator, 'auto', args.target_lang, [r.text for r in text_regions])
File "E:\Microsoft6477\manga-image-translator\translators_init_.py", line 176, in dispatch
result = await GOOGLE_CLIENT.translate(concat_texts, tgt_lang, src_lang, *args, **kwargs)
File "E:\Microsoft6477\manga-image-translator\translators\google.py", line 194, in translate
data, response = await self._translate(text, dest, src)
File "E:\Microsoft6477\manga-image-translator\translators\google.py", line 120, in _translate
r = await self.client.post(url, params=params, data=data)
File "C:\Users\Micro\AppData\Local\Programs\Python\Python38\lib\site-packages\httpx_client.py", line 1374, in post
return await self.request(
File "C:\Users\Micro\AppData\Local\Programs\Python\Python38\lib\site-packages\httpx_client.py", line 1147, in request
response = await self.send(
File "C:\Users\Micro\AppData\Local\Programs\Python\Python38\lib\site-packages\httpx_client.py", line 1168, in send
response = await self.send_handling_redirects(
File "C:\Users\Micro\AppData\Local\Programs\Python\Python38\lib\site-packages\httpx_client.py", line 1195, in send_handling_redirects
response = await self.send_handling_auth(
File "C:\Users\Micro\AppData\Local\Programs\Python\Python38\lib\site-packages\httpx_client.py", line 1232, in send_handling_auth
response = await self.send_single_request(request, timeout)
File "C:\Users\Micro\AppData\Local\Programs\Python\Python38\lib\site-packages\httpx_client.py", line 1264, in send_single_request
) = await transport.request(
File "C:\Users\Micro\AppData\Local\Programs\Python\Python38\lib\site-packages\httpcore_async\http_proxy.py", line 110, in request
return await self._tunnel_request(
File "C:\Users\Micro\AppData\Local\Programs\Python\Python38\lib\site-packages\httpcore_async\http_proxy.py", line 191, in _tunnel_request
proxy_response = await proxy_connection.request(
File "C:\Users\Micro\AppData\Local\Programs\Python\Python38\lib\site-packages\httpcore_async\connection.py", line 65, in request
self.socket = await self._open_socket(timeout)
File "C:\Users\Micro\AppData\Local\Programs\Python\Python38\lib\site-packages\httpcore_async\connection.py", line 85, in _open_socket
return await self.backend.open_tcp_stream(
File "C:\Users\Micro\AppData\Local\Programs\Python\Python38\lib\site-packages\httpcore_backends\auto.py", line 38, in open_tcp_stream
return await self.backend.open_tcp_stream(hostname, port, ssl_context, timeout)
File "C:\Users\Micro\AppData\Local\Programs\Python\Python38\lib\site-packages\httpcore_backends\asyncio.py", line 233, in open_tcp_stream
return SocketStream(
File "C:\Users\Micro\AppData\Local\Programs\Python\Python38\lib\contextlib.py", line 131, in exit
self.gen.throw(type, value, traceback)
File "C:\Users\Micro\AppData\Local\Programs\Python\Python38\lib\site-packages\httpcore_exceptions.py", line 12, in map_exceptions
raise to_exc(exc) from None
httpcore._exceptions.ConnectError
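The trace dies inside httpcore's proxy tunnel before any TLS or HTTP happens, which points at the proxy hop rather than Google; a quick probe through the same stack (assuming httpx is installed and your proxy is exported via HTTP_PROXY/HTTPS_PROXY, which httpx honours by default):

import asyncio
import httpx

async def probe():
    async with httpx.AsyncClient() as client:  # picks up HTTP_PROXY/HTTPS_PROXY from the environment
        r = await client.get("https://translate.google.com")
        print(r.status_code)

asyncio.run(probe())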

Upload Failed & Access is denies error

I cloned the repo and installed all the required modules, but I get an upload failed error in web mode.

F:\Offline\Manga[MTL] Machine Translation tool\manga-image-translator-main>python translate_demo.py --mode web --use-inpainting --verbose --translator=google --target-lang=ENG
Namespace(mode='web', image='', image_dst='', size=1536, use_inpainting=True, use_cuda=False, force_horizontal=False, inpainting_size=2048, unclip_ratio=2.3, box_threshold=0.7, text_threshold=0.5, text_mag_ratio=1, translator='google', target_lang='ENG', use_ctd=False, verbose=True)
-- Loading models
-- Running in web service mode
-- Waiting for translation tasks
fail to initialize deepl :
auth_key must not be empty
switch to google translator
Serving up app on 127.0.0.1:5003

(screenshot attached)

Also, when I try to run batch translation, I get an access denied error like this:
F:\Offline\Manga[MTL] Machine Translation tool\manga-image-translator-main>python translate_demo.py --image <F:\Offline\Manga\Sousaku Kanojo\lime message> --use-inpainting --verbose --translator=google --target-lang=ENG
Access is denied.

Does anyone know the solution, or is something wrong with my command?
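For reference, cmd.exe treats < and > as redirection operators, so the literal angle brackets around the path are likely what produces "Access is denied"; a sketch of the command with the brackets removed, the path quoted, and batch mode made explicit (an assumption about the intent, since a folder is being passed):

python translate_demo.py --mode batch --image "F:\Offline\Manga\Sousaku Kanojo\lime message" --use-inpainting --verbose --translator=google --target-lang=ENG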

Font is too small everywhere

Is there an option for me to just add 5 or 10 to the font size in the code so that it's readable?
Example:
It decides the font should be 32, so make it 42 instead.
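A minimal sketch of where such a bump could go, based on the dispatch call visible in the ufunc traceback above (text_rendering/__init__.py; the +10 is a hypothetical offset, not a repo option):

font_size = font_size + 10  # bump every computed size; assumes the region is large enough
img_canvas = render(img_canvas, font_size, text_mag_ratio, trans_text, region, majority_dir, fg, bg, False)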

Can you share the compressed font files you have gathered?

Some fonts can't be found by name; I think the font name list isn't sufficient. Sorry to bother you, but could you share the compressed font files?
I opened a new issue because I thought this was different from the existing dataset issue.

Error with 2.0

C:\Users\Smile\Desktop\manga-image-translator-beta-0.2.0>translate_demo.py --mode web --use-inpainting --use-cuda
Namespace(box_threshold=0.7, image='', inpainting_size=2048, mode='web', size=2048, text_threshold=0.5, unclip_ratio=2.2, use_cuda=True, use_inpainting=True)
-- Loading models
Traceback (most recent call last):
File "C:\Users\Smile\Desktop\manga-image-translator-beta-0.2.0\translate_demo.py", line 784, in
asyncio.run(main(args.mode))
File "C:\Users\Smile\AppData\Local\Programs\Python\Python38\lib\asyncio\runners.py", line 43, in run
return loop.run_until_complete(main)
File "C:\Users\Smile\AppData\Local\Programs\Python\Python38\lib\asyncio\base_events.py", line 616, in run_until_complete
return future.result()
File "C:\Users\Smile\Desktop\manga-image-translator-beta-0.2.0\translate_demo.py", line 749, in main
dictionary, model_ocr = load_ocr_model()
File "C:\Users\Smile\Desktop\manga-image-translator-beta-0.2.0\translate_demo.py", line 481, in load_ocr_model
model.load_state_dict(torch.load('ocr.ckpt', map_location='cpu'), strict=False)
File "C:\Users\Smile\AppData\Roaming\Python\Python38\site-packages\torch\nn\modules\module.py", line 1406, in load_state_dict
raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
RuntimeError: Error(s) in loading state_dict for OCR:
size mismatch for backbone.ConvNet.conv0_1.weight: copying a param with shape torch.Size([40, 3, 3, 3]) from checkpoint, the shape in current model is torch.Size([32, 3, 3, 3]).
size mismatch for backbone.ConvNet.bn0_1.weight: copying a param with shape torch.Size([40]) from checkpoint, the shape in current model is torch.Size([32]).
size mismatch for backbone.ConvNet.bn0_1.bias: copying a param with shape torch.Size([40]) from checkpoint, the shape in current model is torch.Size([32]).
size mismatch for backbone.ConvNet.bn0_1.running_mean: copying a param with shape torch.Size([40]) from checkpoint, the shape in current model is torch.Size([32]).
size mismatch for backbone.ConvNet.bn0_1.running_var: copying a param with shape torch.Size([40]) from checkpoint, the shape in current model is torch.Size([32]).
size mismatch for backbone.ConvNet.conv0_2.weight: copying a param with shape torch.Size([40, 40, 3, 3]) from checkpoint, the shape in current model is torch.Size([64, 32, 3, 3]).
(…the same size-mismatch line repeats for every remaining backbone layer: checkpoint shapes of 40/80/160/320 channels against model shapes of 32/64/128/256/512. The log is cut off here.)
size mismatch for backbone.ConvNet.layer3.4.bn1.running_var: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for backbone.ConvNet.layer3.4.conv2.weight: copying a param with shape torch.Size([320, 320, 3, 3]) from checkpoint, the shape in current model is torch.Size([512, 512, 3, 3]).
size mismatch for backbone.ConvNet.layer3.4.bn2.weight: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for backbone.ConvNet.layer3.4.bn2.bias: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for backbone.ConvNet.layer3.4.bn2.running_mean: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for backbone.ConvNet.layer3.4.bn2.running_var: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for backbone.ConvNet.layer3.5.conv1.weight: copying a param with shape torch.Size([320, 320, 3, 3]) from checkpoint, the shape in current model is torch.Size([512, 512, 3, 3]).
size mismatch for backbone.ConvNet.layer3.5.bn1.weight: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for backbone.ConvNet.layer3.5.bn1.bias: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for backbone.ConvNet.layer3.5.bn1.running_mean: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for backbone.ConvNet.layer3.5.bn1.running_var: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for backbone.ConvNet.layer3.5.conv2.weight: copying a param with shape torch.Size([320, 320, 3, 3]) from checkpoint, the shape in current model is torch.Size([512, 512, 3, 3]).
size mismatch for backbone.ConvNet.layer3.5.bn2.weight: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for backbone.ConvNet.layer3.5.bn2.bias: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for backbone.ConvNet.layer3.5.bn2.running_mean: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for backbone.ConvNet.layer3.5.bn2.running_var: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for backbone.ConvNet.layer3.6.conv1.weight: copying a param with shape torch.Size([320, 320, 3, 3]) from checkpoint, the shape in current model is torch.Size([512, 512, 3, 3]).
size mismatch for backbone.ConvNet.layer3.6.bn1.weight: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for backbone.ConvNet.layer3.6.bn1.bias: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for backbone.ConvNet.layer3.6.bn1.running_mean: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for backbone.ConvNet.layer3.6.bn1.running_var: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for backbone.ConvNet.layer3.6.conv2.weight: copying a param with shape torch.Size([320, 320, 3, 3]) from checkpoint, the shape in current model is torch.Size([512, 512, 3, 3]).
size mismatch for backbone.ConvNet.layer3.6.bn2.weight: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for backbone.ConvNet.layer3.6.bn2.bias: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for backbone.ConvNet.layer3.6.bn2.running_mean: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for backbone.ConvNet.layer3.6.bn2.running_var: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for backbone.ConvNet.conv3.weight: copying a param with shape torch.Size([320, 320, 3, 3]) from checkpoint, the shape in current model is torch.Size([512, 512, 3, 3]).
size mismatch for backbone.ConvNet.bn3.weight: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for backbone.ConvNet.bn3.bias: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for backbone.ConvNet.bn3.running_mean: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for backbone.ConvNet.bn3.running_var: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for backbone.ConvNet.layer4.0.conv1.weight: copying a param with shape torch.Size([320, 320, 3, 3]) from checkpoint, the shape in current model is torch.Size([512, 512, 3, 3]).
size mismatch for backbone.ConvNet.layer4.0.bn1.weight: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for backbone.ConvNet.layer4.0.bn1.bias: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for backbone.ConvNet.layer4.0.bn1.running_mean: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for backbone.ConvNet.layer4.0.bn1.running_var: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for backbone.ConvNet.layer4.0.conv2.weight: copying a param with shape torch.Size([320, 320, 3, 3]) from checkpoint, the shape in current model is torch.Size([512, 512, 3, 3]).
size mismatch for backbone.ConvNet.layer4.0.bn2.weight: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for backbone.ConvNet.layer4.0.bn2.bias: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for backbone.ConvNet.layer4.0.bn2.running_mean: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for backbone.ConvNet.layer4.0.bn2.running_var: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for backbone.ConvNet.layer4.1.conv1.weight: copying a param with shape torch.Size([320, 320, 3, 3]) from checkpoint, the shape in current model is torch.Size([512, 512, 3, 3]).
size mismatch for backbone.ConvNet.layer4.1.bn1.weight: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for backbone.ConvNet.layer4.1.bn1.bias: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for backbone.ConvNet.layer4.1.bn1.running_mean: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for backbone.ConvNet.layer4.1.bn1.running_var: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for backbone.ConvNet.layer4.1.conv2.weight: copying a param with shape torch.Size([320, 320, 3, 3]) from checkpoint, the shape in current model is torch.Size([512, 512, 3, 3]).
size mismatch for backbone.ConvNet.layer4.1.bn2.weight: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for backbone.ConvNet.layer4.1.bn2.bias: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for backbone.ConvNet.layer4.1.bn2.running_mean: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for backbone.ConvNet.layer4.1.bn2.running_var: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for backbone.ConvNet.layer4.2.conv1.weight: copying a param with shape torch.Size([320, 320, 3, 3]) from checkpoint, the shape in current model is torch.Size([512, 512, 3, 3]).
size mismatch for backbone.ConvNet.layer4.2.bn1.weight: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for backbone.ConvNet.layer4.2.bn1.bias: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for backbone.ConvNet.layer4.2.bn1.running_mean: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for backbone.ConvNet.layer4.2.bn1.running_var: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for backbone.ConvNet.layer4.2.conv2.weight: copying a param with shape torch.Size([320, 320, 3, 3]) from checkpoint, the shape in current model is torch.Size([512, 512, 3, 3]).
size mismatch for backbone.ConvNet.layer4.2.bn2.weight: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for backbone.ConvNet.layer4.2.bn2.bias: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for backbone.ConvNet.layer4.2.bn2.running_mean: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for backbone.ConvNet.layer4.2.bn2.running_var: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for backbone.ConvNet.layer4.3.conv1.weight: copying a param with shape torch.Size([320, 320, 3, 3]) from checkpoint, the shape in current model is torch.Size([512, 512, 3, 3]).
size mismatch for backbone.ConvNet.layer4.3.bn1.weight: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for backbone.ConvNet.layer4.3.bn1.bias: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for backbone.ConvNet.layer4.3.bn1.running_mean: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for backbone.ConvNet.layer4.3.bn1.running_var: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for backbone.ConvNet.layer4.3.conv2.weight: copying a param with shape torch.Size([320, 320, 3, 3]) from checkpoint, the shape in current model is torch.Size([512, 512, 3, 3]).
size mismatch for backbone.ConvNet.layer4.3.bn2.weight: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for backbone.ConvNet.layer4.3.bn2.bias: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for backbone.ConvNet.layer4.3.bn2.running_mean: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for backbone.ConvNet.layer4.3.bn2.running_var: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for backbone.ConvNet.layer4.4.conv1.weight: copying a param with shape torch.Size([320, 320, 3, 3]) from checkpoint, the shape in current model is torch.Size([512, 512, 3, 3]).
size mismatch for backbone.ConvNet.layer4.4.bn1.weight: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for backbone.ConvNet.layer4.4.bn1.bias: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for backbone.ConvNet.layer4.4.bn1.running_mean: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for backbone.ConvNet.layer4.4.bn1.running_var: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for backbone.ConvNet.layer4.4.conv2.weight: copying a param with shape torch.Size([320, 320, 3, 3]) from checkpoint, the shape in current model is torch.Size([512, 512, 3, 3]).
size mismatch for backbone.ConvNet.layer4.4.bn2.weight: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for backbone.ConvNet.layer4.4.bn2.bias: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for backbone.ConvNet.layer4.4.bn2.running_mean: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for backbone.ConvNet.layer4.4.bn2.running_var: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for backbone.ConvNet.conv4_1.weight: copying a param with shape torch.Size([320, 320, 2, 2]) from checkpoint, the shape in current model is torch.Size([512, 512, 2, 2]).
size mismatch for backbone.ConvNet.bn4_1.weight: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for backbone.ConvNet.bn4_1.bias: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for backbone.ConvNet.bn4_1.running_mean: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for backbone.ConvNet.bn4_1.running_var: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for backbone.ConvNet.conv4_2.weight: copying a param with shape torch.Size([320, 320, 2, 2]) from checkpoint, the shape in current model is torch.Size([512, 512, 2, 2]).
size mismatch for backbone.ConvNet.bn4_2.weight: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for backbone.ConvNet.bn4_2.bias: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for backbone.ConvNet.bn4_2.running_mean: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for backbone.ConvNet.bn4_2.running_var: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for encoders.layers.0.self_attn.in_proj_weight: copying a param with shape torch.Size([960, 320]) from checkpoint, the shape in current model is torch.Size([1536, 512]).
size mismatch for encoders.layers.0.self_attn.in_proj_bias: copying a param with shape torch.Size([960]) from checkpoint, the shape in current model is torch.Size([1536]).
size mismatch for encoders.layers.0.self_attn.out_proj.weight: copying a param with shape torch.Size([320, 320]) from checkpoint, the shape in current model is torch.Size([512, 512]).
size mismatch for encoders.layers.0.self_attn.out_proj.bias: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for encoders.layers.0.linear1.weight: copying a param with shape torch.Size([2048, 320]) from checkpoint, the shape in current model is torch.Size([2048, 512]).
size mismatch for encoders.layers.0.linear2.weight: copying a param with shape torch.Size([320, 2048]) from checkpoint, the shape in current model is torch.Size([512, 2048]).
size mismatch for encoders.layers.0.linear2.bias: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for encoders.layers.0.norm1.weight: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for encoders.layers.0.norm1.bias: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for encoders.layers.0.norm2.weight: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for encoders.layers.0.norm2.bias: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for encoders.layers.1.self_attn.in_proj_weight: copying a param with shape torch.Size([960, 320]) from checkpoint, the shape in current model is torch.Size([1536, 512]).
size mismatch for encoders.layers.1.self_attn.in_proj_bias: copying a param with shape torch.Size([960]) from checkpoint, the shape in current model is torch.Size([1536]).
size mismatch for encoders.layers.1.self_attn.out_proj.weight: copying a param with shape torch.Size([320, 320]) from checkpoint, the shape in current model is torch.Size([512, 512]).
size mismatch for encoders.layers.1.self_attn.out_proj.bias: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for encoders.layers.1.linear1.weight: copying a param with shape torch.Size([2048, 320]) from checkpoint, the shape in current model is torch.Size([2048, 512]).
size mismatch for encoders.layers.1.linear2.weight: copying a param with shape torch.Size([320, 2048]) from checkpoint, the shape in current model is torch.Size([512, 2048]).
size mismatch for encoders.layers.1.linear2.bias: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for encoders.layers.1.norm1.weight: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for encoders.layers.1.norm1.bias: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for encoders.layers.1.norm2.weight: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for encoders.layers.1.norm2.bias: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for decoders.layers.0.self_attn.in_proj_weight: copying a param with shape torch.Size([960, 320]) from checkpoint, the shape in current model is torch.Size([1536, 512]).
size mismatch for decoders.layers.0.self_attn.in_proj_bias: copying a param with shape torch.Size([960]) from checkpoint, the shape in current model is torch.Size([1536]).
size mismatch for decoders.layers.0.self_attn.out_proj.weight: copying a param with shape torch.Size([320, 320]) from checkpoint, the shape in current model is torch.Size([512, 512]).
size mismatch for decoders.layers.0.self_attn.out_proj.bias: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for decoders.layers.0.multihead_attn.in_proj_weight: copying a param with shape torch.Size([960, 320]) from checkpoint, the shape in current model is torch.Size([1536, 512]).
size mismatch for decoders.layers.0.multihead_attn.in_proj_bias: copying a param with shape torch.Size([960]) from checkpoint, the shape in current model is torch.Size([1536]).
size mismatch for decoders.layers.0.multihead_attn.out_proj.weight: copying a param with shape torch.Size([320, 320]) from checkpoint, the shape in current model is torch.Size([512, 512]).
size mismatch for decoders.layers.0.multihead_attn.out_proj.bias: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for decoders.layers.0.linear1.weight: copying a param with shape torch.Size([2048, 320]) from checkpoint, the shape in current model is torch.Size([2048, 512]).
size mismatch for decoders.layers.0.linear2.weight: copying a param with shape torch.Size([320, 2048]) from checkpoint, the shape in current model is torch.Size([512, 2048]).
size mismatch for decoders.layers.0.linear2.bias: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for decoders.layers.0.norm1.weight: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for decoders.layers.0.norm1.bias: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for decoders.layers.0.norm2.weight: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for decoders.layers.0.norm2.bias: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for decoders.layers.0.norm3.weight: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for decoders.layers.0.norm3.bias: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for decoders.layers.1.self_attn.in_proj_weight: copying a param with shape torch.Size([960, 320]) from checkpoint, the shape in current model is torch.Size([1536, 512]).
size mismatch for decoders.layers.1.self_attn.in_proj_bias: copying a param with shape torch.Size([960]) from checkpoint, the shape in current model is torch.Size([1536]).
size mismatch for decoders.layers.1.self_attn.out_proj.weight: copying a param with shape torch.Size([320, 320]) from checkpoint, the shape in current model is torch.Size([512, 512]).
size mismatch for decoders.layers.1.self_attn.out_proj.bias: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for decoders.layers.1.multihead_attn.in_proj_weight: copying a param with shape torch.Size([960, 320]) from checkpoint, the shape in current model is torch.Size([1536, 512]).
size mismatch for decoders.layers.1.multihead_attn.in_proj_bias: copying a param with shape torch.Size([960]) from checkpoint, the shape in current model is torch.Size([1536]).
size mismatch for decoders.layers.1.multihead_attn.out_proj.weight: copying a param with shape torch.Size([320, 320]) from checkpoint, the shape in current model is torch.Size([512, 512]).
size mismatch for decoders.layers.1.multihead_attn.out_proj.bias: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for decoders.layers.1.linear1.weight: copying a param with shape torch.Size([2048, 320]) from checkpoint, the shape in current model is torch.Size([2048, 512]).
size mismatch for decoders.layers.1.linear2.weight: copying a param with shape torch.Size([320, 2048]) from checkpoint, the shape in current model is torch.Size([512, 2048]).
size mismatch for decoders.layers.1.linear2.bias: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for decoders.layers.1.norm1.weight: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for decoders.layers.1.norm1.bias: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for decoders.layers.1.norm2.weight: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for decoders.layers.1.norm2.bias: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for decoders.layers.1.norm3.weight: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for decoders.layers.1.norm3.bias: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for pe.pe: copying a param with shape torch.Size([768, 1, 320]) from checkpoint, the shape in current model is torch.Size([768, 1, 512]).
size mismatch for embd.weight: copying a param with shape torch.Size([19264, 320]) from checkpoint, the shape in current model is torch.Size([19264, 512]).
size mismatch for color_pred1.0.weight: copying a param with shape torch.Size([64, 320]) from checkpoint, the shape in current model is torch.Size([64, 512]).
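In case it helps: the pattern suggests the checkpoint was saved from a model with hidden size 320 while the current code instantiates one with hidden size 512, i.e. a weights/code version mismatch rather than a corrupt download. A minimal sketch for dumping a checkpoint's parameter shapes to confirm (the file name is a placeholder; point it at the OCR checkpoint you actually use):

import torch

ckpt = torch.load('ocr.ckpt', map_location='cpu')
state = ckpt.get('model', ckpt)  # some checkpoints nest the state_dict under 'model'
for name, tensor in state.items():
    if hasattr(tensor, 'shape'):
        print(name, tuple(tensor.shape))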

Model for English OCR

Can you provide a model to OCR English pages?
I had good results with Japanese pages, but when I tried an English page...

005

Original:
005

Or at least add an example of how we can build our own model... though I suppose I wouldn't know how to handle it even with documentation (I'm not good with Python).

Result not very satisfying

Hi, I want to translate an image (a screenshot of a game) from Korean to Chinese. Here is the original image URL: https://imgur.com/vBAjVVF and the corresponding result image URL: https://imgur.com/kBipYEC. It seems that the Korean words are not segmented well at all; some words are not identified and thus not translated. The CL arguments used: python translate_demo.py --verbose --translator=baidu --target-lang=CHS --image ./demo/test2.jpg.

If bad arguments are the cause, I'd be very happy to know the right ones, thanks!

Error when running locally

After running, it prints:

usage: translate_demo.py [-h] [--mode MODE] [--image IMAGE]
[--image-dst IMAGE_DST] [--size SIZE]
[--use-inpainting] [--use-cuda] [--force-horizontal]
[--inpainting-size INPAINTING_SIZE]
[--unclip-ratio UNCLIP_RATIO]
[--box-threshold BOX_THRESHOLD]
[--text-threshold TEXT_THRESHOLD]
[--text-mag-ratio TEXT_MAG_RATIO]
[--translator TRANSLATOR] [--target-lang TARGET_LANG]
[--verbose]
translate_demo.py: error: unrecognized arguments: [--verbose] [--translator=google] [--target-lang=CHS]
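Judging by the error line, the square brackets from the usage text were typed into the command literally; in a usage string the brackets only mean that a flag is optional. Dropping them should let the arguments parse, e.g. (image path is illustrative):

python translate_demo.py --verbose --translator=google --target-lang=CHS --image ./demo/test.jpg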

Can I use a wider area during the text rendering process?

Because reading text vertically is awkward, I modified text_render.py to always render text horizontally.
But since the original text area is too narrow, the result is still hard to read.
So I'd like to know whether it is possible to modify the code to use a wider area during text rendering, and how (a rough sketch follows below).

Sorry for my poor English and thanks in advance.
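For anyone attempting the same, here is a generic sketch of widening a detected text box before rendering into it. The function and its parameters are hypothetical; the real structures in text_render.py differ, and the --unclip-ratio flag seen in the usage output above may already control something similar:

def widen_box(x, y, w, h, img_w, img_h, ratio=1.5):
    # Expand the box horizontally by `ratio`, re-centered and clamped to the image bounds.
    new_w = min(int(w * ratio), img_w)
    new_x = max(0, min(x - (new_w - w) // 2, img_w - new_w))
    return new_x, y, new_w, h

Rendering the horizontal text into the widened box gives each line more room.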

DeepL translation error

Hello,

Translating with DeepL only produces "error" as the output text (see attachment).

I'm running the free version of DeepL, but the Python library works when I try running it by hand.
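For reference, the by-hand check that works is roughly this (using the official deepl package; DEEPL_AUTH_KEY is a placeholder for your free-tier key):

import os
import deepl

translator = deepl.Translator(os.environ['DEEPL_AUTH_KEY'])
result = translator.translate_text('こんにちは', target_lang='EN-US')
print(result.text)  # prints the translated string

This suggests the key itself is fine and the failure happens somewhere inside the translation pipeline.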

LaMa inpainting

The trained model is producing incorrect colors; I will release the model if I manage to fix this issue.

ONNX models

I've seen that comictextdetector.pt has been released in ONNX format.

Would it be possible to release the other models (OCR, detection, inpainting) in ONNX format as well?

Thanks
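For context, exporting a PyTorch module to ONNX is usually a single torch.onnx.export call along these lines (a sketch with a placeholder module; the real OCR/detection/inpainting models need their actual input shapes, and any custom ops may block export):

import torch
import torch.nn as nn

model = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU())  # placeholder; substitute the real model
model.eval()
dummy = torch.randn(1, 3, 256, 256)  # illustrative input size
torch.onnx.export(
    model, dummy, 'model.onnx',
    input_names=['image'], output_names=['features'],
    dynamic_axes={'image': {0: 'batch', 2: 'height', 3: 'width'}},
    opset_version=13,
)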
