smikitky / chatgpt-md-translator
ChatGPT Markdown Translator: CLI to translate Markdown docs using ChatGPT API
Home Page: https://www.npmjs.com/package/chatgpt-md-translator
License: MIT License
Translating: /Users/user/Desktop/PRO/projects/docs/hashing.md
Model: gpt-3.5-turbo-instruct, Temperature: 0.1
⚡ [⚡, ❌ This is not a chat model and thus not supported in the v1/chat/completions endpoint. Did you mean to use v1/completions?]
Error This is not a chat model and thus not supported in the v1/chat/completions endpoint. Did you mean to use v1/completions?
at translateOne (file:///usr/local/lib/node_modules/chatgpt-md-translator/lib/translate.js:17:15)
at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
at async Promise.all (index 1)
at async translateMultiple (file:///usr/local/lib/node_modules/chatgpt-md-translator/lib/translate.js:34:21)
at async main (file:///usr/local/lib/node_modules/chatgpt-md-translator/lib/index.js:51:28)
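For context, the failure above is expected: gpt-3.5-turbo-instruct is a completions-style model, while the tool calls the v1/chat/completions endpoint. A minimal sketch of a name-based guard (a hypothetical helper, not the tool's actual code):

```typescript
// Hypothetical helper, not from the tool's source: "-instruct" models
// belong to the legacy v1/completions endpoint, so reject them before
// calling v1/chat/completions.
const isChatModel = (model: string): boolean =>
  model.startsWith('gpt-') && !model.endsWith('-instruct');

console.log(isChatModel('gpt-3.5-turbo')); // true
console.log(isChatModel('gpt-3.5-turbo-instruct')); // false
```

With a check like this, the tool could fail fast with a clear message instead of surfacing the raw API error.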
This tool replaces code blocks with placeholders before calling the API, but it currently does not understand indented code blocks within a list.
1. Install foobar.

   ````bash
   $ npm install foobar
   ````

2. Run it.
React docs don't have code blocks like this, but some docs use this pattern heavily, so it needs to be handled.
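One way to handle this is to allow leading indentation when matching fences. The sketch below is illustrative only (not the tool's actual implementation) and uses a backreference so the closing fence must carry the same indentation as the opening one:

```typescript
// Hypothetical sketch: replace fenced code blocks with placeholders,
// tolerating leading indentation so fences nested inside list items
// are matched too. The closing fence must repeat the opening fence's
// indentation (\1) and backtick run (\2).
const extractCodeBlocks = (md: string): { text: string; blocks: string[] } => {
  const blocks: string[] = [];
  const fence = /^([ \t]*)(`{3,})[^\n]*\n[\s\S]*?\n\1\2[ \t]*$/gm;
  const text = md.replace(fence, (match) => {
    blocks.push(match);
    return `{{CODE_BLOCK_${blocks.length - 1}}}`;
  });
  return { text, blocks };
};
```

A real implementation would also need to restore the placeholder at the original indentation so the list structure survives the round trip.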
node-fetch can support an HTTP proxy using this approach:
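A sketch of one way to do this, assuming the https-proxy-agent package (node-fetch accepts a custom agent option); this is illustrative, not the tool's actual code, and requires network access plus both packages to run:

```typescript
// Illustrative sketch: route node-fetch through an HTTP(S) proxy via
// the https-proxy-agent package (named export as of v7). The agent is
// only created when an HTTPS_PROXY environment variable is set.
import fetch from 'node-fetch';
import { HttpsProxyAgent } from 'https-proxy-agent';

const agent = process.env.HTTPS_PROXY
  ? new HttpsProxyAgent(process.env.HTTPS_PROXY)
  : undefined;

const res = await fetch('https://api.openai.com/v1/models', { agent });
```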
Thanks for your tool!
I am looking into using ChatGPT to translate the StarRocks docs and found your repo.
I am trying to use -f 1000 to limit the fragment size because I am seeing this message:
This model's maximum context length is 4097 tokens. However,
your messages resulted in 4330 tokens. Please reduce the length
of the messages.
This is the Markdown file that I am trying to translate:
https://raw.githubusercontent.com/StarRocks/starrocks/main/docs/loading/cloud_storage_load.md
This is my full command line:
npx ts-node-esm index.ts -f 1000 \
../starrocks/docs/loading/cloud_storage_load.md
Thanks!
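For reference, a fragment-size limit like -f 1000 can be thought of as packing paragraphs into size-bounded chunks. The sketch below is a simplification (the real tool's splitting algorithm may differ):

```typescript
// Hypothetical sketch of size-limited fragment splitting: pack
// blank-line-separated paragraphs into fragments of at most `limit`
// characters, starting a new fragment when the next paragraph would
// overflow the current one.
const splitIntoFragments = (md: string, limit: number): string[] => {
  const fragments: string[] = [];
  let current = '';
  for (const para of md.split(/\n\n+/)) {
    if (current && current.length + para.length + 2 > limit) {
      fragments.push(current);
      current = para;
    } else {
      current = current ? current + '\n\n' + para : para;
    }
  }
  if (current) fragments.push(current);
  return fragments;
};
```

Each fragment then fits the model's context window, at the cost of extra API calls.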
I tried to install chatgpt-md-translator on my Windows box and later on an Ubuntu Linux box; both installations failed with more or less similar errors. This is what I'm getting when invoking the translator from my Ubuntu shell:
$ chatgpt-md-translator
node:internal/errors:496
ErrorCaptureStackTrace(err);
^
Error [ERR_MODULE_NOT_FOUND]: Cannot find module '/usr/lib/node_modules/chatgpt-md-translator/lib/utils/error-utils' imported from /usr/lib/node_modules/chatgpt-md-translator/lib/utils/fs-utils.js
at new NodeError (node:internal/errors:405:5)
at finalizeResolution (node:internal/modules/esm/resolve:226:11)
at moduleResolve (node:internal/modules/esm/resolve:838:10)
at defaultResolve (node:internal/modules/esm/resolve:1036:11)
at DefaultModuleLoader.resolve (node:internal/modules/esm/loader:251:12)
at DefaultModuleLoader.getModuleJob (node:internal/modules/esm/loader:140:32)
at ModuleWrap.<anonymous> (node:internal/modules/esm/module_job:76:33)
at link (node:internal/modules/esm/module_job:75:36) {
code: 'ERR_MODULE_NOT_FOUND'
}
Node.js v20.5.1
Any help is appreciated!
The new GPT-4o model has arrived; this should be reflected in this project and in the translator program itself: README.md should point to gpt-4o rather than to gpt-4-turbo.
This is due to the misuse of the || operator:
chatgpt-md-translator/src/loadConfig.ts
Lines 72 to 94 in 5f72f98
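The general shape of this class of bug (illustrative only, not the actual loadConfig.ts code): with ||, any falsy user value ('', 0, false) is silently replaced by the fallback, so an explicitly configured value can be ignored, whereas ?? falls back only on null/undefined.

```typescript
// Illustrative only: `||` treats every falsy value as "unset", so an
// explicit empty string or zero from the user's config is discarded.
const pick = (userValue: string | undefined, fallback: string) =>
  userValue || fallback; // buggy: '' is replaced by the fallback

// `??` (nullish coalescing) falls back only when the value is
// null or undefined, preserving other falsy user values.
const pickFixed = (userValue: string | undefined, fallback: string) =>
  userValue ?? fallback;

console.log(pick('', 'gpt-4-turbo')); // "gpt-4-turbo" (user value lost)
console.log(pickFixed('', 'gpt-4-turbo')); // ""
```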
Free trial users have a very strict API rate limit (3 requests per minute), making it impossible to just try this tool. We may add an option like --rpm=3 to reduce the API call rate.
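A minimal sketch of what an --rpm throttle could do (a hypothetical design, not the tool's implementation): space successive API calls at least 60000 / rpm milliseconds apart.

```typescript
// Hypothetical sketch of an --rpm throttle. Each call to the returned
// function reserves the next time slot and waits until it arrives.
const minIntervalMs = (rpm: number): number => Math.ceil(60_000 / rpm);

const makeThrottle = (rpm: number) => {
  let nextSlot = 0; // timestamp (ms) of the next allowed call
  return async (): Promise<void> => {
    const now = Date.now();
    const wait = Math.max(0, nextSlot - now);
    nextSlot = Math.max(now, nextSlot) + minIntervalMs(rpm);
    if (wait > 0) await new Promise<void>((r) => setTimeout(r, wait));
  };
};
```

With --rpm=3, each API call would wait until at least 20 seconds have passed since the previous slot.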
The GPT-4 Turbo model, released in its preview version in November 2023, supports a massive context window, allowing for virtually unlimited text size for input during an API call. The length of the prompt file is no longer a concern. However, since the output size is still limited to 4,096 tokens, long articles will still need to be processed by dividing the original text into fragments.
That said, the unlimited nature of the input text means that, by processing the fragments sequentially and passing the already translated fragments to subsequent API calls, it might be possible to improve consistency in context and terminology in long text translations, without relying on prompt file tricks.
Although it's hard to predict how effective this approach would be, it's worth adding a new flag to enable this behavior.
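The idea above can be sketched as follows; callApi is a hypothetical stand-in for the real chat-completion call, not the tool's actual function:

```typescript
// Hypothetical sketch of sequential fragment translation: each API
// call receives the translations produced so far as context, so
// terminology and phrasing stay consistent across fragments. This
// trades the parallelism of Promise.all for consistency.
type CallApi = (fragment: string, context: string) => Promise<string>;

const translateSequentially = async (
  fragments: string[],
  callApi: CallApi
): Promise<string> => {
  const translated: string[] = [];
  for (const fragment of fragments) {
    const context = translated.join('\n\n'); // earlier translations
    translated.push(await callApi(fragment, context));
  }
  return translated.join('\n\n');
};
```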