
s-nagaev / hiroshi


Just a GPT4Free-based Telegram bot providing a 100% FREE GPT experience.

License: MIT License

Languages: Python 98.18%, Dockerfile 1.82%
Topics: gpt-35, gpt-4, gpt4free, llama2, ready-to-go, self-hosted, telegram-bot

hiroshi's Introduction



Hiroshi is a GPT4Free-based Telegram chatbot that offers 100% free access to the GPT-3.5, GPT-4, and Llama2 language models through providers such as Bing, You, AI Chat, and more. Users are free to select their preferred model or a specific provider; note that speed and stability may be slightly lower with certain providers. Conversation context is fully retained when you switch between models and providers.

Note: This bot provides access to public free services. The quality and speed of such services can vary depending on various conditions and their current load. If you need a bot that uses the official OpenAI API and you have an API KEY, please check the following repository: https://github.com/s-nagaev/chibi.

Supported platforms

  • linux/amd64
  • linux/arm64 (Raspberry Pi 4+ is supported!)

Features

  • Access to language models such as GPT-3.5, GPT-4, Llama2.
  • Ability to choose a specific model or provider, with the fastest and most available provider automatically selected for the chosen model.
  • Customizable storage time for conversation history.
  • Preservation of conversation history even when changing providers or models.
  • Pre-configured for quick setup, requiring only a Telegram bot token to get started.
  • Cross-platform support (amd64, arm64).
  • MIT License.

Can I try it?

Sure! @hiroshi_gpt_bot

System Requirements

The application is not resource-demanding at all. It runs perfectly on a base-model Raspberry Pi 4 or the cheapest AWS EC2 instance, t4g.nano (2 arm64 cores, 512 MB RAM), while serving many users simultaneously.

Prerequisites

  • Docker
  • Docker Compose (optional)

Getting Started

Using Docker Run

  1. Pull the Hiroshi Docker image:

    docker pull pysergio/hiroshi:latest
  2. Run the Docker container with the necessary environment variables:

    docker run -d \
      -e TELEGRAM_BOT_TOKEN=<your_telegram_token> \
      -v <path_to_local_data_directory>:/app/data \
      --name hiroshi \
      pysergio/hiroshi:latest

    Replace <your_telegram_token> and <path_to_local_data_directory> with appropriate values.
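You can check that the container started correctly using standard Docker commands, for example:

    docker ps --filter name=hiroshi
    docker logs -f hiroshi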

Using Docker Compose

  1. Create a docker-compose.yml file with the following contents:

       version: '3'

       services:
         hiroshi:
           restart: unless-stopped
           image: pysergio/hiroshi:latest
           environment:
             TELEGRAM_BOT_TOKEN: <your_telegram_token>
           volumes:
             - hiroshi_data:/app/data

       volumes:
         hiroshi_data:

     Replace <your_telegram_token> with an appropriate value.

  2. Run the Docker container:

    docker-compose up -d

Please visit the examples directory of this repository for more examples.

Configuration

You can configure Hiroshi using the following environment variables:

Variable | Description | Required | Default Value
TELEGRAM_BOT_TOKEN | Your Telegram bot token | Yes |
ALLOW_BOTS | Allow other bots to interact with Hiroshi | No | false
ANSWER_DIRECT_MESSAGES_ONLY | If true, the bot in group chats responds only to messages containing its name (see the BOT_NAME setting) | No | true
ASSISTANT_PROMPT | Initial assistant prompt for the OpenAI client | No | "You're helpful and friendly assistant. Your name is Hiroshi"
BOT_NAME | Name of the bot | No | "Hiroshi"
GROUP_ADMINS | Comma-separated list of usernames, e.g. "@YourName,@YourFriendName,@YourCatName", that have exclusive permission to set the provider and clear the dialog history in group chats | No |
GROUPS_WHITELIST | Comma-separated list of whitelisted group IDs, e.g. "-799999999,-788888888" | No |
LOG_PROMPT_DATA | Log users' prompts and GPT answers for debugging purposes | No | false
MAX_CONVERSATION_AGE_MINUTES | Maximum age of conversations (in minutes) | No | 60
MAX_HISTORY_TOKENS | Maximum number of tokens in the conversation history | No | 1800
MESSAGE_FOR_DISALLOWED_USERS | Message shown to disallowed users | No | "You're not allowed to interact with me, sorry. Contact my owner first, please."
PROXY | Proxy settings for your application | No |
REDIS | Redis connection string, e.g. "redis://localhost" | No |
REDIS_PASSWORD | Redis password (optional) | No |
RETRIES | Number of retry requests to the provider after a failed response | No | 2
SHOW_ABOUT | Set to false if you want to hide the /about command | No | true
TIMEOUT | Timeout (in seconds) for processing requests | No | 60
USERS_WHITELIST | Comma-separated list of whitelisted usernames, e.g. "@YourName,@YourFriendName,@YourCatName" | No |
MONITORING_URL | Activates the monitoring functionality: a GET request is sent to this URL every MONITORING_FREQUENCY_CALL seconds | No |
MONITORING_FREQUENCY_CALL | If monitoring is active, a GET request is sent to MONITORING_URL every MONITORING_FREQUENCY_CALL seconds | No | 300
MONITORING_RETRY_CALLS | Log an error response only after MONITORING_RETRY_CALLS failed tries | No | 3
MONITORING_PROXY | Monitoring proxy URL | No |

Please visit the examples directory for an example .env file.
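For reference, a minimal .env sketch based on the variables above might look like this (all values are placeholders; only TELEGRAM_BOT_TOKEN is required):

    # Required
    TELEGRAM_BOT_TOKEN=<your_telegram_token>
    # Optional (defaults and formats taken from the table above)
    BOT_NAME=Hiroshi
    MAX_CONVERSATION_AGE_MINUTES=60
    MAX_HISTORY_TOKENS=1800
    REDIS=redis://localhost
    USERS_WHITELIST=@YourName,@YourFriendName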

Versioning

We use SemVer for versioning. For the versions available, see the tags on this repository.

License

This project is licensed under the MIT License - see the LICENSE.md file for details.

hiroshi's People

Contributors

ksenyachertenko, s-nagaev


hiroshi's Issues

[Feature Request] Append reply message to input

Description

When a user replies to another user's message with @BotName, append the replied-to message to the bot's input.

Example:

User1 says:

eget nunc scelerisque viverra mauris in aliquam sem fringilla ut morbi tincidunt augue interdum velit

User2 replies to User1's text with: @BotName aliquet eget sit amet

Note: Lorem Ipsum is used as an example.

Current behavior

The bot replies only to User2's text, without including the replied-to message.
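A minimal sketch of how the requested behavior might be implemented, assuming a python-telegram-bot style handler (the handler name and surrounding wiring are hypothetical, not Hiroshi's actual code):

    from telegram import Update
    from telegram.ext import ContextTypes

    async def handle_mention(update: Update, context: ContextTypes.DEFAULT_TYPE) -> None:
        message = update.message
        prompt = message.text or ""
        # If the user replied to another message while mentioning the bot,
        # prepend the quoted text so the model sees both messages.
        if message.reply_to_message and message.reply_to_message.text:
            prompt = f"{message.reply_to_message.text}\n\n{prompt}"
        # ... pass `prompt` on to the selected GPT provider as usual ...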

Proxy URL

In what format should I specify the proxy URL?
And where can I find an example .env file?

Error after installing in Docker

Message in Telegram
"I'm sorry, but there seems to be a little hiccup with your request at the moment 😥 Would you mind trying again later? Don't worry, I'll be here to assist you whenever you're ready! 😼"

Log:
PermissionError: [Errno 13] Permission denied: '/app/data/########

Some providers return an error. How do I resolve this?

Here is the error:
got an error: 403, message='Forbidden', url=URL('https://you.com/api/streamingSearch?userFiles=&selectedChatMode=default')

I added a proxy, but still no luck.

Unhealthy port check

Why does the port check fail when configuring Koyeb? The service is reported as unhealthy and cannot run. Which port do I need to EXPOSE, and how do I set it up?

Implementing an optional Integration with Monitoring Systems

Description:

The objective of this task is to enhance our chatbot application by providing an optional feature that allows it to connect with monitoring systems. This will enable the chatbot application to periodically report its operational status to the monitoring system by invoking a specific webhook.

Detailed Requirements:

  1. Environment Variable Configuration:

    • Introduce two environment variables: MONITORING_URL and MONITORING_FREQUENCY_CALL.
    • The MONITORING_FREQUENCY_CALL variable should have a default value of 5 (the unit of time this refers to should be defined, e.g., minutes).
  2. Operational Behavior:

    • Upon startup, if the MONITORING_URL variable is set, the chatbot application should begin making calls to the specified URL at intervals determined by the MONITORING_FREQUENCY_CALL.
    • These calls should commence immediately upon the application’s launch and continue at the specified frequency for as long as the application runs.
  3. Logging:

    • Failed attempts to invoke the webhook should be logged, including the response code or type of error encountered. This will assist in troubleshooting and ensure transparency regarding the integration's functionality.
    • Successful calls need not be logged, to prevent cluttering the log files with routine operational entries.
  4. Startup Notification:

    • At application startup, include a log entry that indicates the status of this monitoring feature. This should clearly state whether the feature is active (and if so, the configured values) or inactive. This initial log will serve as immediate feedback on the application’s configuration regarding its integration with external monitoring solutions.

Objective:

By accomplishing this task, we aim to improve the reliability and observability of our chatbot application. This feature will facilitate proactive maintenance and issue resolution by ensuring continuous communication between the chatbot and integrated monitoring systems.
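A minimal sketch of such a monitoring loop, assuming asyncio and httpx (the function name, default interval, and logging details are illustrative, not the final implementation):

    import asyncio
    import logging

    import httpx

    async def monitoring_loop(url: str, frequency_seconds: int = 300) -> None:
        # Startup notification: report that monitoring is active and how it is configured.
        logging.info("Monitoring enabled: GET %s every %s seconds", url, frequency_seconds)
        async with httpx.AsyncClient() as client:
            while True:
                try:
                    response = await client.get(url)
                    response.raise_for_status()
                except Exception as error:
                    # Only failed calls are logged, to avoid cluttering the logs.
                    logging.error("Monitoring call to %s failed: %s", url, error)
                await asyncio.sleep(frequency_seconds)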

MissingRequirementsError

Is this the same problem, caused by a blocked IP? And is there nothing that can be done about it other than changing the machine?

2024/04/25 14:12:59 stdout Chatgpt4Online: MissingRequirementsError: Install Webdriver packages | pip install -U g4f[webdri
2024/04/25 14:12:59 stdout Bing: RuntimeError: Empty response: Failed to create conversation
2024/04/25 14:12:59 stdout You: CloudflareError: Response 403: Cloudflare detected
