
This project forked from evanzhoudev/gemini-ai


The easiest way to use the powerful Google Gemini model.

Home Page: https://www.npmjs.com/package/gemini-ai

License: GNU General Public License v3.0


gemini-ai's Introduction

Gemini AI Banner

Docs | GitHub | FAQ

Note

With the release of Gemini AI 1.1, there is now streaming support! Check it out here.

Features

Highlights

Gemini AI v1.0 compared to Google's own API

  • Native REST API: simplicity without compromises
  • 🚀 Easy: Auto model selection based on context
  • 🎯 Concise: 4x less code needed

Table of Contents

Getting an API Key

  1. Go to Google Makersuite
  2. Click "Get API key" at the top, and follow the steps to get your key
  3. Copy this key, and use it below when API_KEY is mentioned.

Caution

Do not share this key with other people! It is recommended to store it in a .env file.
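If you are unsure how .env files work, here is a minimal, dependency-free sketch of parsing one (in practice, the dotenv package handles this for you; the key and file contents below are invented):

```javascript
// Illustrative sketch only: a tiny .env parser, so the key never lives
// in source code. The dotenv package does this (and more) for you.
const parseEnv = (text) =>
	Object.fromEntries(
		text
			.split("\n")
			.filter((line) => line.includes("=") && !line.trim().startsWith("#"))
			.map((line) => {
				const [key, ...rest] = line.split("=");
				return [key.trim(), rest.join("=").trim()];
			})
	);

// Invented example contents of a .env file:
const { GEMINI_API_KEY } = parseEnv("GEMINI_API_KEY=abc123\n# a comment");
console.log(GEMINI_API_KEY); // "abc123"
```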

Quickstart

Make a text request (gemini-pro):

import Gemini from "gemini-ai";

const gemini = new Gemini(API_KEY);

console.log(await gemini.ask("Hi!"));

Make a streaming text request (gemini-pro):

import Gemini from "gemini-ai";

const gemini = new Gemini(API_KEY);

gemini.ask("Hi!", {
	stream: console.log,
});

Chat with Gemini (gemini-pro):

import Gemini from "gemini-ai";

const gemini = new Gemini(API_KEY);
const chat = gemini.createChat();

console.log(await chat.ask("Hi!"));
console.log(await chat.ask("What's the last thing I said?"));

Other useful features

Make a text request with images (gemini-pro-vision):
import fs from "fs";
import Gemini from "gemini-ai";

const gemini = new Gemini(API_KEY);

console.log(
	await gemini.ask("What's this show?", {
		data: [fs.readFileSync("./test.png")],
	})
);
Make a text request with custom parameters (gemini-pro):
import Gemini from "gemini-ai";

const gemini = new Gemini(API_KEY);

console.log(
	await gemini.ask("Hello!", {
		temperature: 0.5,
		topP: 1,
		topK: 10,
	})
);
Embed Text (embedding-001):
import Gemini from "gemini-ai";

const gemini = new Gemini(API_KEY);

console.log(await gemini.embed("Hi!"));

Special Features

Auto Model Selection

Google has released two Gemini models: gemini-pro and gemini-pro-vision. The former is text-only, while the latter is for multimodal use. Gemini AI is designed to automatically select which model to use based on your input!
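As an illustrative sketch (not the library's actual source), the selection logic can be as simple as checking whether image data was supplied:

```javascript
// Sketch of auto model selection: if the config carries image Buffers,
// the multimodal model is needed; otherwise the text model suffices.
const selectModel = (config = {}) =>
	config.data && config.data.length > 0 ? "gemini-pro-vision" : "gemini-pro";

console.log(selectModel({})); // "gemini-pro"
console.log(selectModel({ data: [Buffer.from("fake image bytes")] })); // "gemini-pro-vision"
```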

Streaming

Here's a quick demo:

import Gemini from "gemini-ai";

const gemini = new Gemini(API_KEY);

gemini.ask("Write an essay", {
	stream: (x) => process.stdout.write(x),
});

Let's walk through what this code is doing. As always, we first initialize Gemini. Then we call the ask function and provide a stream config. This callback is invoked whenever new content comes in from Gemini!

Note that this automatically switches to the streamGenerateContent command; you don't have to worry about that!

Note

You don't need to await ask if you're handling the stream yourself. If you want the final answer, it is still returned by the method, and you can await it as normal.
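For example, here is a self-contained sketch of a stream callback that both prints chunks and accumulates the full answer (the chunks below are simulated, not a real Gemini response):

```javascript
// Sketch: a collector you could pass as the `stream` callback, which
// prints each chunk as it arrives and also keeps the full text.
const makeCollector = () => {
	let full = "";
	const onChunk = (chunk) => {
		full += chunk;
		process.stdout.write(chunk);
	};
	return { onChunk, text: () => full };
};

const collector = makeCollector();
["Hel", "lo", "!"].forEach((c) => collector.onChunk(c)); // simulate incoming chunks
console.log("\nFull answer:", collector.text()); // Full answer: Hello!
```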

Proxy Support

Use a proxy when fetching from Gemini. To keep the package size down and adhere to the single-responsibility principle, the actual proxy handling is delegated to the undici library.

Here's how to add a proxy:

Install undici:

npm i undici

Initialize it with Gemini AI:

import { ProxyAgent } from 'undici'
import Gemini from 'gemini-ai'

let gemini = new Gemini(API_KEY, {
	dispatcher: new ProxyAgent(PROXY_URL)
})

And use as normal!

Documentation

Initialization

To start any project, include the following lines:

Note

Under the hood, we are just running the Gemini REST API, so there's no fancy authentication going on! Just pure, simple web requests.

// Import Gemini AI
import Gemini from "gemini-ai";

// Initialize your key
const gemini = new Gemini(API_KEY);

Learn how to add a fetch polyfill for the browser here.

Method Patterns

All model-calling methods take a main parameter first (typically the input text) and a config object second. A detailed list of available config options is documented with each method. An example call may look like this:

await gemini.ask("Hi!", {
	// Config
	temperature: 0.5,
	topP: 1,
	topK: 10,
});

Note

All methods are async! This means you should call them something like this: await gemini.ask(...)

Note that the output of Gemini.JSON varies depending on the model and command, and is not documented in detail here because it is rarely needed. You can find more information about the REST API's raw output here.
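If you do opt into the raw JSON format, here is a hedged sketch of pulling the text back out of a generateContent response (the shape follows Google's REST documentation; the sample object is invented):

```javascript
// Sketch: extract the reply text from a raw generateContent response.
// Optional chaining short-circuits the whole chain, so malformed or
// missing responses fall through to the "" default.
const extractText = (raw) =>
	raw?.candidates?.[0]?.content?.parts?.map((p) => p.text).join("") ?? "";

// Invented sample matching the documented response shape:
const sample = {
	candidates: [{ content: { parts: [{ text: "Hello!" }] } }],
};
console.log(extractText(sample)); // "Hello!"
console.log(extractText(undefined)); // ""
```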

Gemini.ask()

This method uses the generateContent command to get Gemini's response to your input.

Config available:

| Field Name | Description | Default Value |
| --- | --- | --- |
| format | Whether to return the detailed, raw JSON output. Typically not recommended unless you are an expert. Can be either Gemini.JSON or Gemini.TEXT | Gemini.TEXT |
| topP | See Google's parameter explanations | 0.8 |
| topK | See Google's parameter explanations | 10 |
| temperature | See Google's parameter explanations | 1 |
| model | Which model to use. Can be any model Google has available, but certain features are not available on some models. Currently: gemini-pro and gemini-pro-vision | Automatic based on context |
| maxOutputTokens | Max tokens to output | 800 |
| messages | Array of [userInput, modelOutput] pairs to show how the bot is supposed to behave | [] |
| data | An array of Buffers to input to the model. Automatically toggles the model to gemini-pro-vision | [] |
| stream | A function called with every new chunk of JSON or text (depending on the format) that the model returns. Learn more | undefined |
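The messages option takes plain [userInput, modelOutput] string pairs; a sketch of priming the model with an invented conversation:

```javascript
// Sketch of the messages priming format: an array of
// [userInput, modelOutput] string pairs (conversation invented here).
const messages = [
	["Who are you?", "I am a pirate. Arr!"],
	["What do you sail on?", "A mighty ship, arr!"],
];

// You would then pass this as config, e.g. gemini.ask("Hi!", { messages }).
console.log(messages.length); // 2
```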

Example Usage:

import Gemini from "gemini-ai";

const gemini = new Gemini(API_KEY);

console.log(
	await gemini.ask("Hello!", {
		temperature: 0.5,
		topP: 1,
		topK: 10,
	})
);

Gemini.count()

This method uses the countTokens command to figure out the number of tokens in your input.

Config available:

| Field Name | Description | Default Value |
| --- | --- | --- |
| model | Which model to use. Can be any model Google has available, but reasonably must be gemini-pro | Automatic based on context |

Example Usage:

import Gemini from "gemini-ai";

const gemini = new Gemini(API_KEY);

console.log(await gemini.count("Hello!"));
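The authoritative count comes from the countTokens command above, since tokenization happens server-side. For quick local budgeting, a common rough heuristic (an assumption, not the real tokenizer) is about four characters of English text per token:

```javascript
// Crude local estimate only; the real count comes from countTokens.
// ~4 characters per token is a common rough rule of thumb for English.
const roughTokenCount = (text) => Math.ceil(text.length / 4);

console.log(roughTokenCount("Hello!")); // 2
```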

Gemini.embed()

This method uses the embedContent command (currently only on embedding-001) to generate an embedding vector for your input.

Config available:

| Field Name | Description | Default Value |
| --- | --- | --- |
| model | Which model to use. Can be any model Google has available, but reasonably must be embedding-001 | embedding-001 |

Example Usage:

import Gemini from "gemini-ai";

const gemini = new Gemini(API_KEY);

console.log(await gemini.embed("Hello!"));
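Embeddings are vectors, and a common use is comparing two texts by cosine similarity. A self-contained sketch (the vectors are toy values, not real embedding-001 output, which is much higher-dimensional):

```javascript
// Cosine similarity between two embedding vectors: near 1 means the
// texts point in the same direction (similar), near 0 means unrelated.
const dot = (a, b) => a.reduce((sum, x, i) => sum + x * b[i], 0);
const cosine = (a, b) =>
	dot(a, b) / (Math.sqrt(dot(a, a)) * Math.sqrt(dot(b, b)));

console.log(cosine([1, 0, 1], [1, 0, 1])); // ≈ 1 (same direction)
console.log(cosine([1, 0], [0, 1])); // 0 (orthogonal)
```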

Gemini.createChat()

Gemini.createChat() is a unique method. For one, it isn't called asynchronously. Additionally, it returns a brand-new Chat object. The Chat object has only one method, Chat.ask(), which has the exact same syntax as the Gemini.ask() method documented above. The one small difference is that most parameters are passed into the Chat through createChat() and cannot be overridden by the ask() method. The only parameters that can be overridden are format, stream, and data (as of 12/13/2023, data is not yet supported).

Important

Google does not yet allow the use of the gemini-pro-vision model in continued chats. The feature is already implemented to a certain degree, but cannot be used due to Google's API limitations.

All important data in the Chat object is stored in the Chat.messages variable, and can be used to create a new Chat that "continues" the conversation, as demonstrated in the example usage section.

Config available for createChat:

| Field Name | Description | Default Value |
| --- | --- | --- |
| topP | See Google's parameter explanations | 0.8 |
| topK | See Google's parameter explanations | 10 |
| temperature | See Google's parameter explanations | 1 |
| model | Which model to use. Can be any model Google has available, but certain features are not available on some models. Currently: gemini-pro and gemini-pro-vision | Automatic based on context |
| maxOutputTokens | Max tokens to output | 800 |
| messages | Array of [userInput, modelOutput] pairs to show how the bot is supposed to behave | [] |

Example Usage:

// Simple example:

import Gemini from "gemini-ai";

const gemini = new Gemini(API_KEY);

const chat = gemini.createChat();

console.log(await chat.ask("Hi!"));
console.log(await chat.ask("What's the last thing I said?"));
// "Continuing" a conversation:

import Gemini from "gemini-ai";

const gemini = new Gemini(API_KEY);

const chat = gemini.createChat();

console.log(await chat.ask("Hi!"));

const newChat = gemini.createChat({
	messages: chat.messages,
});

console.log(await newChat.ask("What's the last thing I said?"));

FAQ

Why Gemini AI?

Well, simply put, it makes using Gemini just that much easier... see the code necessary to make a request using Google's own API, compared to Gemini AI:

See the comparison

Google's own API (CommonJS):

const { GoogleGenerativeAI } = require("@google/generative-ai");

const genAI = new GoogleGenerativeAI(API_KEY);

async function run() {
	const model = genAI.getGenerativeModel({ model: "gemini-pro" });

	const prompt = "Hi!";

	const result = await model.generateContent(prompt);
	const response = await result.response;
	const text = response.text();
	console.log(text);
}

run();

Gemini AI (ES6 Modules):

import Gemini from "gemini-ai";

const gemini = new Gemini(API_KEY);
console.log(await gemini.ask("Hi!"));

That's roughly a quarter of the code!

I'm in a browser environment! What do I do?

Everything is optimized to work in both browsers and Node.js. Files are passed as Buffers, so you decide how to get them, and adding a fetch polyfill is as easy as:

import Gemini from "gemini-ai";
import fetch from "node-fetch";

const gemini = new Gemini(API_KEY, {
	fetch: fetch,
});

Contributors

A special shoutout to the developers of and contributors to the bard-ai and palm-api libraries. Gemini AI's interface is heavily based on what we developed in those two projects.

