Terminal AI Assistant: Meet TermAI for Efficient Workflows

Streamline your tasks with the new TermAI assistant.

In the ever-evolving landscape of artificial intelligence, integrating powerful APIs into our daily workflows can significantly enhance productivity and creativity. One such integration is the use of the Gemini API within the TermAI framework, which allows users to harness the capabilities of Google's multimodal generative models directly from the terminal. This blog post will guide you through how to set up and utilize TermAI with the Gemini API and the marked-terminal library for a seamless terminal experience.

Overview of TermAI

If you are a Linux terminal enthusiast like me who doesn't care for heavy IDEs but still wants AI at your fingertips, TermAI integrates directly into your existing terminal setup.
TermAI is based on the existing gemini-console-chat by Flameface and enables interaction with Gemini through a simple command-line interface. Programmers can ask code-related questions, receive responses, and keep a history of their interactions. The assistant is designed to be lightweight and efficient, making it perfect for developers and tech enthusiasts who prefer working in a terminal environment. Believe me, this is your ultimate free copilot chat in your terminal.
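Getting it running mostly comes down to installing the dependencies and exporting the two environment variables the script reads (GEMINI_API_KEY and TERM_AI_USER), then launching it with Node. Here's a minimal sketch, assuming you have cloned the repository and the entry point is named termai.js (the actual filename in the repo may differ):

```shell
# Install the libraries the script requires
npm install @google/generative-ai chalk marked marked-terminal

# API key for Gemini (read via process.env.GEMINI_API_KEY)
export GEMINI_API_KEY="your-api-key-here"

# Display name shown in the prompt (read via process.env.TERM_AI_USER)
export TERM_AI_USER="yourname"

# Start the chat
node termai.js
```

You can drop the two export lines into your shell profile (e.g. ~/.bashrc) so the assistant is always one command away.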

Key Components of the Code

Let's explore the code to understand how TermAI operates. We'll cover the essential libraries, configuration settings, and the step-by-step process that enables TermAI to function effectively. This includes initializing the model, managing chat history, handling user input, and rendering responses. By breaking down these components, you'll gain the knowledge to harness TermAI's full potential in your terminal-based workflows.

1. Libraries

  • GoogleGenerativeAI: This library provides access to the Gemini API, allowing us to send and receive messages.

  • fs: The Node.js file system module, used for reading and writing chat history.

  • chalk: A library for styling terminal output, making it visually appealing.

  • marked and marked-terminal: These libraries are used for rendering Markdown in the terminal, enhancing the readability of responses.

Importing Required Libraries

const run = async () => {
    const { GoogleGenerativeAI } = require("@google/generative-ai");
    const fs = require("fs");
    const { default: chalk } = await import("chalk");
    const { marked } = require('marked');
    const { markedTerminal } = require('marked-terminal');
    marked.use(markedTerminal());

2. Configuration

  • MODEL_NAME: Specifies the model to use (in this case, "gemini-1.0-pro").

  • API_KEY: The API key for accessing the Gemini API, stored as an environment variable for security.

  • The GoogleGenerativeAI instance is created using the API key, and the generative model is initialized.

Setting Up the Model

    const MODEL_NAME = "gemini-1.0-pro";
    const API_KEY = process.env.GEMINI_API_KEY; // Environment variable

    const genAI = new GoogleGenerativeAI(API_KEY);
    const model = genAI.getGenerativeModel({ model: MODEL_NAME });

3. Loading Previous Conversations

  • The script checks for an existing history.json file. If it exists, it reads and parses the chat history, allowing users to continue previous conversations seamlessly.

Managing Chat History

    let history = [];

    if (fs.existsSync("history.json")) {
        const historyData = fs.readFileSync("history.json");
        history = JSON.parse(historyData);
    }
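Under the hood, history.json is simply a JSON array of alternating user and model turns, in the shape that Gemini's startChat expects (the same shape the script pushes later when saving). An illustrative example:

```json
[
  { "role": "user",  "parts": [{ "text": "How do I reverse a list in Python?" }] },
  { "role": "model", "parts": [{ "text": "Use my_list[::-1] for a reversed copy, or my_list.reverse() in place." }] }
]
```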

4. Greeting the User

  • A friendly greeting message is displayed to the user, styled with chalk for clarity.

  • The chat session is started with the loaded history, allowing the AI to respond contextually.

User Interaction

    console.log(chalk.hex('#00FF00')("Hey " + process.env.TERM_AI_USER + ", how may I help you today?"));

    const chat = model.startChat({
        history: history
    });

5. Input Handling and Loading Animation

  • The input encoding is set to UTF-8 to handle user input correctly.

  • A loading animation is created to provide visual feedback while waiting for the AI's response.

  • The promptUser function displays the user's prompt in the terminal.

Handling User Input

    process.stdin.setEncoding("utf-8");

    const loadingBar = () => {
        const frames = ['|', '/', '-', '\\'];
        let i = 0;
        return setInterval(() => {
            process.stdout.write("\r" + chalk.hex("#FF6161")("TermAI: " + frames[i++ % frames.length]));
        }, 250);
    };

    const promptUser = () => {
        process.stdout.write(chalk.hex("#3776FF")(process.env.TERM_AI_USER + ": "));
    };

    promptUser();

6. Listening for User Input

  • The script listens for user input. If the user types exit, the chat session ends, and the history file is deleted if it exists.

Processing User Messages

    process.stdin.on("data", async (input) => {
        input = input.trim();

        if (input.toLowerCase() === "exit") {
            console.log("Exiting chat.");
            if (fs.existsSync("history.json")) {
                fs.unlinkSync("history.json");
                console.log("History deleted.");
            }
            process.exit();
        }

7. Sending User Input to the AI

  • The user input is trimmed and sent to the Gemini API using the sendMessage method.

  • A loading animation is displayed while waiting for the AI's response.

  • Once the response is received, the loading animation is cleared, and the response is rendered in Markdown format using marked.

  • The conversation history is updated and saved back to history.json.

Sending Messages to the AI

        const loader = loadingBar();
        process.stdin.pause();

        const result = await chat.sendMessage(input);
        clearInterval(loader);
        console.log("\r" + chalk.hex("#FF6161")("TermAI:") + " 😊");
        process.stdout.write(marked(result.response.text()));

        history.push(
            {
                role: "user",
                parts: [{ text: input }]
            },
            {
                role: "model",
                parts: [{ text: result.response.text() }]
            }
        );

        fs.writeFileSync("history.json", JSON.stringify(history));
        process.stdin.resume();
        promptUser();
    });
}

run();

Conclusion

The TermAI project provides a powerful and user-friendly way to interact with AI models directly from the terminal. By leveraging the Gemini API, this assistant can handle conversations, maintain context, and provide responses in a visually appealing format. With its simple setup and efficient design, TermAI is an exciting tool for developers and tech enthusiasts looking to integrate AI into their terminal workflows. Whether you want to ask questions, brainstorm ideas, or simply explore the capabilities of AI, TermAI offers a robust solution right at your fingertips. Feel free to explore the TermAI GitHub repository for more examples and documentation, and start your journey into terminal-based AI today!
