Eureka AI

Mistral AI Documentation

What is Mistral AI?

Mistral AI is a cutting-edge artificial intelligence service used in Eureka AI, Feedier's AI-powered solutions. It provides robust language models capable of handling a variety of natural language processing (NLP) tasks, including text generation, summarization, and embeddings.

Why do we use Mistral AI?

Mistral AI is a French-developed AI model, making it a preferred choice for Feedier due to its compliance with European data privacy regulations (GDPR) and its strong support for the French and European AI ecosystem. The advantages of using Mistral AI include:

  • Data sovereignty: Ensuring compliance with European regulations by using AI models hosted within the EU.
  • High-performance models: Mistral AI delivers state-of-the-art NLP capabilities for various tasks.
  • Flexibility: It offers multiple models tailored to different use cases, including chat-based completions and embeddings.

Which model should I use?

Model                   Use Case                                                    Output Type
ChatCompletion          General text generation and conversations                   Full response at once
ChatCompletionStream    Streaming text generation for real-time applications        Incremental response
Embedding               Text embeddings for search, classification, and ML tasks    Vector representation

How do I use a mission?

Missions are the interface for interacting with Mistral AI. To execute a mission, instantiate the mission class and call its execute() method:

$response = (new ReportName())->execute();

This pattern applies to all mission types, whether it's ChatCompletion, ChatCompletionStream, or Embedding.

Format: TEXT vs JSON

The format() method determines the type of output returned by the AI. There are two possible formats:

  • TEXT: Returns plain text output.
  • JSON: Returns structured JSON output.

Why is this important?

Setting the correct format ensures that the AI understands how to structure its response. If JSON format is required, it is crucial to explicitly mention this in the input prompt and provide an expected output example.

Example with JSON format:

class FeedbackSummary extends ChatCompletion
{
    protected function messages(): array
    {
        return [
            ['role' => 'system', 'content' => 'You are an AI assistant that provides structured feedback summaries. Return your response strictly in JSON format.'],
            ['role' => 'user', 'content' => 'Summarize the following feedback: {{ feedback_text }}. Your response must be a JSON object following this format:
            {
                "summary": "A short summary of the feedback.",
                "key_points": ["Point 1", "Point 2"]
            }']
        ];
    }

    protected function format(): FormatEnum
    {
        return FormatEnum::JSON;
    }

    protected function build(mixed $items): array
    {
        return json_decode(data_get($items, 'choices.0.message.content'), true);
    }
}

By specifying JSON format explicitly, the AI knows to structure the response accordingly. Additionally, parsing the response with json_decode() ensures it is properly handled as an array.
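Even with an explicit JSON instruction, the model's output is not guaranteed to be valid JSON, and json_decode() silently returns null on malformed input. A defensive decoding helper (a sketch, not part of the mission base classes — the function name is illustrative) can fail loudly instead:

```php
<?php

/**
 * Decode a model response that is expected to be a JSON object,
 * throwing on malformed output instead of silently returning null.
 */
function decodeModelJson(string $content): array
{
    // JSON_THROW_ON_ERROR raises JsonException on invalid JSON.
    $decoded = json_decode($content, true, 512, JSON_THROW_ON_ERROR);

    if (!is_array($decoded)) {
        throw new UnexpectedValueException('Expected a JSON object or array.');
    }

    return $decoded;
}
```

A build() method could call such a helper instead of a bare json_decode(), so a prompt-drift regression surfaces as an exception rather than a null propagating through the application.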

Models

Mistral AI provides three primary models, each serving distinct purposes:

ChatCompletion

ChatCompletion is designed for standard AI-driven conversational tasks. It processes a user input, generates a response, and returns the full response at once.

Example usage:

class ReportName extends ChatCompletion
{
    protected function messages(): array
    {
        return [
            ['role' => 'system', 'content' => 'You are an AI report assistant.'],
            ['role' => 'user', 'content' => 'Generate a report title based on these filters: {{ fql_humanized }}']
        ];
    }

    protected function format(): FormatEnum
    {
        return FormatEnum::TEXT;
    }

    protected function build(mixed $items): string
    {
        return data_get($items, 'choices.0.message.content');
    }
}

ChatCompletionStream

ChatCompletionStream works like ChatCompletion, but instead of returning the full response at once, it streams the AI-generated text in real time. This improves the user experience by displaying the response incrementally as it is generated.

Example usage:

class ActionPlan extends ChatCompletionStream
{
    protected function messages(): array
    {
        return [
            ['role' => 'user', 'content' => 'Generate an action plan for the given input.']
        ];
    }

    protected function build(mixed $items): string
    {
        return data_get($items, 'choices.0.message.content');
    }
}

Embedding

Embedding is used for transforming text into vector representations, making it useful for semantic search, text classification, and recommendation systems.

Example usage:

class EmbedSource extends Embedding
{
    protected function sources(): array
    {
        return ['This is a sample text to embed.'];
    }

    protected function build(mixed $items): array
    {
        return data_get($items, 'data.0.embedding');
    }
}
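Once texts are embedded, semantic search and classification typically reduce to comparing vectors. Cosine similarity is the usual metric; a minimal sketch (this helper is illustrative, not part of the Embedding mission) could look like:

```php
<?php

/**
 * Cosine similarity between two embedding vectors of equal length.
 * Returns a value in [-1, 1]; values closer to 1 indicate the
 * embedded texts are more semantically similar.
 */
function cosineSimilarity(array $a, array $b): float
{
    $dot = 0.0;
    $normA = 0.0;
    $normB = 0.0;

    foreach ($a as $i => $value) {
        $dot   += $value * $b[$i];
        $normA += $value ** 2;
        $normB += $b[$i] ** 2;
    }

    return $dot / (sqrt($normA) * sqrt($normB));
}
```

For example, ranking stored feedback against a query would mean embedding the query, computing its similarity against each stored vector, and sorting by the result.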

Environment Configuration

To use Mistral AI, ensure your .env file contains the following variable:

MISTRAL_API_KEY=your_api_key_here

This key is required to authenticate requests to the Mistral AI API.
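Assuming the usual Laravel convention (the config key shown here is illustrative — match your project's layout), the variable would be read once through a config file rather than via env() at call sites, since env() returns null when config is cached:

```php
// config/services.php — hypothetical entry
return [
    'mistral' => [
        'api_key' => env('MISTRAL_API_KEY'),
    ],
];
```

Code would then access it with config('services.mistral.api_key').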


This documentation provides an overview of Mistral AI, its models, and how to integrate them into Feedier's AI solutions.
