LLPhant is a PHP library for building generative AI applications. Created by Maxime Thoonsen, it provides a unified interface for working with multiple LLM providers, including OpenAI, Anthropic, Mistral, LM Studio, and Ollama. The library is inspired by LangChain and LlamaIndex, bringing similar patterns to PHP developers.
Multi-Provider LLM Support
LLPhant abstracts away provider differences so you can switch between AI services with minimal code changes. Whether you are using OpenAI's GPT models, Anthropic's Claude, Mistral, LM Studio, or running local models through Ollama, the interface remains consistent:
// OpenAI
$chat = new OpenAIChat();
$response = $chat->generateText('What is the capital of France?');

// Anthropic Claude
$chat = new AnthropicChat(new AnthropicConfig(AnthropicConfig::CLAUDE_3_5_SONNET));
$response = $chat->generateText('What is the capital of France?');

// Local models via Ollama
$config = new OllamaConfig();
$config->model = 'llama2';
$chat = new OllamaChat($config);
$response = $chat->generateText('What is the capital of France?');
The library also supports streaming responses for real-time chat interfaces, token usage tracking for cost monitoring, and vision capabilities for image analysis.
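As a minimal sketch of the streaming support: in current LLPhant versions the chat classes expose a generateStreamOfText() method that returns a PSR-7 StreamInterface, which you can read chunk by chunk and flush to the client; verify the method name against the version you have installed.

```php
<?php

use LLPhant\Chat\OpenAIChat;

$chat = new OpenAIChat();

// generateStreamOfText() returns a PSR-7 StreamInterface rather than a
// finished string, so tokens can be forwarded as they arrive.
$stream = $chat->generateStreamOfText('Tell me a short story.');

while (!$stream->eof()) {
    echo $stream->read(1024); // emit each chunk as soon as it is available
    flush();                  // push it to the browser for a real-time effect
}
```

This pattern pairs naturally with server-sent events or chunked HTTP responses in a chat UI.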
Embeddings and Vector Storage
LLPhant includes a complete pipeline for building Retrieval-Augmented Generation (RAG) applications. You can read documents from various sources (PDF, Word, text files), split them into chunks, generate embeddings, and store them in your preferred vector database:
// Read and process documents
$reader = new FileDataReader(__DIR__.'/documents');
$documents = $reader->getDocuments();

// Split into chunks for embedding
$splitDocuments = DocumentSplitter::splitDocuments($documents, 800);

// Generate embeddings
$embeddingGenerator = new OpenAI3SmallEmbeddingGenerator();
$embeddedDocuments = $embeddingGenerator->embedDocuments($splitDocuments);

// Store in PostgreSQL with pgvector
$vectorStore = new DoctrineVectorStore($entityManager, Document::class);
$vectorStore->addDocuments($embeddedDocuments);

// Search for similar content
$embedding = $embeddingGenerator->embedText('search query');
$results = $vectorStore->similaritySearch($embedding, 5);
Vector store support includes Doctrine (PostgreSQL with pgvector), Redis, Elasticsearch, MongoDB, ChromaDB, Qdrant, Milvus, AstraDB, OpenSearch, Pinecone, and Typesense.
Question Answering with RAG
The QuestionAnswering class handles the entire RAG workflow: retrieving relevant documents from your vector store and generating contextualized responses:
use LLPhant\Query\SemanticSearch\QuestionAnswering;

$qa = new QuestionAnswering($vectorStore, $embeddingGenerator, $chat);

$response = $qa->answerQuestion('What are the main topics covered in the documentation?');
You can customize the system message template to control how the AI uses retrieved context, add guardrails for safety, and implement multi-query transformations to improve retrieval quality.
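For example, LLPhant's QuestionAnswering class exposes a public systemMessageTemplate property in which a {context} placeholder is replaced with the retrieved documents before the question is sent to the LLM. The template wording below is illustrative, not the library default; check the property name against your installed version.

```php
<?php

// $qa is the QuestionAnswering instance from the previous example.
// The {context} placeholder is filled with the retrieved chunks at query time.
$qa->systemMessageTemplate = 'Answer strictly from the context below. '
    .'If the answer is not in the context, say you do not know.'
    ."\n\nContext:\n{context}";

$response = $qa->answerQuestion('How do I configure the vector store?');
```

Constraining the model to the retrieved context like this is a common guardrail against hallucinated answers.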
Function Calling and Tools
LLPhant supports function calling (tools), allowing your LLM to interact with external APIs and services. Define your tools as PHP classes and the LLM can decide when to invoke them based on the conversation context.
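The shape of this API, as a hedged sketch: LLPhant's documentation describes FunctionInfo and Parameter classes for declaring a callable method on a plain PHP object, registered on the chat via addTool(). The WeatherService class below is a hypothetical example, and the exact namespaces may differ between versions.

```php
<?php

use LLPhant\Chat\FunctionInfo\FunctionInfo;
use LLPhant\Chat\FunctionInfo\Parameter;
use LLPhant\Chat\OpenAIChat;

// A plain PHP class exposing the behavior the LLM may invoke (hypothetical).
class WeatherService
{
    public function currentWeather(string $city): string
    {
        // A real implementation would call a weather API here.
        return "It is sunny in $city.";
    }
}

$weather = new WeatherService();

// Describe the method and its parameters so the model knows when to call it.
$cityParam = new Parameter('city', 'string', 'The city to get the weather for');
$function = new FunctionInfo(
    'currentWeather',
    $weather,
    'Get the current weather for a given city',
    [$cityParam]
);

$chat = new OpenAIChat();
$chat->addTool($function);

// The model can now decide to call WeatherService::currentWeather()
// when the conversation makes it relevant.
$response = $chat->generateText('What is the weather like in Paris?');
```

The key point is that the tool is ordinary PHP: the library handles serializing the function schema for the provider and dispatching the call when the model requests it.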
You can learn more about LLPhant and find detailed documentation at llphant.readthedocs.org and in the GitHub repository.