Chinese Worker

A self-hosted AI agent framework built with Laravel 12. Create intelligent agents with pluggable AI backends, tool execution, web search, and multi-turn conversations.

Features

  • Multi-Backend AI Support - Switch between Ollama (local), Anthropic Claude, or OpenAI
  • Agent Management - Create agents with custom system prompts, tools, and model configurations
  • Multi-Turn Conversations - Stateful conversations with message history and context
  • Tool Execution - Built-in tools (bash, file operations, search) plus custom tool definitions
  • Document Ingestion - Multi-format document processing with text extraction, cleaning, and chunking
  • RAG System - Retrieval-Augmented Generation with pgvector embeddings, hybrid search, and automatic context injection
  • Web Search & Fetch - Integrated SearXNG search and web content extraction
  • Real-Time Streaming - Server-Sent Events for live response streaming
  • System Prompt Templating - Blade-based prompts with variable substitution
  • Queue Processing - Background job processing with Horizon monitoring
  • Context Filter - Automatic context-window management with pluggable strategies to prevent overflow
  • Modern Frontend - Vue 3 + Inertia.js SPA with Tailwind CSS
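
The hybrid search merges the dense (pgvector) and sparse (full-text) result lists with Reciprocal Rank Fusion. A minimal sketch of that fusion step — the function name, weighting knob, and `k` default are illustrative, not this project's actual API:

```python
def rrf_fuse(rankings, weights=None, k=60):
    """Merge several ranked lists of document ids into one fused ranking.

    Each ranking lists ids best-first.  A document's fused score is the
    weighted sum of 1 / (k + rank) over every list it appears in, so items
    ranked well by multiple retrievers bubble to the top.
    """
    weights = weights or [1.0] * len(rankings)
    scores = {}
    for weight, ranking in zip(weights, rankings):
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + weight / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

dense = ["a", "b", "c"]   # pgvector similarity order
sparse = ["b", "d", "a"]  # full-text (FTS) order
print(rrf_fuse([dense, sparse]))  # ['b', 'a', 'd', 'c']
```

Because only ranks matter, RRF needs no score normalisation between the two retrievers, which is why it is a common default for hybrid search.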

Quick Start

# Clone and enter the project
git clone <repository-url> chinese-worker
cd chinese-worker

# Copy environment file
cp .env.example .env

# Start services with Sail (if vendor/ does not exist yet, run `composer install` first)
./vendor/bin/sail up -d

# Install dependencies and setup
./vendor/bin/sail composer setup

# Pull an AI model (required for Ollama)
./vendor/bin/sail exec ollama ollama pull llama3.1

# Access the application
open http://localhost
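
With the stack running, conversation replies arrive over Server-Sent Events. Decoding that stream on the client side is protocol-standard; a minimal parser sketch (the sample events below are made up for illustration, not this API's actual payloads):

```python
def parse_sse(stream):
    """Yield the data payload of each event in a Server-Sent Events stream.

    Events are separated by a blank line; lines starting with "data:" carry
    the payload, and consecutive data lines are joined with newlines.
    """
    data_lines = []
    for raw in stream:
        line = raw.rstrip("\n")
        if line == "":                 # blank line terminates the event
            if data_lines:
                yield "\n".join(data_lines)
            data_lines = []
        elif line.startswith("data:"):
            data_lines.append(line[5:].lstrip(" "))  # drop field name + space
    if data_lines:                     # flush a final unterminated event
        yield "\n".join(data_lines)

sample = ["data: Hello\n", "\n", "data: world\n", "\n"]
print(list(parse_sse(sample)))  # ['Hello', 'world']
```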

Without Docker

See Installation Guide for production deployment without Docker.

Requirements

  • PHP 8.2+
  • PostgreSQL 14+ with pgvector extension
  • Redis
  • Node.js 20+
  • Ollama, Anthropic API, or OpenAI API (at least one AI backend)
  • SearXNG (optional, for web search)

See Requirements for full details.

Documentation

All documentation is in the docs/guide/ directory:

  • Getting Started
  • Setup & Installation
  • Features
  • Operations
  • Reference

Development

# Start all services (PHP, queue, logs, Vite)
composer dev

# Run tests
composer test

# Format code
composer lint

Architecture Overview

┌─────────────────────────────────────────────────────────────────┐
│                         Web UI (Vue 3)                          │
│                      Inertia.js + Tailwind                      │
└─────────────────────────────────────────────────────────────────┘
                                │
┌─────────────────────────────────────────────────────────────────┐
│                        Laravel 12 API                           │
│              Sanctum Auth │ Form Requests │ Resources           │
└─────────────────────────────────────────────────────────────────┘
                                │
   ┌────────────────┬───────────┼───────────┬────────────────┐
   ▼                ▼           ▼           ▼                ▼
┌──────────┐  ┌───────────┐  ┌───────┐  ┌─────────┐  ┌────────────┐
│  Agents  │  │Convers-   │  │ Tools │  │Documents│  │  Search &  │
│  System  │  │ations     │  │ Exec  │  │Extract →│  │  WebFetch  │
│  Prompts │  │ Messages  │  │       │  │Clean →  │  │            │
│          │  │           │  │       │  │Chunk    │  │            │
└──────────┘  └───────────┘  └───────┘  └────┬────┘  └────────────┘
        │             │           │           │              │
        └─────────────┴───────────┼───────────┴──────────────┘
                                  │
┌─────────────────────────────────────────────────────────────────┐
│                     RAG Pipeline                                │
│  EmbeddingService → RetrievalService → RAGContextBuilder        │
│  pgvector (dense) │ FTS (sparse) │ Hybrid (RRF)                │
└─────────────────────────────────────────────────────────────────┘
                                  │
┌─────────────────────────────────────────────────────────────────┐
│                       AIBackendManager                          │
│           Ollama │ vLLM │ Anthropic Claude │ OpenAI             │
└─────────────────────────────────────────────────────────────────┘
                                │
┌─────────────────────────────────────────────────────────────────┐
│                     Background Jobs                             │
│  ProcessConversationTurn │ ProcessDocumentJob │ EmbedChunksJob  │
└─────────────────────────────────────────────────────────────────┘
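
The Documents stage above (Extract → Clean → Chunk) prepares text for embedding. The chunking step can be sketched as fixed-size windows with overlap — the size and overlap values here are illustrative, not the project's actual defaults:

```python
def chunk_text(text, size=500, overlap=50):
    """Split cleaned text into overlapping fixed-size chunks for embedding.

    The overlap keeps context that straddles a chunk boundary retrievable
    from both neighbouring chunks.
    """
    step = size - overlap
    chunks = []
    for start in range(0, max(len(text) - overlap, 1), step):
        chunks.append(text[start:start + size])
    return chunks

doc = "x" * 1200
print([len(c) for c in chunk_text(doc)])  # [500, 500, 300]
```

Each chunk then flows through EmbeddingService into pgvector, where RetrievalService can match it by dense similarity, sparse FTS, or both.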

Stack

Component       Technology
Backend         Laravel 12, PHP 8.2+
Frontend        Vue 3, Inertia.js v2, TypeScript
Styling         Tailwind CSS v4
Database        PostgreSQL 14+ with pgvector
Cache/Queue     Redis
AI Backends     Ollama, vLLM, Anthropic, OpenAI
RAG             pgvector embeddings, hybrid search (dense + sparse + RRF)
Search          SearXNG
Queue Monitor   Laravel Horizon
WebSockets      Laravel Reverb
Auth            Laravel Sanctum + Fortify

License

MIT License