Triepod AI System Architecture
Overview
The Triepod AI system is built on a modular architecture that separates concerns between request routing, workflow management, and agent execution. This document outlines how these components interact to process business requests through various workflows. The system is designed with containerization in mind, enabling modular deployment, independent scaling, and easier maintenance. The architecture also includes comprehensive testing capabilities and knowledge acquisition tools.
Core Components
1. Request Router
The RequestRouter is responsible for analyzing incoming requests and determining which specialized agent types should process them. It uses keyword matching and confidence scoring to make these decisions, as sketched below.
Key Responsibilities:
- Analyze request content to identify relevant agent types
- Calculate confidence scores for potential agent matches
- Manage the agent processing queue and dependencies
- Track routing history for auditing and improvement
Location: ai/router/request_router.py
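As an illustration of keyword matching with confidence scoring, here is a minimal routing sketch. The agent names, keyword sets, and the 0.3 threshold are assumptions for the example, not the actual RequestRouter implementation.

```python
# Illustrative sketch only -- not the actual RequestRouter implementation.
# Agent names, keyword sets, and the 0.3 threshold are assumptions.
AGENT_KEYWORDS = {
    "aa_agent": {"architecture", "assessment", "design", "infrastructure"},
    "sow_agent": {"statement", "work", "scope", "deliverables", "pricing"},
}

def route_request(text: str, threshold: float = 0.3) -> list[tuple[str, float]]:
    """Return (agent_type, confidence) pairs whose score clears the threshold."""
    tokens = set(text.lower().split())
    matches = []
    for agent, keywords in AGENT_KEYWORDS.items():
        confidence = len(tokens & keywords) / len(keywords)  # fraction of keywords present
        if confidence >= threshold:
            matches.append((agent, confidence))
    # Process highest-confidence agents first
    return sorted(matches, key=lambda m: m[1], reverse=True)

print(route_request("We need an architecture assessment of our infrastructure"))
# [('aa_agent', 0.75)]
```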
2. Utility Components
The system includes several utility components that provide critical infrastructure:
API Key Loader (ai/utils/api_loader.py):
- Securely loads API keys for external services
- Provides authentication for LLM and vector database operations
- Supports multiple model providers (a minimal loader sketch follows)
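A minimal sketch of environment-based key loading, assuming keys live in environment variables; the variable names and provider list are illustrative, not the real ApiKeyLoader API.

```python
# Hypothetical sketch; environment variable names and providers are assumptions.
import os

PROVIDER_ENV_VARS = {
    "openai": "OPENAI_API_KEY",
    "anthropic": "ANTHROPIC_API_KEY",
    "pinecone": "PINECONE_API_KEY",
}

def load_api_key(provider: str) -> str:
    """Fetch a provider's key from the environment, failing loudly if absent."""
    env_var = PROVIDER_ENV_VARS.get(provider)
    if env_var is None:
        raise ValueError(f"Unknown provider: {provider!r}")
    key = os.environ.get(env_var)
    if not key:
        raise RuntimeError(f"{env_var} is not set; cannot authenticate {provider}")
    return key
```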
Configuration Helper (ai/utils/config_helper.py):
- Provides unified access to configuration settings
- Abstracts between PostgreSQL and JSON storage implementations
- Manages environment-specific configuration (a minimal sketch follows)
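A sketch of how different storage backends can sit behind one interface; the class and method names are hypothetical, not the real ConfigHelper.

```python
# Hypothetical backend-abstraction sketch; names are assumptions.
import json

class JsonConfigBackend:
    """One possible backend: settings stored in a JSON file."""
    def __init__(self, path: str):
        with open(path) as f:
            self._data = json.load(f)

    def get(self, key: str, default=None):
        return self._data.get(key, default)

class ConfigHelper:
    """Uniform get() regardless of whether settings live in JSON or PostgreSQL."""
    def __init__(self, backend):
        self._backend = backend  # any object exposing get(key, default)

    def get(self, key: str, default=None):
        return self._backend.get(key, default)

config = ConfigHelper(JsonConfigBackend("config.json"))
db_url = config.get("database_url", "postgresql://localhost/triepod")
```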
Database Connector (ai/utils/db_connector.py):
- Manages database connections and schema
- Stores conversation history and workflow artifacts
- Implements vector similarity search functionality
Vector Database Integration:
- Pinecone Connector (ai/vector_db/pinecone_connector.py): Manages vector database interactions
- Document Processor (ai/vector_db/document_processor.py): Prepares documents for vector storage (chunking sketched below)
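A minimal illustration of splitting a document into overlapping chunks before embedding; the chunk size, overlap, and function name are assumptions for the sketch, not the real DocumentProcessor interface.

```python
# Minimal chunking sketch; the 500-character chunks and 50-character
# overlap are illustrative, not the real DocumentProcessor settings.
def chunk_document(text: str, chunk_size: int = 500, overlap: int = 50) -> list[str]:
    """Split text into overlapping chunks sized for embedding."""
    assert chunk_size > overlap, "step must be positive to terminate"
    chunks, start = [], 0
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        start += chunk_size - overlap  # step forward, keeping some overlap
    return chunks
```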
For detailed information on these utilities, see Utilities Documentation.
3. Workflow Router Connector
The WorkflowRouterConnector
serves as the integration point between the request routing system and the workflow engine. It maps agent types to specific workflows and manages the transition from routing to execution.
Key Responsibilities:
- Map agent types to corresponding workflows
- Initiate workflows based on routing decisions
- Manage handoffs between different workflows
- Track workflow execution history
Location: ai/router/workflow_connector.py
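A minimal sketch of mapping agent types to workflows and handing off to the engine; the mapping table and the engine.initiate call are assumptions for illustration, not the real WorkflowRouterConnector API.

```python
# Illustrative mapping sketch; workflow names and engine.initiate are assumptions.
AGENT_TO_WORKFLOW = {
    "aa_agent": "architectural_assessment",
    "sow_agent": "statement_of_work",
}

def start_workflow(agent_type: str, request_id: str, engine) -> str:
    workflow = AGENT_TO_WORKFLOW.get(agent_type)
    if workflow is None:
        raise ValueError(f"No workflow mapped for agent type {agent_type!r}")
    # Delegate execution to the workflow engine
    return engine.initiate(workflow, request_id)
```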
4. Workflow Engine
The WorkflowEngine manages the execution of business process workflows, tracking the current phase of each workflow and ensuring orderly progression from one phase to the next (a minimal sketch follows below).
Key Responsibilities:
- Track workflow states and current phases
- Load phase-specific instructions from prompt files
- Manage transitions between phases
- Store and retrieve workflow artifacts
Location: ai/workflow_engine/workflow_engine.py
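A sketch of phase tracking, assuming phase instructions live as one prompt file per phase; the class name and directory layout are illustrative, not the real WorkflowEngine internals.

```python
# Hypothetical phase-tracking sketch; class names, per-phase prompt files,
# and the directory layout are assumptions.
from pathlib import Path

class WorkflowState:
    def __init__(self, workflow: str, phases: list[str]):
        self.workflow = workflow
        self.phases = phases
        self.index = 0  # position in the phase sequence

    @property
    def current_phase(self) -> str:
        return self.phases[self.index]

    def load_phase_prompt(self, prompt_root: str = "ai/prompts/phases") -> str:
        """Read the instruction file assumed to exist for the current phase."""
        path = Path(prompt_root) / self.workflow / f"{self.current_phase}.md"
        return path.read_text()

    def advance(self) -> str:
        """Move to the next phase, refusing to run past the final one."""
        if self.index + 1 >= len(self.phases):
            raise RuntimeError("Workflow is already in its final phase")
        self.index += 1
        return self.current_phase
```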
5. Process Handlers
Specialized process handlers implement the logic for specific business processes such as Architectural Assessment (AA) and Statement of Work (SOW); a hypothetical handler is sketched after the locations below.
Key Responsibilities:
- Implement process-specific logic
- Manage phase transitions within a process
- Generate phase-specific responses
Locations:
- AA Process:
ai/prompts/phases/aa_approval_handler_agent/
- SOW Process:
ai/prompts/phases/sow_approval_handler_agent/
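Building on the WorkflowState sketch above, a hypothetical handler might look like the following; the llm.complete call and the PHASE_COMPLETE sentinel are stand-ins for illustration, not the actual handler logic.

```python
# Hypothetical handler sketch building on WorkflowState above; the
# llm.complete call and PHASE_COMPLETE sentinel are stand-ins.
class AAApprovalHandler:
    """Process-specific logic for the Architectural Assessment workflow."""

    def handle(self, state, request: str, llm) -> str:
        prompt = state.load_phase_prompt()      # phase-specific instructions
        response = llm.complete(f"{prompt}\n\nRequest:\n{request}")
        if "PHASE_COMPLETE" in response:        # illustrative completion check
            state.advance()                     # move to the next AA phase
        return response
```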
Request Flow
1. A user submits a request to the system
2. The RequestRouter analyzes the request and determines appropriate agent types (see the wiring sketch after this list)
3. The WorkflowRouterConnector maps these agent types to workflows
4. The WorkflowEngine initiates the appropriate workflow
5. The workflow progresses through predefined phases
6. Phase-specific handlers generate appropriate responses
7. The workflow may hand off to another workflow (e.g., AA → SOW handoff)
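Reusing the illustrative helpers from the sketches above, steps 1 through 4 might be wired together like this:

```python
# Illustrative wiring of routing into workflow execution; builds on the
# hypothetical route_request and start_workflow sketches above.
def process_request(text: str, request_id: str, engine) -> str:
    matches = route_request(text)          # step 2: score candidate agents
    if not matches:
        return "No suitable agent type found for this request."
    agent_type, confidence = matches[0]    # highest-confidence agent first
    workflow_id = start_workflow(agent_type, request_id, engine)  # steps 3-4
    return f"Started workflow {workflow_id} for {agent_type} ({confidence:.0%})"
```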
Component Interaction Diagram
Main process flow:
┌───────────────┐      ┌────────────────────┐      ┌─────────────────┐
│               │      │                    │      │                 │
│ RequestRouter ├─────►│ WorkflowConnector  ├─────►│ WorkflowEngine  │
│               │      │                    │      │                 │
└───────────────┘      └────────────────────┘      └────────┬────────┘
                                                            │
                                                            ▼
                                                    ┌───────────────┐
                                                    │               │
                                                    │ Phase Handler │
                                                    │               │
                                                    └───────────────┘
Utility components and integrations:
┌─────────────────┐     ┌─────────────────┐     ┌─────────────────┐
│                 │     │                 │     │                 │
│  ApiKeyLoader   │────►│DocumentProcessor│────►│PineconeConnector│
│                 │     │                 │     │                 │
└─────────────────┘     └─────────────────┘     └─────────────────┘
         │                       │                       │
         │                       │                       │
         ▼                       ▼                       ▼
┌─────────────────┐     ┌─────────────────┐     ┌─────────────────┐
│                 │     │                 │     │                 │
│  ConfigHelper   │────►│ WorkflowEngine  │────►│PostgresConnector│
│                 │     │                 │     │                 │
└─────────────────┘     └─────────────────┘     └─────────────────┘
Supporting Components
1. Enhanced Testing Framework
The system includes a comprehensive testing framework with test management capabilities.
Key Components:
- EnhancedTestRunner: Manages test execution with filtering and reporting
- TestManagementDB: Tracks test results and history in a database
- Test decorators for managing test lifecycle and tracking (sketched below)
Key Features:
- Test run history tracking in a database
- Filtering tests by tags or deprecation status
- Detailed HTML and JSON reports
- Test run statistics and analysis
Location: ai/tests/utils/enhanced_test_runner.py
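As a rough illustration of decorator-based test tracking, the sketch below records status and duration; the decorator name and record format are assumptions, and a real runner would persist records to TestManagementDB rather than print them.

```python
# Illustrative decorator sketch; a real runner would persist records to
# TestManagementDB instead of printing them.
import functools
import time

def tracked_test(tags: tuple = (), deprecated: bool = False):
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            if deprecated:
                print({"test": fn.__name__, "status": "skipped"})
                return None
            start = time.monotonic()
            status = "error"  # overwritten below unless an unexpected error escapes
            try:
                result = fn(*args, **kwargs)
                status = "pass"
                return result
            except AssertionError:
                status = "fail"
                raise
            finally:
                print({"test": fn.__name__, "tags": tags, "status": status,
                       "seconds": round(time.monotonic() - start, 3)})
        return wrapper
    return decorator

@tracked_test(tags=("smoke",))
def test_addition():
    assert 1 + 1 == 2

test_addition()  # prints a pass record with duration
```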
2. Knowledge Acquisition Tools
Tools for acquiring and processing knowledge from external sources.
Key Components:
- WebCrawler: Enhanced web crawler for knowledge base article extraction
- ErrorManager: Sophisticated error handling and recovery system
- Storage interfaces for persisting acquired knowledge (strategy pattern sketched below)
Key Features:
- Authenticated crawling of knowledge base articles
- Robust error handling with recovery strategies
- Automatic artifact generation for documentation
- Modular design with strategy pattern implementation
Location: ai/tools/webcrawler-crawl4ai-enhanced.py
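The strategy pattern mentioned above can be sketched roughly as follows; the StorageStrategy interface and these class names are hypothetical, not the crawler's actual classes.

```python
# Strategy-pattern sketch; StorageStrategy and these class names are
# hypothetical, not the crawler's actual interfaces.
from abc import ABC, abstractmethod

class StorageStrategy(ABC):
    @abstractmethod
    def save(self, url: str, content: str) -> None: ...

class FileStorage(StorageStrategy):
    def save(self, url: str, content: str) -> None:
        name = url.rstrip("/").rsplit("/", 1)[-1] or "index"
        with open(f"{name}.md", "w") as f:
            f.write(content)

class Crawler:
    def __init__(self, storage: StorageStrategy):
        self.storage = storage  # swap in database or vector-DB storage freely

    def crawl(self, url: str, fetch) -> None:
        self.storage.save(url, fetch(url))  # fetch is any URL -> text callable
```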
Key Workflows
See the dedicated workflow documents for step-by-step details of each supported process.
Database Integration
The system uses database connections to:
- Store conversation histories
- Track workflow states
- Persist generated artifacts
- Log routing and process decisions
Integration Points
Vector Database Integration
The system uses vector embeddings for semantic search and retrieval of relevant documents:
- Document embeddings are stored in a vector database
- Semantic search finds context-relevant information
- Retrieved context enriches workflow phase processing (retrieval illustrated below)
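As a generic illustration of the retrieval step (the real system delegates this to the Pinecone connector, whose API is not reproduced here), cosine-similarity search over embeddings might look like:

```python
# Generic retrieval illustration; not the Pinecone connector's API.
import numpy as np

def top_k(query_vec: np.ndarray, doc_vecs: np.ndarray, k: int = 3) -> list[int]:
    """Return indices of the k most similar documents by cosine similarity."""
    q = query_vec / np.linalg.norm(query_vec)
    d = doc_vecs / np.linalg.norm(doc_vecs, axis=1, keepdims=True)
    scores = d @ q                       # cosine similarity per document
    return np.argsort(scores)[::-1][:k].tolist()
```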
External API Integration
The system integrates with external APIs for:
- OpenAI/Anthropic for language model capabilities
- HubSpot for ticket management (not yet implemented)
- Email systems for client communication
Error Handling and Resilience
The system implements comprehensive error handling:
Core Error Handling
- Graceful failure handling when connections fail
- Meaningful error messages
- Fallback behavior where possible
- Structured logging for debugging and audit
Advanced Error Management
- Error categorization and severity assessment
- Retry mechanisms with exponential backoff (sketched below)
- Recovery strategies for common failure scenarios
- Error statistics and reporting
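A minimal sketch of retry with exponential backoff and jitter; the retry count, base delay, and exception type are illustrative, not the ErrorManager's actual parameters.

```python
# Illustrative backoff sketch; retry count, base delay, and exception
# type are assumptions, not the ErrorManager's actual configuration.
import random
import time

def with_backoff(fn, retries: int = 4, base_delay: float = 0.5):
    """Retry fn on connection errors, doubling the delay each attempt."""
    for attempt in range(retries):
        try:
            return fn()
        except ConnectionError:
            if attempt == retries - 1:
                raise  # out of retries; surface the error
            delay = base_delay * (2 ** attempt) + random.uniform(0, 0.1)
            time.sleep(delay)  # wait before the next attempt
```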
Development Guidelines
- Always check existing architecture before implementing new features
- Follow the established documentation patterns
- Implement comprehensive testing for all components
- Maintain separation of concerns between routing, workflow management, and processing
- Follow containerization standards when building new services
Containerized Architecture
The Triepod RAG system utilizes a containerized microservices architecture, providing isolation, reproducibility, and simplified deployment of components.
Container Organization
graph TD
    Client[Client Application] --> API[API Gateway/Router Container]
    API --> RAG[RAG Engine Container]
    API --> WF[Workflow Engine Container]
    RAG --> VDB[Vector DB Connector Container]
    RAG --> AIR[Airtable API Service Container]
    VDB --> Pinecone[(Pinecone Vector DB)]
    AIR --> Airtable[(Airtable)]
    WF --> DB[(PostgreSQL Container)]
    subgraph Docker Environment
        API
        RAG
        WF
        VDB
        AIR
        DB
    end
Containerization Benefits
- Service Isolation: Each functional component runs in its own container, reducing dependency conflicts
- Reproducible Environments: Consistent environments across development, testing, and production
- Scalability: Independent scaling of individual components based on demand
- Simplified Deployment: Standardized deployment process across different environments
For comprehensive containerization details, standards, and implementation guides, refer to the Containerization Guide.