AI Microservice DEMO
Examples and test tools for our AI services. The front-end code doubles as example code you can import into your own projects.
🚀 Recommended: Normalized LLM API
Normalized LLM Respond API
The unified, vendor-agnostic endpoint for all LLM interactions. This is the recommended way to interact with AI models through our platform; a usage sketch follows the feature list below.
- Provider-agnostic: Works with OpenRouter and future providers
- Tools/Function Calling: Full JSON Schema-based tool support
- Structured Streaming: Real-time SSE streaming with token, tool_call, and done events
- Multi-part Content: Supports text and image inputs
- Normalized Responses: Consistent format regardless of provider
- Error Normalization: Standardized error types and messages
- Usage Tracking: Automatic usage recording for billing
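A minimal sketch of a streaming call against this endpoint. The path (/api/llm/respond), request fields, and event payload shapes here are assumptions for illustration, so verify the real contract against the service docs:

```ts
// Sketch of a streaming call to the Normalized LLM Respond API.
// Endpoint path, request fields, and event payloads are assumptions.

interface NormalizedEvent {
  type: "token" | "tool_call" | "done";
  delta?: string;                                  // text chunk on "token"
  toolCall?: { name: string; arguments: unknown }; // payload on "tool_call"
}

async function streamRespond(
  prompt: string,
  onToken: (text: string) => void,
): Promise<void> {
  const res = await fetch("/api/llm/respond", { // hypothetical path
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      stream: true,
      messages: [{ role: "user", content: prompt }],
      // JSON Schema-based tool declaration (assumed field names).
      tools: [{
        name: "get_weather",
        description: "Look up current weather for a city",
        parameters: {
          type: "object",
          properties: { city: { type: "string" } },
          required: ["city"],
        },
      }],
    }),
  });
  if (!res.ok || !res.body) throw new Error(`LLM request failed: ${res.status}`);

  // Read the SSE stream; each `data:` line carries one JSON event.
  const reader = res.body.getReader();
  const decoder = new TextDecoder();
  let buffer = "";
  while (true) {
    const { value, done } = await reader.read();
    if (done) break;
    buffer += decoder.decode(value, { stream: true });
    const lines = buffer.split("\n");
    buffer = lines.pop() ?? ""; // hold back any partial trailing line
    for (const line of lines) {
      if (!line.startsWith("data:")) continue;
      const event = JSON.parse(line.slice(5).trim()) as NormalizedEvent;
      if (event.type === "token" && event.delta) onToken(event.delta);
      else if (event.type === "tool_call") console.log("tool call:", event.toolCall);
      else if (event.type === "done") return;
    }
  }
}
```

Dispatching on the event type keeps the client logic identical no matter which provider served the request, which is the point of the normalized format.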
Legacy Endpoints
Note: The endpoints below are legacy, provider-specific interfaces. For new integrations, please use the Normalized LLM Respond API above, which provides a unified interface, a richer feature set, and future-proofing.
Legacy Chat & Completion
Chat (Streaming)
Real-time, incremental chat responses.
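A minimal consumer sketch, assuming a hypothetical /api/chat/stream path that returns the reply as raw incremental text:

```ts
// Consume the legacy streaming chat endpoint chunk by chunk.
// Path and request body are assumptions for illustration.
async function legacyChatStream(message: string, onChunk: (text: string) => void) {
  const res = await fetch("/api/chat/stream", { // hypothetical path
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ message }),
  });
  if (!res.body) throw new Error("No response stream");
  const reader = res.body.getReader();
  const decoder = new TextDecoder();
  while (true) {
    const { value, done } = await reader.read();
    if (done) break;
    onChunk(decoder.decode(value, { stream: true })); // forward each chunk as it arrives
  }
}
```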
Chat (No Streaming)
Standard request-response chat.
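A one-shot sketch; the path and field names are assumptions for illustration:

```ts
// Single request/response round trip against the legacy chat endpoint.
async function legacyChat(message: string): Promise<string> {
  const res = await fetch("/api/chat", { // hypothetical path
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ message }),
  });
  if (!res.ok) throw new Error(`Chat failed: ${res.status}`);
  const data = await res.json();
  return data.reply; // assumed response field
}
```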
Single Message
Send a single prompt for a one-off completion.
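A sketch of the one-off completion call, with an assumed path and response field:

```ts
// Single-prompt completion. Path and field names are assumptions.
async function singleMessage(prompt: string): Promise<string> {
  const res = await fetch("/api/message", { // hypothetical path
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ prompt }),
  });
  const data = await res.json();
  return data.text; // assumed response field
}
```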
Single Message (JSON Response)
Get a structured JSON object from a single prompt.
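A sketch of the JSON variant; the path is an assumption, and the result shape shown is purely illustrative, since the actual structure depends on your prompt:

```ts
// JSON-response variant: the body is a structured object rather than prose.
interface SentimentResult { label: string; score: number } // illustrative shape

async function singleMessageJson(prompt: string): Promise<SentimentResult> {
  const res = await fetch("/api/message/json", { // hypothetical path
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ prompt }),
  });
  return (await res.json()) as SentimentResult; // server returns parsed JSON
}
```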
WebSocket Chat
Pure WebSocket-based chat for low-latency interaction.
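A sketch of the WebSocket flow; the URL and message framing are assumptions for illustration:

```ts
// Open the chat socket and wire up handlers. URL and framing are assumed.
function openChatSocket(onReply: (text: string) => void): WebSocket {
  const ws = new WebSocket("wss://example.com/ws/chat"); // hypothetical URL
  ws.onmessage = (ev) => onReply(String(ev.data));       // server pushes replies
  ws.onerror = (ev) => console.error("chat socket error", ev);
  return ws;
}

// Send user input once the connection is open.
const ws = openChatSocket((text) => console.log("assistant:", text));
ws.onopen = () => ws.send(JSON.stringify({ message: "Hello!" })); // assumed framing
```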
Conversational Assistant
Chat with AI models through OpenRouter, preserving conversation history across turns.
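A sketch of a client that resends the accumulated history on each turn, assuming a hypothetical /api/assistant path and response field:

```ts
// Conversational client: append the user turn, send the full history,
// then append the assistant's reply. Path and field names are assumed.
type Msg = { role: "user" | "assistant"; content: string };

async function converse(history: Msg[], userInput: string): Promise<Msg[]> {
  const next: Msg[] = [...history, { role: "user", content: userInput }];
  const res = await fetch("/api/assistant", { // hypothetical path
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ messages: next }), // full history on every turn
  });
  const { reply } = await res.json();         // assumed response field
  return [...next, { role: "assistant", content: reply }];
}
```

In this sketch the caller owns the history: start with an empty array and reassign it with the returned value after every call to converse.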