OpenAI Compatibility

Use ASI:One’s API with OpenAI’s client libraries for seamless integration.

Overview

ASI:One’s API is fully compatible with OpenAI’s Chat Completions API format. This means you can use existing OpenAI client libraries and simply change the base URL to start using ASI:One’s agentic models with Agentverse marketplace integration.

API Compatibility

Standard OpenAI Parameters

These parameters work exactly the same as OpenAI’s API:

  • model - Model name (use ASI:One model names)
  • messages - Chat messages array
  • temperature - Sampling temperature (0-2)
  • max_tokens - Maximum tokens in response
  • top_p - Nucleus sampling parameter
  • frequency_penalty - Frequency penalty (-2.0 to 2.0)
  • presence_penalty - Presence penalty (-2.0 to 2.0)
  • stream - Enable streaming responses

ASI:One-Specific Parameters

These ASI:One-specific parameters are also supported:

  • web_search - Enable web search capabilities
  • x-session-id - Session ID for agentic model persistence (header)
  • Tool calling parameters for Agentverse marketplace agent integration

See API Reference for complete parameter details.

Examples with OpenAI SDK

Install the OpenAI library: pip install openai

```python
# Complete Request & Response
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_ASI_ONE_API_KEY",
    base_url="https://api.asi1.ai/v1"
)

response = client.chat.completions.create(
    model="asi1",
    messages=[
        {"role": "system", "content": "Be precise and concise."},
        {"role": "user", "content": "What is agentic AI and how does it work?"}
    ],
    temperature=0.2,
    top_p=0.9,
    max_tokens=1000,
    presence_penalty=0,
    frequency_penalty=0,
    stream=False,
    extra_body={
        "web_search": False
    }
)

print(response.choices[0].message.content)
print(f"Usage: {response.usage}")
```
```python
# Agentic Model with Session - Working Example
import uuid
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_ASI_ONE_API_KEY",
    base_url="https://api.asi1.ai/v1"
)

# Generate a session ID for agentic models
session_id = str(uuid.uuid4())

print(f"🆔 Session ID: {session_id}")
print("🔄 Making request to agentic model...")

response = client.chat.completions.create(
    model="asi1",
    messages=[
        {"role": "user", "content": "Check the latest flight arrival status at Delhi airport."}
    ],
    extra_headers={
        "x-session-id": session_id
    },
    temperature=0.7,
    stream=True
)

print("📡 Response received, streaming content:\n")

# Handle the streaming response safely: some chunks carry no content delta
for chunk in response:
    if chunk.choices and chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="")

print("\n\n🏁 Stream completed!")
```
```python
# Web Search Integration
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_ASI_ONE_API_KEY",
    base_url="https://api.asi1.ai/v1"
)

response = client.chat.completions.create(
    model="asi1",
    messages=[
        {"role": "user", "content": "Latest developments in AI research"}
    ],
    extra_body={
        "web_search": True
    }
)

print(response.choices[0].message.content)
```

Understanding the Response Structure

After making a request, your response object includes both standard OpenAI fields and ASI:One-specific fields:

  • choices[0].message.content: The main model response
  • model: The model used
  • usage: Token usage details
  • executable_data: (ASI:One) Agent manifests and tool calls from Agentverse marketplace
  • intermediate_steps: (ASI:One) Multi-step reasoning traces
  • thought: (ASI:One) Model reasoning process
```python
# Accessing response fields
print(response.choices[0].message.content)  # Main answer
print(response.model)                       # Model name
print(response.usage)                       # Token usage

# ASI:One-specific fields
if hasattr(response, 'executable_data'):
    print(response.executable_data)       # Agent calls
if hasattr(response, 'intermediate_steps'):
    print(response.intermediate_steps)    # Reasoning steps
if hasattr(response, 'thought'):
    print(response.thought)               # Model thinking
```
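When a field may be absent, `getattr` with a default is a compact alternative to the `hasattr` checks above. The sketch below uses a stand-in class (not a real ASI:One response object) purely to illustrate the pattern:

```python
# Sketch: getattr with a default returns None instead of raising
# AttributeError when an ASI:One-specific field is missing.
class FakeResponse:
    """Stand-in for a response object; only 'thought' is present."""
    thought = "model reasoning here"

resp = FakeResponse()

thought = getattr(resp, "thought", None)
executable_data = getattr(resp, "executable_data", None)

print(thought)           # field exists, prints the value
print(executable_data)   # field missing, prints None
```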

LangChain Integration

ASI:One’s OpenAI compatibility means you can use it directly with LangChain’s ChatOpenAI class. Simply configure the base URL and API key.

Install LangChain: pip install langchain-openai

```python
# Basic LangChain Integration
import os
from langchain_openai import ChatOpenAI
from langchain_core.messages import HumanMessage, SystemMessage

llm = ChatOpenAI(
    model="asi1",
    base_url="https://api.asi1.ai/v1",
    api_key=os.getenv("ASI_ONE_API_KEY"),
    temperature=0.7,
)

messages = [
    SystemMessage(content="You are a helpful AI assistant."),
    HumanMessage(content="What is agentic AI?")
]

response = llm.invoke(messages)
print(response.content)
```
```python
# Streaming with LangChain
import os
from langchain_openai import ChatOpenAI
from langchain_core.messages import HumanMessage

llm = ChatOpenAI(
    model="asi1",
    base_url="https://api.asi1.ai/v1",
    api_key=os.getenv("ASI_ONE_API_KEY"),
    streaming=True,
)

for chunk in llm.stream([HumanMessage(content="Explain blockchain in simple terms")]):
    print(chunk.content, end="", flush=True)
```
```python
# Structured Output with Pydantic
import os
from langchain_openai import ChatOpenAI
from pydantic import BaseModel, Field

class MovieRecommendation(BaseModel):
    """A movie recommendation with details."""
    title: str = Field(description="The movie title")
    year: int = Field(description="Release year")
    genre: str = Field(description="Primary genre")
    reason: str = Field(description="Why this movie is recommended")

llm = ChatOpenAI(
    model="asi1",
    base_url="https://api.asi1.ai/v1",
    api_key=os.getenv("ASI_ONE_API_KEY"),
)

# Use with_structured_output for type-safe responses
structured_llm = llm.with_structured_output(MovieRecommendation)

result = structured_llm.invoke("Recommend a sci-fi movie for someone who loved Inception")
print(f"Title: {result.title}")
print(f"Year: {result.year}")
print(f"Genre: {result.genre}")
print(f"Reason: {result.reason}")
```
```python
# LangChain with Tool Calling
import os
from langchain_openai import ChatOpenAI
from langchain_core.tools import tool

@tool
def get_weather(city: str) -> str:
    """Get the current weather for a city."""
    # In production, call a real weather API
    return f"The weather in {city} is sunny, 22°C"

@tool
def search_restaurants(city: str, cuisine: str) -> str:
    """Search for restaurants in a city by cuisine type."""
    return f"Found 5 {cuisine} restaurants in {city}"

llm = ChatOpenAI(
    model="asi1",
    base_url="https://api.asi1.ai/v1",
    api_key=os.getenv("ASI_ONE_API_KEY"),
)

# Bind tools to the model
llm_with_tools = llm.bind_tools([get_weather, search_restaurants])

response = llm_with_tools.invoke("What's the weather in Tokyo and find me some sushi restaurants there?")
print(response.tool_calls)
```
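The tool calls the model returns still have to be executed by your code. The sketch below shows that dispatch step using a hard-coded `tool_calls` list in the shape LangChain produces; in a real run it would come from `response.tool_calls`, and the results would be sent back to the model as tool messages:

```python
# Sketch: dispatch tool calls returned by the model to local functions.
# The tool_calls list here is a hand-written stand-in for response.tool_calls.

def get_weather(city: str) -> str:
    """Stub tool: return the current weather for a city."""
    return f"The weather in {city} is sunny, 22°C"

def search_restaurants(city: str, cuisine: str) -> str:
    """Stub tool: search for restaurants in a city by cuisine type."""
    return f"Found 5 {cuisine} restaurants in {city}"

TOOLS = {"get_weather": get_weather, "search_restaurants": search_restaurants}

# Example tool calls, mirroring LangChain's {"name", "args", "id"} dicts
tool_calls = [
    {"name": "get_weather", "args": {"city": "Tokyo"}, "id": "call_1"},
    {"name": "search_restaurants", "args": {"city": "Tokyo", "cuisine": "sushi"}, "id": "call_2"},
]

results = []
for call in tool_calls:
    fn = TOOLS[call["name"]]                 # look up the local function by name
    results.append({"tool_call_id": call["id"], "content": fn(**call["args"])})

for r in results:
    print(r["content"])
```

Each result would typically be wrapped in a `ToolMessage` and appended to the conversation before invoking the model again.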

LangChain with Session Persistence

For agentic workflows that require session persistence, pass the session ID via default headers:

```python
import os
import uuid
from langchain_openai import ChatOpenAI
from langchain_core.messages import HumanMessage

session_id = str(uuid.uuid4())

llm = ChatOpenAI(
    model="asi1",
    base_url="https://api.asi1.ai/v1",
    api_key=os.getenv("ASI_ONE_API_KEY"),
    default_headers={"x-session-id": session_id},
)

# First message in the session
response1 = llm.invoke([HumanMessage(content="My name is Alex and I'm planning a trip to Japan")])
print(response1.content)

# Follow-up in the same session - the model remembers context
response2 = llm.invoke([HumanMessage(content="What activities would you recommend for me?")])
print(response2.content)
```
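An application serving several independent conversations needs one session ID per conversation rather than a single global one. A minimal illustrative helper (the function name and in-memory cache are assumptions for the sketch, not part of the ASI:One API):

```python
import uuid

# Sketch: keep one session ID per conversation so follow-up requests
# reuse the same x-session-id header.
_sessions: dict = {}

def session_for(conversation: str) -> str:
    """Return a stable session ID for a conversation, creating one if needed."""
    if conversation not in _sessions:
        _sessions[conversation] = str(uuid.uuid4())
    return _sessions[conversation]

a = session_for("trip-planning")
b = session_for("trip-planning")   # same conversation -> same session ID
c = session_for("code-review")     # new conversation -> fresh session ID
print(a == b, a == c)
```

In production the mapping would live in a database or cache keyed by user or conversation ID, but the principle is the same: reuse the ID for follow-ups, mint a new one per conversation.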

Next Steps

Ready to get started with ASI:One’s OpenAI-compatible API? Here’s what to do next:

  1. Get your API key - Sign up and create your ASI:One API key
  2. Try the quickstart - Make your first API call in minutes
  3. Agentverse marketplace - Discover the power of Agentverse marketplace integration
  4. Learn about tool calling - Extend your applications with custom functions