ChatClovaX
This notebook provides a quick overview for getting started with Naver's HyperCLOVA X chat models via CLOVA Studio. For detailed documentation of all ChatClovaX features and configurations, head to the API reference.
CLOVA Studio has several chat models. You can find information about the latest models, including their costs, context windows, and supported input types, in the CLOVA Studio API Guide documentation.
Overview
Integration details
Class | Package | Local | Serializable | JS support | Package downloads | Package latest |
---|---|---|---|---|---|---|
ChatClovaX | langchain-community | ❌ | beta | ❌ | | |
Model features
Tool calling | Structured output | JSON mode | Image input | Audio input | Video input | Token-level streaming | Native async | Token usage | Logprobs |
---|---|---|---|---|---|---|---|---|---|
❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ |
Setup
Before using the chat model, you must go through the three steps below.
- Create a NAVER Cloud Platform account
- Apply to use CLOVA Studio
- Find your API keys after creating a CLOVA Studio Test App or Service App (see here)
Credentials
CLOVA Studio requires two API keys (NCP_CLOVASTUDIO_API_KEY and NCP_APIGW_API_KEY):
- NCP_CLOVASTUDIO_API_KEY is issued per Test App or Service App
- NCP_APIGW_API_KEY is issued per account
Both keys can be found in CLOVA Studio by clicking App Request Status > Service App, Test App List > the 'Details' button for each app.
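If you prefer to supply the keys through environment variables, set them before instantiating the model. A minimal sketch using getpass (the prompt strings are illustrative):
import getpass
import os

# Ask for the keys only if they are not already set in the environment
if "NCP_CLOVASTUDIO_API_KEY" not in os.environ:
    os.environ["NCP_CLOVASTUDIO_API_KEY"] = getpass.getpass("Enter your NCP CLOVA Studio API Key: ")
if "NCP_APIGW_API_KEY" not in os.environ:
    os.environ["NCP_APIGW_API_KEY"] = getpass.getpass("Enter your NCP API Gateway API Key: ")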
Installation
The LangChain Naver integration lives in the langchain-community package:
# install package
!pip install -qU langchain-community
Instantiation
Now we can instantiate our model object and generate chat completions:
from langchain_community.chat_models import ChatClovaX
llm = ChatClovaX(
    model="HCX-DASH-001",
    temperature=0.5,
    max_tokens=None,
    max_retries=2,
    # clovastudio_api_key="..."  # if you prefer to pass the API key directly instead of using env vars
    # task_id="..."  # if you want to use a fine-tuned model
    # service_app=False,  # default; set True if you are using a Service App
    # include_ai_filters=False,  # default; set True to detect inappropriate content
    # other params...
)
Invocation
messages = [
    (
        "system",
        "You are a helpful assistant that translates English to Korean. Translate the user sentence.",
    ),
    ("human", "I love using NAVER AI."),
]
ai_msg = llm.invoke(messages)
ai_msg
print(ai_msg.content)
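ChatClovaX also supports token-level streaming (see the feature table above). A minimal sketch using the standard stream method; chunks are printed as they arrive:
# Stream the translation token by token
for chunk in llm.stream(messages):
    print(chunk.content, end="", flush=True)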
Chaining
We can chain our model with a prompt template like so:
from langchain_core.prompts import ChatPromptTemplate
prompt = ChatPromptTemplate.from_messages(
    [
        (
            "system",
            "You are a helpful assistant that translates {input_language} to {output_language}. Translate the user sentence.",
        ),
        ("human", "{input}"),
    ]
)
chain = prompt | llm
chain.invoke(
    {
        "input_language": "English",
        "output_language": "Korean",
        "input": "I love using NAVER AI.",
    }
)
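Native async is also supported, so the same chain can be run with ainvoke. A minimal sketch (in a notebook you can await at the top level):
# Asynchronous invocation of the same chain
await chain.ainvoke(
    {
        "input_language": "English",
        "output_language": "Korean",
        "input": "I love using NAVER AI.",
    }
)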
Additional functionalities
Fine-tuning
You can call fine-tuned CLOVA X models by passing the corresponding task_id parameter. (You don't need to specify the model parameter when calling a fine-tuned model.) You can find the task_id in the details of the corresponding Test App or Service App.
fine_tuned_model = ChatClovaX(
    task_id="abcd123e",
    temperature=0.5,
)
fine_tuned_model.invoke(messages)
Service App
When going live with a production-level application using CLOVA Studio, you should apply for and use a Service App (see here).
For a Service App, a dedicated NCP_CLOVASTUDIO_API_KEY is issued, and the app can only be called with that key.
#### Setting environment variables
NCP_CLOVASTUDIO_API_KEY="please input your clova studio api key."
NCP_APIGW_API_KEY="please input your NCP gateway api key."
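If you are switching an existing session over to a Service App, you can also update the environment variable from Python before re-instantiating the model; a minimal sketch:
import getpass
import os

# Replace the Test App key with the Service App key for this session
os.environ["NCP_CLOVASTUDIO_API_KEY"] = getpass.getpass(
    "Enter NCP CLOVA Studio API Key for Service App: "
)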
llm = ChatClovaX(
    service_app=True,  # True if you want to use your Service App; default is False
    # clovastudio_api_key="..."  # if you prefer to pass the API key directly instead of using env vars
    # apigw_api_key="..."  # if you prefer to pass the gateway key directly instead of using env vars
    model="HCX-DASH-001",
    temperature=0.5,
    max_tokens=None,
    max_retries=2,
    # other params...
)
ai_msg = llm.invoke(messages)
AI Filter
AI Filter detects inappropriate output, such as profanity, from a Test App (or Service App) created in the Playground and informs the user. See here for details.
llm = ChatClovaX(
    model="HCX-DASH-001",
    temperature=0.5,
    max_tokens=None,
    max_retries=2,
    include_ai_filters=True,  # True if you want to enable the AI filter
    # other params...
)
ai_msg = llm.invoke(messages)
print(ai_msg.response_metadata['ai_filter'])
API reference
For detailed documentation of all ChatClovaX features and configurations, head to the API reference: https://api.python.langchain.com/en/latest/chat_models/langchain_community.chat_models.naver.ChatClovaX.html