You can use Nebius Token Factory through its third-party integration with Hugging Face. This integration enables you to explore and test models in the Hugging Face Model Hub and integrate them into your applications by using the Hugging Face SDK. See a list of available models.
Prerequisites
- Create an API key to authorize requests to Nebius Token Factory.
- If you want to work with Nebius Token Factory models in the Hugging Face Model Hub, add the Nebius Token Factory API key to your Hugging Face account:
- Create a Hugging Face account.
- Go to the Inference Providers section in your Hugging Face user account settings.
- In the Nebius Token Factory row, click the key icon in the Routing mode column.
- In the window that opens, enter your Nebius Token Factory API key.
You can choose not to add the Nebius Token Factory API key. In this case, you can still work with the models, but you pay for their usage through your Hugging Face account.
- If you want to work with Nebius Token Factory models via the Hugging Face SDK, install the Hugging Face libraries for the programming language that you prefer:
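For example, you can usually install the Python huggingface_hub package and the JavaScript @huggingface/inference package with the following commands (adjust them to your environment and package manager if needed):
pip install huggingface_hub
npm install @huggingface/inference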
How to work with Nebius Token Factory models in the Hugging Face Model Hub
To work with Nebius Token Factory models:
- Go to the Models section.
- Make sure that Nebius Token Factory is selected in the Other → Inference Providers filter.
- Go to the page of the required model.
- In the Inference Providers section, select Nebius Token Factory.
- Interact with the model in the same section. For example, if you selected a text-to-text model, you can send a prompt to it.
If you want to work with a model in the Hugging Face playground, where you can change the model settings, click Open playground.
How to work with Nebius Token Factory models via the Hugging Face SDK
Text-to-text models
Use the Hugging Face InferenceClient class to send a multi-message request.
Pass your API key to this class. You can use either a Nebius Token Factory API key or a Hugging Face API key to work with the SDK. Depending on your choice, you are billed for the model usage through your Nebius Token Factory or Hugging Face account.
from huggingface_hub import InferenceClient

client = InferenceClient(
    provider="nebius",
    api_key="<your_API_key>"
)

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Tell me an easy ice cream recipe."},
]

completion = client.chat.completions.create(
    model="Qwen/Qwen2.5-VL-72B-Instruct",
    messages=messages,
    max_tokens=500
)

print(completion.choices[0].message.content)
import { InferenceClient } from "@huggingface/inference";

const client = new InferenceClient("<your_API_key>");

const chatCompletion = await client.chatCompletion({
    model: "Qwen/Qwen2.5-VL-72B-Instruct",
    messages: [
        {
            role: "system",
            content: "You are a helpful assistant."
        },
        {
            role: "user",
            content: "Tell me an easy cookie recipe."
        }
    ],
    provider: "nebius",
    max_tokens: 500
});

// Log the response
console.log(chatCompletion.choices[0].message);
Change this code to fit your needs, as in the example after this list:
- To work with a different model, change the model parameter. See a list of available models.
- Modify messages to specify your questions.
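For example, a minimal sketch of a customized Python request could look like the following. The model name and the question are placeholders: replace them with a model from the list of available models and your own prompt.
completion = client.chat.completions.create(
    model="<model_name>",  # a text-to-text model from the list of available models
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "<your_question>"},
    ],
    max_tokens=500
)
print(completion.choices[0].message.content)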
Text-to-image models
Use the Hugging Face InferenceClient class to generate images.
Pass your API key to this class. You can use either a Nebius Token Factory API key or a Hugging Face API key to work with the SDK. You are billed for the model usage in the service whose key you use.
from huggingface_hub import InferenceClient

client = InferenceClient(
    provider="nebius",
    api_key="<your_API_key>"
)

image = client.text_to_image(
    "Map of a space station with cultivated plants and living quarters",
    model="stabilityai/stable-diffusion-xl-base-1.0"
)

image.show()
import { InferenceClient } from "@huggingface/inference";

const client = new InferenceClient("<your_API_key>");

const generatedImage = await client.textToImage({
    model: "stabilityai/stable-diffusion-xl-base-1.0",
    inputs: "Map of a space station with cultivated plants and living quarters",
    provider: "nebius"
});
Change this code to fit your needs, as in the example after this list:
- To work with a different model, change the model parameter. See a list of available models.
- Modify the description of the image that should be generated.
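For example, a minimal sketch of a customized Python generation call could look like the following. The model name and the image description are placeholders; image.save() is the standard Pillow method for writing the returned image to a file.
image = client.text_to_image(
    "<your_image_description>",
    model="<model_name>"  # a text-to-image model from the list of available models
)
image.save("generated_image.png")  # or image.show() to preview the result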