♻️ Rebrand: Inference API -> Inference Endpoints (serverless)
SBrandeis committed Jan 25, 2024
1 parent de90a9e commit 9a62f6c
Showing 23 changed files with 107 additions and 105 deletions.
6 changes: 3 additions & 3 deletions README.md
@@ -37,12 +37,12 @@ await inference.textToImage({

This is a collection of JS libraries to interact with the Hugging Face API, with TS types included.

- [@huggingface/inference](packages/inference/README.md): Use the Inference API to make calls to 100,000+ Machine Learning models, or your own [inference endpoints](https://hf.co/docs/inference-endpoints/)!
- [@huggingface/inference](packages/inference/README.md): Use Inference Endpoints to make calls to 100,000+ Machine Learning models
- [@huggingface/hub](packages/hub/README.md): Interact with huggingface.co to create or delete repos and commit / download files
- [@huggingface/agents](packages/agents/README.md): Interact with HF models through a natural language interface


With more to come, like `@huggingface/endpoints` to manage your HF Endpoints!
With more to come, like `@huggingface/endpoints` to manage your dedicated Inference Endpoints!

We use modern features to avoid polyfills and dependencies, so the libraries will only work on modern browsers / Node.js >= 18 / Bun / Deno.

@@ -128,7 +128,7 @@ await inference.imageToText({
model: 'nlpconnect/vit-gpt2-image-captioning',
})

// Using your own inference endpoint: https://hf.co/docs/inference-endpoints/
// Using your own dedicated inference endpoint: https://hf.co/docs/inference-endpoints/
const gpt2 = inference.endpoint('https://xyz.eu-west-1.aws.endpoints.huggingface.cloud/gpt2');
const { generated_text } = await gpt2.textGeneration({inputs: 'The answer to the universe is'});
```
4 changes: 2 additions & 2 deletions docs/_toctree.yml
@@ -4,9 +4,9 @@
isExpanded: true
sections:
- local: inference/README
title: Use the Inference API
title: Use Inference Endpoints
- local: inference/modules
title: API Reference
title: API reference
- title: "@huggingface/hub"
isExpanded: true
sections:
4 changes: 2 additions & 2 deletions packages/agents/README.md
@@ -1,6 +1,6 @@
# 🤗 Hugging Face Agents.js

A way to call Hugging Face models and inference APIs from natural language, using an LLM.
A way to call Hugging Face models and Inference Endpoints from natural language, using an LLM.

## Install

@@ -25,7 +25,7 @@ Check out the [full documentation](https://huggingface.co/docs/huggingface.js/ag

## Usage

Agents.js leverages LLMs hosted as Inference APIs on HF, so you need to create an account and generate an [access token](https://huggingface.co/settings/tokens).
Agents.js leverages LLMs hosted as Inference Endpoints on HF, so you need to create an account and generate an [access token](https://huggingface.co/settings/tokens).

```ts
import { HfAgent } from "@huggingface/agents";
// …
```
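
The snippet above is cut off by the diff view. For readers skimming the rebrand, a minimal sketch of that flow with `@huggingface/agents` — the token is a placeholder and the prompt is only an example:

```ts
import { HfAgent } from "@huggingface/agents";

// Placeholder token: create one at https://huggingface.co/settings/tokens
const agent = new HfAgent("hf_...");

// The agent turns the instruction into tool calls against hosted models
const results = await agent.run("Draw a picture of a cat wearing a top hat.");
console.log(results);
```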
7 changes: 4 additions & 3 deletions packages/inference/README.md
@@ -1,10 +1,11 @@
# 🤗 Hugging Face Inference API
# 🤗 Hugging Face Inference Endpoints

A Typescript powered wrapper for the Hugging Face Inference API. Learn more about the Inference API at [Hugging Face](https://huggingface.co/docs/api-inference/index). It also works with [Inference Endpoints](https://huggingface.co/docs/inference-endpoints/index).
A Typescript powered wrapper for the Hugging Face Inference Endpoints API. Learn more about Inference Endpoints at [Hugging Face](https://huggingface.co/inference-endpoints).
It works with both [serverless](https://huggingface.co/docs/api-inference/index) and [dedicated](https://huggingface.co/docs/inference-endpoints/index) Endpoints.

Check out the [full documentation](https://huggingface.co/docs/huggingface.js/inference/README).

You can also try out a live [interactive notebook](https://observablehq.com/@huggingface/hello-huggingface-js-inference), see some demos on [hf.co/huggingfacejs](https://huggingface.co/huggingfacejs), or watch a [Scrimba tutorial that explains how the Inference API works](https://scrimba.com/scrim/cod8248f5adfd6e129582c523).
You can also try out a live [interactive notebook](https://observablehq.com/@huggingface/hello-huggingface-js-inference), see some demos on [hf.co/huggingfacejs](https://huggingface.co/huggingfacejs), or watch a [Scrimba tutorial that explains how Inference Endpoints works](https://scrimba.com/scrim/cod8248f5adfd6e129582c523).

## Getting Started

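To make the serverless/dedicated distinction concrete, a minimal sketch — the token, model id, and endpoint URL are placeholders:

```ts
import { HfInference } from "@huggingface/inference";

const hf = new HfInference("hf_..."); // placeholder access token

// Serverless: address a hub model by id
const out = await hf.textGeneration({
  model: "gpt2",
  inputs: "The answer to the universe is",
});

// Dedicated: point the same client at your own endpoint URL
const gpt2 = hf.endpoint("https://xyz.eu-west-1.aws.endpoints.huggingface.cloud/gpt2");
const { generated_text } = await gpt2.textGeneration({
  inputs: "The answer to the universe is",
});
```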
4 changes: 2 additions & 2 deletions packages/inference/package.json
@@ -4,7 +4,7 @@
"packageManager": "[email protected]",
"license": "MIT",
"author": "Tim Mikeladze <[email protected]>",
"description": "Typescript wrapper for the Hugging Face Inference API",
"description": "Typescript wrapper for the Hugging Face Inference Endpoints API",
"repository": {
"type": "git",
"url": "https://github.com/huggingface/huggingface.js.git"
@@ -55,4 +55,4 @@
"@types/node": "18.13.0"
},
"resolutions": {}
}
}
2 changes: 1 addition & 1 deletion packages/inference/src/lib/getDefaultTask.ts
@@ -2,7 +2,7 @@ import { isUrl } from "./isUrl";

/**
* We want to make calls to the huggingface hub the least possible, eg if
* someone is calling the inference API 1000 times per second, we don't want
* someone is calling Inference Endpoints 1000 times per second, we don't want
* to make 1000 calls to the hub to get the task name.
*/
const taskCache = new Map<string, { task: string; date: Date }>();
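
The comment describes a plain time-bounded memo: check the map, honor a max age, refetch on a miss. A sketch of the idea rather than the library's exact code — the TTL value and `getTask` helper below are illustrative:

```ts
const CACHE_DURATION_MS = 10 * 60 * 1000; // illustrative TTL, not the real constant

async function getTask(model: string): Promise<string | null> {
	const cached = taskCache.get(model);
	if (cached && Date.now() - cached.date.getTime() < CACHE_DURATION_MS) {
		return cached.task; // fresh enough: skip the hub round-trip
	}
	const res = await fetch(`https://huggingface.co/api/models/${model}?expand[]=pipeline_tag`);
	if (!res.ok) return null;
	const { pipeline_tag } = (await res.json()) as { pipeline_tag?: string };
	if (!pipeline_tag) return null;
	taskCache.set(model, { task: pipeline_tag, date: new Date() });
	return pipeline_tag;
}
```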
2 changes: 1 addition & 1 deletion packages/inference/src/tasks/custom/request.ts
@@ -2,7 +2,7 @@ import type { InferenceTask, Options, RequestArgs } from "../../types";
import { makeRequestOptions } from "../../lib/makeRequestOptions";

/**
* Primitive to make custom calls to the inference API
* Primitive to make custom calls to Inference Endpoints
*/
export async function request<T>(
args: RequestArgs,
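A hedged usage sketch: `request` is the escape hatch when no task-specific helper fits, so the caller supplies the raw payload and asserts the response type. The model id, token, and the `NerEntity` interface below are examples:

```ts
import { HfInference } from "@huggingface/inference";

const hf = new HfInference("hf_..."); // placeholder token

interface NerEntity {
	entity_group: string;
	word: string;
	score: number;
}

// Raw call: the caller chooses the payload shape and asserts the response type
const entities = (await hf.request({
	model: "dslim/bert-base-NER",
	inputs: "My name is Clara and I live in Berkeley.",
})) as NerEntity[];
```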
4 changes: 2 additions & 2 deletions packages/inference/src/types.ts
@@ -6,7 +6,7 @@ export interface Options {
*/
retry_on_error?: boolean;
/**
* (Default: true). Boolean. There is a cache layer on the inference API to speedup requests we have already seen. Most models can use those results as is as models are deterministic (meaning the results will be the same anyway). However if you use a non deterministic model, you can set this parameter to prevent the caching mechanism from being used resulting in a real new query.
* (Default: true). Boolean. There is a cache layer on Serverless Inference Endpoints to speed up requests we have already seen. Most models can use those results as is, as models are deterministic (meaning the results will be the same anyway). However, if you use a non-deterministic model, you can set this parameter to prevent the caching mechanism from being used, resulting in a real new query.
*/
use_cache?: boolean;
/**
@@ -47,7 +47,7 @@ export interface BaseArgs {
*/
accessToken?: string;
/**
* The model to use. Can be a full URL for HF inference endpoints.
* The model to use. Can be a full URL for a dedicated inference endpoint.
*
* If not specified, will call huggingface.co/api/tasks to get the default model for the task.
*/
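To ground the two interfaces, a small sketch of how `BaseArgs.model` and `Options` combine in a call — the values are examples, and the second call assumes a dedicated endpoint URL:

```ts
import { HfInference } from "@huggingface/inference";

const hf = new HfInference("hf_..."); // placeholder token

// Hub model id, with the serverless cache bypassed for a non-deterministic run
await hf.textGeneration(
	{ model: "gpt2", inputs: "Once upon a time" },
	{ use_cache: false, retry_on_error: true },
);

// Full URL instead of an id: the same args route to a dedicated endpoint
await hf.textGeneration({
	model: "https://xyz.eu-west-1.aws.endpoints.huggingface.cloud/gpt2",
	inputs: "Once upon a time",
});
```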
2 changes: 1 addition & 1 deletion packages/tasks/src/library-to-tasks.ts
@@ -3,7 +3,7 @@ import type { PipelineType } from "./pipelines";

/**
* Mapping from library name (excluding Transformers) to its supported tasks.
* Inference API should be disabled for all other (library, task) pairs beyond this mapping.
* Serverless Inference Endpoints should be disabled for all other (library, task) pairs beyond this mapping.
* As an exception, we assume Transformers supports all inference tasks.
* This mapping is generated automatically by "python-api-export-tasks" action in huggingface/api-inference-community repo upon merge.
* Ref: https://github.com/huggingface/api-inference-community/pull/158
10 changes: 5 additions & 5 deletions packages/tasks/src/model-data.ts
@@ -78,7 +78,7 @@ export interface ModelData {
*/
widgetData?: WidgetExample[] | undefined;
/**
* Parameters that will be used by the widget when calling Inference API
* Parameters that will be used by the widget when calling Inference Endpoints (serverless)
* https://huggingface.co/docs/api-inference/detailed_parameters
*
* can be set in the model card metadata (under `inference/parameters`)
@@ -89,10 +89,10 @@
*/
cardData?: {
inference?:
| boolean
| {
parameters?: Record<string, unknown>;
};
| boolean
| {
parameters?: Record<string, unknown>;
};
base_model?: string | string[];
};
/**
2 changes: 1 addition & 1 deletion packages/tasks/src/pipelines.ts
@@ -62,7 +62,7 @@ export interface PipelineData {
/// This type is used in multiple places in the Hugging Face
/// ecosystem:
/// - To determine which widget to show.
/// - To determine which endpoint of Inference API to use.
/// - To determine which endpoint of Inference Endpoints to use.
/// - As filters at the left of models and datasets page.
///
/// Note that this is sensitive to order.
2 changes: 1 addition & 1 deletion packages/tasks/src/tasks/audio-classification/about.md
@@ -26,7 +26,7 @@ Datasets such as VoxLingua107 allow anyone to train language identification mode

### Emotion recognition

Emotion recognition is self explanatory. In addition to trying the widgets, you can use the Inference API to perform audio classification. Here is a simple example that uses a [HuBERT](https://huggingface.co/superb/hubert-large-superb-er) model fine-tuned for this task.
Emotion recognition is self explanatory. In addition to trying the widgets, you can use Inference Endpoints to perform audio classification. Here is a simple example that uses a [HuBERT](https://huggingface.co/superb/hubert-large-superb-er) model fine-tuned for this task.

```python
import json
# …
```
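
The Python snippet above is cut off by the diff view; for the JavaScript side of this repo, a hedged equivalent with `@huggingface/inference` — the token and file path are placeholders:

```ts
import { readFileSync } from "node:fs";
import { HfInference } from "@huggingface/inference";

const hf = new HfInference("hf_..."); // placeholder token

const labels = await hf.audioClassification({
	model: "superb/hubert-large-superb-er",
	data: readFileSync("sample.flac"), // placeholder audio file
});
// e.g. [{ label: "neu", score: 0.59 }, ...]
```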
2 changes: 1 addition & 1 deletion packages/tasks/src/tasks/audio-to-audio/about.md
@@ -12,7 +12,7 @@ model = SpectralMaskEnhancement.from_hparams(
model.enhance_file("file.wav")
```

Alternatively, you can use the [Inference API](https://huggingface.co/inference-api) to solve this task
Alternatively, you can use [Inference Endpoints](https://huggingface.co/inference-endpoints) to solve this task

```python
import json
# …
```
2 changes: 1 addition & 1 deletion packages/tasks/src/tasks/automatic-speech-recognition/about.md
@@ -18,7 +18,7 @@ The use of Multilingual ASR has become popular, the idea of maintaining just a s

## Inference

The Hub contains over [~9,000 ASR models](https://huggingface.co/models?pipeline_tag=automatic-speech-recognition&sort=downloads) that you can use right away by trying out the widgets directly in the browser or calling the models as a service using the Inference API. Here is a simple code snippet to do exactly this:
The Hub contains over [~9,000 ASR models](https://huggingface.co/models?pipeline_tag=automatic-speech-recognition&sort=downloads) that you can use right away by trying out the widgets directly in the browser or calling the models as a service using Inference Endpoints. Here is a simple code snippet to do exactly this:

```python
import json
# …
```
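
Again the Python example is truncated here; a hedged JS equivalent (the Whisper checkpoint is just one example model):

```ts
import { readFileSync } from "node:fs";
import { HfInference } from "@huggingface/inference";

const hf = new HfInference("hf_..."); // placeholder token

const { text } = await hf.automaticSpeechRecognition({
	model: "openai/whisper-large-v2",
	data: readFileSync("sample.flac"), // placeholder audio file
});
console.log(text);
```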
4 changes: 2 additions & 2 deletions packages/tasks/src/tasks/sentence-similarity/about.md
@@ -8,15 +8,15 @@ You can extract information from documents using Sentence Similarity models. The

The [Sentence Transformers](https://www.sbert.net/) library is very powerful for calculating embeddings of sentences, paragraphs, and entire documents. An embedding is just a vector representation of a text and is useful for finding how similar two texts are.

You can find and use [hundreds of Sentence Transformers](https://huggingface.co/models?library=sentence-transformers&sort=downloads) models from the Hub by directly using the library, playing with the widgets in the browser or using the Inference API.
You can find and use [hundreds of Sentence Transformers](https://huggingface.co/models?library=sentence-transformers&sort=downloads) models from the Hub by directly using the library, playing with the widgets in the browser or using Inference Endpoints.

## Task Variants

### Passage Ranking

Passage Ranking is the task of ranking documents based on their relevance to a given query. The task is evaluated on Mean Reciprocal Rank. These models take one query and multiple documents and return ranked documents according to the relevancy to the query. 📄

You can infer with Passage Ranking models using the [Inference API](https://huggingface.co/inference-api). The Passage Ranking model inputs are a query for which we look for relevancy in the documents and the documents we want to search. The model will return scores according to the relevancy of these documents for the query.
You can infer with Passage Ranking models using [Inference Endpoints](https://huggingface.co/inference-endpoints). The Passage Ranking model inputs are a query for which we look for relevancy in the documents and the documents we want to search. The model will return scores according to the relevancy of these documents for the query.

```python
import json
# …
```
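
For the JS flavor of the truncated Python snippet, a sketch — the model id and sentences are examples:

```ts
import { HfInference } from "@huggingface/inference";

const hf = new HfInference("hf_..."); // placeholder token

// One source sentence scored against candidate sentences
const scores = await hf.sentenceSimilarity({
	model: "sentence-transformers/all-MiniLM-L6-v2",
	inputs: {
		source_sentence: "That is a happy person",
		sentences: ["That is a happy dog", "Today is a sunny day"],
	},
});
// `scores` is a number[] aligned with `sentences`
```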
2 changes: 1 addition & 1 deletion packages/tasks/src/tasks/tabular-classification/about.md
@@ -19,7 +19,7 @@ Tabular classification models can be used in predicting customer churn in teleco

You can use [skops](https://skops.readthedocs.io/) for model hosting and inference on the Hugging Face Hub. This library is built to improve production workflows of various libraries that are used to train tabular models, including [sklearn](https://scikit-learn.org/stable/) and [xgboost](https://xgboost.readthedocs.io/en/stable/). Using `skops` you can:

- Easily use inference API,
- Easily use Inference Endpoints,
- Build neat UIs with one line of code,
- Programmatically create model cards,
- Securely serialize your scikit-learn model. (See limitations of using pickle [here](https://huggingface.co/docs/hub/security-pickle).)
2 changes: 1 addition & 1 deletion packages/tasks/src/tasks/tabular-regression/about.md
@@ -30,7 +30,7 @@ model.fit(X, y)

You can use [skops](https://skops.readthedocs.io/) for model hosting and inference on the Hugging Face Hub. This library is built to improve production workflows of various libraries that are used to train tabular models, including [sklearn](https://scikit-learn.org/stable/) and [xgboost](https://xgboost.readthedocs.io/en/stable/). Using `skops` you can:

- Easily use inference API,
- Easily use Inference Endpoints,
- Build neat UIs with one line of code,
- Programmatically create model cards,
- Securely serialize your models. (See limitations of using pickle [here](https://huggingface.co/docs/hub/security-pickle).)
4 changes: 2 additions & 2 deletions packages/tasks/src/tasks/text-to-speech/about.md
@@ -10,9 +10,9 @@ TTS models are used to create voice assistants on smart devices. These models ar

TTS models are widely used in airport and public transportation announcement systems to convert the announcement of a given text into speech.

## Inference API
## Inference Endpoints

The Hub contains over [1500 TTS models](https://huggingface.co/models?pipeline_tag=text-to-speech&sort=downloads) that you can use right away by trying out the widgets directly in the browser or calling the models as a service using the Inference API. Here is a simple code snippet to get you started:
The Hub contains over [1500 TTS models](https://huggingface.co/models?pipeline_tag=text-to-speech&sort=downloads) that you can use right away by trying out the widgets directly in the browser or calling the models as a service using Inference Endpoints. Here is a simple code snippet to get you started:

```python
import json
# …
```
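
And a hedged JS counterpart for the truncated TTS snippet — the model id is an example, and the call returns a Blob you can persist or play back:

```ts
import { writeFileSync } from "node:fs";
import { HfInference } from "@huggingface/inference";

const hf = new HfInference("hf_..."); // placeholder token

const audio = await hf.textToSpeech({
	model: "espnet/kan-bayashi_ljspeech_vits",
	inputs: "Hello world!",
});

// `audio` is a Blob of generated speech
writeFileSync("speech.flac", Buffer.from(await audio.arrayBuffer()));
```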
112 changes: 56 additions & 56 deletions packages/widgets/src/hooks.server.ts
@@ -7,65 +7,65 @@ import { sequence } from "@sveltejs/kit/hooks";
const handleSSO =
env.OAUTH_CLIENT_ID && env.OAUTH_CLIENT_SECRET
? SvelteKitAuth({
// Should be fine as long as your reverse proxy is configured to only accept traffic with the correct host header
trustHost: true,
/**
* SvelteKit has built-in CSRF protection, so we can skip the check
*/
skipCSRFCheck: skipCSRFCheck,
cookies: {
sessionToken: {
name: "session_token",
options: {
httpOnly: true,
sameSite: "lax",
secure: true,
path: "/",
maxAge: 3600, // The OAuth token's lifetime is 3600 seconds
},
// Should be fine as long as your reverse proxy is configured to only accept traffic with the correct host header
trustHost: true,
/**
* SvelteKit has built-in CSRF protection, so we can skip the check
*/
skipCSRFCheck: skipCSRFCheck,
cookies: {
sessionToken: {
name: "session_token",
options: {
httpOnly: true,
sameSite: "lax",
secure: true,
path: "/",
maxAge: 3600, // The OAuth token's lifetime is 3600 seconds
},
},
providers: [
{
name: "Hugging Face",
id: "huggingface",
type: "oidc",
clientId: env.OAUTH_CLIENT_ID,
clientSecret: env.OAUTH_CLIENT_SECRET,
issuer: "https://huggingface.co",
wellKnown: "https://huggingface.co/.well-known/openid-configuration",
/** Add "inference-api" scope and remove "email" scope */
authorization: { params: { scope: "openid profile inference-api" } },
checks: ["state" as never, "pkce" as never],
},
],
secret: env.OAUTH_CLIENT_SECRET,
/**
* Get the access_token without an account in DB, to make calls to the inference API
*/
callbacks: {
jwt({ token, account, profile }) {
return {
...token,
/**
* account & profile are undefined beyond the first login, in those
* cases `token.access_token` and `token.username` are defined
*/
...(account && { access_token: account.access_token }),
...(profile && { username: profile.preferred_username }),
};
},
session({ session, token }) {
return {
...session,
access_token: token.access_token,
user: Object.assign({}, session.user, {
username: token.username,
}),
};
},
},
providers: [
{
name: "Hugging Face",
id: "huggingface",
type: "oidc",
clientId: env.OAUTH_CLIENT_ID,
clientSecret: env.OAUTH_CLIENT_SECRET,
issuer: "https://huggingface.co",
wellKnown: "https://huggingface.co/.well-known/openid-configuration",
/** Add "inference-api" scope and remove "email" scope */
authorization: { params: { scope: "openid profile inference-api" } },
checks: ["state" as never, "pkce" as never],
},
],
secret: env.OAUTH_CLIENT_SECRET,
/**
* Get the access_token without an account in DB, to make calls to Inference Endpoints
*/
callbacks: {
jwt({ token, account, profile }) {
return {
...token,
/**
* account & profile are undefined beyond the first login, in those
* cases `token.access_token` and `token.username` are defined
*/
...(account && { access_token: account.access_token }),
...(profile && { username: profile.preferred_username }),
};
},
session({ session, token }) {
return {
...session,
access_token: token.access_token,
user: Object.assign({}, session.user, {
username: token.username,
}),
};
},
})
},
})
: null;

const handleGlobal: Handle = async ({ event, resolve }) => {
@@ -53,7 +53,7 @@
<div class="flex items-center text-lg">
{#if !isDisabled}
<IconLightning classNames="-ml-1 mr-1 text-yellow-500" />
Inference API
Inference Endpoints (serverless)
{:else}
Inference Examples
{/if}