diff --git a/datasetcard.md b/datasetcard.md index 8220fc521..9f7a0a643 100644 --- a/datasetcard.md +++ b/datasetcard.md @@ -110,4 +110,4 @@ train-eval-index: Valid license identifiers can be found in [our docs](https://huggingface.co/docs/hub/repositories-licenses). -For the full dataset card template, see: [https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md). +For the full dataset card template, see the [datasetcard_template.md file](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md). diff --git a/docs/hub/datasets-cards.md b/docs/hub/datasets-cards.md index debe84099..834055afc 100644 --- a/docs/hub/datasets-cards.md +++ b/docs/hub/datasets-cards.md @@ -37,7 +37,7 @@ When creating a README.md file in a dataset repository on the Hub, use Metadata -To see metadata fields, see the detailed dataset card metadata specification [here](https://github.com/huggingface/hub-docs/blob/main/datasetcard.md?plain=1). +For a list of metadata fields, see the detailed [Dataset Card specifications](https://github.com/huggingface/hub-docs/blob/main/datasetcard.md?plain=1). 
### Dataset card creation guide diff --git a/docs/hub/model-card-annotated.md b/docs/hub/model-card-annotated.md index 01ef1134b..ef35ba714 100644 --- a/docs/hub/model-card-annotated.md +++ b/docs/hub/model-card-annotated.md @@ -3,7 +3,7 @@ ## Template -[https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md) +[modelcard_template.md file](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md) ## Directions diff --git a/docs/hub/model-card-guidebook.md b/docs/hub/model-card-guidebook.md index c5df40371..05e168feb 100644 --- a/docs/hub/model-card-guidebook.md +++ b/docs/hub/model-card-guidebook.md @@ -8,7 +8,7 @@ Our work presents a view of where we think model cards stand right now and where With the launch of this Guidebook, we introduce several new resources and connect together previous work on Model Cards: -1) An updated Model Card template, released in [the `huggingface_hub` library](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md), drawing together Model Card work in academia and throughout the industry. +1) An updated Model Card template, released in the `huggingface_hub` library as the [modelcard_template.md file](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md), drawing together Model Card work in academia and throughout the industry. 2) An [Annotated Model Card Template](./model-card-annotated), which details how to fill the card out. 
diff --git a/docs/hub/model-cards.md b/docs/hub/model-cards.md index c12c28dfd..6d7823473 100644 --- a/docs/hub/model-cards.md +++ b/docs/hub/model-cards.md @@ -152,7 +152,7 @@ If the license is not available via a URL you can link to a LICENSE stored in th ### Evaluation Results -You can even specify your **model's eval results** in a structured way, which will allow the Hub to parse, display, and even link them to Papers With Code leaderboards. See how to format this data [in the metadata spec](https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1). +You can even specify your **model's eval results** in a structured way, which will allow the Hub to parse, display, and even link them to Papers With Code leaderboards. See how to format this data in the [Model Card specifications](https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1). Here is a partial example (omitting the eval results part): ```yaml diff --git a/docs/hub/models-adding-libraries.md b/docs/hub/models-adding-libraries.md index e4cc6b377..d2a6df00a 100644 --- a/docs/hub/models-adding-libraries.md +++ b/docs/hub/models-adding-libraries.md @@ -88,7 +88,7 @@ We recommend adding a code snippet to explain how to use a model in your downstr -Add a code snippet by updating the [Libraries Typescript file](https://github.com/huggingface/huggingface.js/blob/main/packages/tasks/src/model-libraries.ts) with instructions for your model. For example, the [Asteroid](https://huggingface.co/asteroid-team) integration includes a brief code snippet for how to load and use an Asteroid model: +Add a code snippet by updating the [model-libraries.ts file](https://github.com/huggingface/huggingface.js/blob/main/packages/tasks/src/model-libraries.ts) with instructions for your model. 
For example, the [Asteroid](https://huggingface.co/asteroid-team) integration includes a brief code snippet for how to load and use an Asteroid model: ```typescript const asteroid = (model: ModelData) => @@ -184,7 +184,7 @@ All third-party libraries are Dockerized, so you can install the dependencies yo ### Register your libraries supported tasks on the hub -To register the tasks supported by your library on the hub you'll need to add a mapping from your library name to its supported tasks in this [file](https://github.com/huggingface/huggingface.js/blob/main/packages/tasks/src/library-to-tasks.ts). This will ensure the inference API is registered for tasks supported by your model. This file is automatically generated as part of a [GitHub Action](https://github.com/huggingface/api-inference-community/actions/workflows/python-api-export-tasks.yaml) in the [ +To register the tasks supported by your library on the Hub, you'll need to add a mapping from your library name to its supported tasks in the [library-to-tasks.ts file](https://github.com/huggingface/huggingface.js/blob/main/packages/tasks/src/library-to-tasks.ts). This will ensure the inference API is registered for tasks supported by your model. This file is automatically generated as part of a [GitHub Action](https://github.com/huggingface/api-inference-community/actions/workflows/python-api-export-tasks.yaml) in the [ api-inference-community repository](https://github.com/huggingface/api-inference-community) repository. You can see an example of this [here](https://github.com/huggingface/api-inference-community/actions/runs/5126874210/jobs/9221890853#step:5:8). With these simple but powerful methods, you brought the full functionality of the Hub into your library. Users can download files stored on the Hub from your library with `hf_hub_download`, create repositories with `create_repo`, and upload files with `upload_file`. 
You also set up Inference API with your library, allowing users to interact with your models on the Hub from inside a browser. \ No newline at end of file diff --git a/docs/hub/models-inference.md b/docs/hub/models-inference.md index 0f9b26b45..278870f11 100644 --- a/docs/hub/models-inference.md +++ b/docs/hub/models-inference.md @@ -21,7 +21,7 @@ Specify `inference: false` in your model card's metadata. ## Why don't I see an inference widget or why can't I use the inference API? For some tasks, there might not be support in the inference API, and, hence, there is no widget. -For all libraries (except πŸ€— Transformers), there is a [mapping](https://github.com/huggingface/huggingface.js/blob/main/packages/tasks/src/library-to-tasks.ts) of library to supported tasks in the API. When a model repository has a task that is not supported by the repository library, the repository has `inference: false` by default. +For all libraries (except πŸ€— Transformers), there is a mapping of library to supported tasks in the API in the [library-to-tasks.ts file](https://github.com/huggingface/huggingface.js/blob/main/packages/tasks/src/library-to-tasks.ts). When a model repository has a task that is not supported by the repository library, the repository has `inference: false` by default. ## Can I send large volumes of requests? Can I get accelerated APIs? diff --git a/docs/hub/models-libraries.md b/docs/hub/models-libraries.md index 0e8cd7cea..68ab0c7de 100644 --- a/docs/hub/models-libraries.md +++ b/docs/hub/models-libraries.md @@ -2,7 +2,7 @@ The Hub has support for dozens of libraries in the Open Source ecosystem. Thanks to the `huggingface_hub` Python library, it's easy to enable sharing your models on the Hub. The Hub supports many libraries, and we're working on expanding this support! We're happy to welcome to the Hub a set of Open Source libraries that are pushing Machine Learning forward. -The table below summarizes the supported libraries and their level of integration. 
Find all our supported libraries [here](https://github.com/huggingface/huggingface.js/blob/main/packages/tasks/src/model-libraries.ts)! +The table below summarizes the supported libraries and their level of integration. Find all our supported libraries in the [model-libraries.ts file](https://github.com/huggingface/huggingface.js/blob/main/packages/tasks/src/model-libraries.ts)! | Library | Description | Inference API | Widgets | Download from Hub | Push to Hub | |-----------------------------------------------------------------------------|--------------------------------------------------------------------------------------|---|---:|---|---| diff --git a/docs/hub/models-widgets.md b/docs/hub/models-widgets.md index 2a9094cf4..24f450562 100644 --- a/docs/hub/models-widgets.md +++ b/docs/hub/models-widgets.md @@ -86,7 +86,7 @@ widget: - src: nested/directory/sample1.flac ``` -We provide example inputs for some languages and most widget types in [the default-widget-inputs.ts file](https://github.com/huggingface/huggingface.js/blob/main/packages/tasks/src/default-widget-inputs.ts). If some examples are missing, we welcome PRs from the community to add them! +We provide example inputs for some languages and most widget types in the [default-widget-inputs.ts file](https://github.com/huggingface/huggingface.js/blob/main/packages/tasks/src/default-widget-inputs.ts). If some examples are missing, we welcome PRs from the community to add them! ## Example outputs @@ -152,7 +152,7 @@ We can also surface the example outputs in the Hugging Face UI, for instance, fo ## What are all the possible task/widget types? -You can find all the supported tasks [here](https://github.com/huggingface/huggingface.js/blob/main/packages/tasks/src/pipelines.ts). +You can find all the supported tasks in the [pipelines.ts file](https://github.com/huggingface/huggingface.js/blob/main/packages/tasks/src/pipelines.ts). 
Here are some links to examples: diff --git a/docs/sagemaker/getting-started.md b/docs/sagemaker/getting-started.md index ffcf3f1c8..c0a01df6a 100644 --- a/docs/sagemaker/getting-started.md +++ b/docs/sagemaker/getting-started.md @@ -6,7 +6,7 @@ The get started guide will show you how to quickly use Hugging Face on Amazon Sa -πŸ““ Open the [notebook](https://github.com/huggingface/notebooks/blob/main/sagemaker/01_getting_started_pytorch/sagemaker-notebook.ipynb) to follow along! +πŸ““ Open the [sagemaker-notebook.ipynb notebook](https://github.com/huggingface/notebooks/blob/main/sagemaker/01_getting_started_pytorch/sagemaker-notebook.ipynb) to follow along! ## Installation and setup @@ -90,7 +90,7 @@ test_dataset.save_to_disk(test_input_path) Create a Hugging Face Estimator to handle end-to-end SageMaker training and deployment. The most important parameters to pay attention to are: -* `entry_point` refers to the fine-tuning script which you can find [here](https://github.com/huggingface/notebooks/blob/main/sagemaker/01_getting_started_pytorch/scripts/train.py). +* `entry_point` refers to the fine-tuning script, which you can find in the [train.py file](https://github.com/huggingface/notebooks/blob/main/sagemaker/01_getting_started_pytorch/scripts/train.py). * `instance_type` refers to the SageMaker instance that will be launched. Take a look [here](https://aws.amazon.com/sagemaker/pricing/) for a complete list of instance types. * `hyperparameters` refers to the training hyperparameters the model will be fine-tuned with. diff --git a/docs/sagemaker/inference.md b/docs/sagemaker/inference.md index b5c5f7fc4..6ec743a21 100644 --- a/docs/sagemaker/inference.md +++ b/docs/sagemaker/inference.md @@ -69,7 +69,7 @@ There are two ways to deploy your Hugging Face model trained in SageMaker: - Deploy it after your training has finished. - Deploy your saved model at a later time from S3 with the `model_data`. 
-πŸ““ Open the [notebook](https://github.com/huggingface/notebooks/blob/main/sagemaker/10_deploy_model_from_s3/deploy_transformer_model_from_s3.ipynb) for an example of how to deploy a model from S3 to SageMaker for inference. +πŸ““ Open the [deploy_transformer_model_from_s3.ipynb notebook](https://github.com/huggingface/notebooks/blob/main/sagemaker/10_deploy_model_from_s3/deploy_transformer_model_from_s3.ipynb) for an example of how to deploy a model from S3 to SageMaker for inference. ### Deploy after training @@ -243,7 +243,7 @@ After you run our request, you can delete the endpoint again with: predictor.delete_endpoint() ``` -πŸ““ Open the [notebook](https://github.com/huggingface/notebooks/blob/main/sagemaker/11_deploy_model_from_hf_hub/deploy_transformer_model_from_hf_hub.ipynb) for an example of how to deploy a model from the πŸ€— Hub to SageMaker for inference. +πŸ““ Open the [deploy_transformer_model_from_hf_hub.ipynb notebook](https://github.com/huggingface/notebooks/blob/main/sagemaker/11_deploy_model_from_hf_hub/deploy_transformer_model_from_hf_hub.ipynb) for an example of how to deploy a model from the πŸ€— Hub to SageMaker for inference. ## Run batch transform with πŸ€— Transformers and SageMaker @@ -316,7 +316,7 @@ The `input.jsonl` looks like this: {"inputs":"this movie is amazing"} ``` -πŸ““ Open the [notebook](https://github.com/huggingface/notebooks/blob/main/sagemaker/12_batch_transform_inference/sagemaker-notebook.ipynb) for an example of how to run a batch transform job for inference. +πŸ““ Open the [sagemaker-notebook.ipynb notebook](https://github.com/huggingface/notebooks/blob/main/sagemaker/12_batch_transform_inference/sagemaker-notebook.ipynb) for an example of how to run a batch transform job for inference. 
## User defined code and modules diff --git a/docs/sagemaker/train.md b/docs/sagemaker/train.md index 4fc33466a..38251be1b 100644 --- a/docs/sagemaker/train.md +++ b/docs/sagemaker/train.md @@ -93,7 +93,7 @@ if __name__ == "__main__": _Note that SageMaker doesn’t support argparse actions. For example, if you want to use a boolean hyperparameter, specify `type` as `bool` in your script and provide an explicit `True` or `False` value._ -Look [here](https://github.com/huggingface/notebooks/blob/main/sagemaker/01_getting_started_pytorch/scripts/train.py) for a complete example of a πŸ€— Transformers training script. +Look at the [train.py file](https://github.com/huggingface/notebooks/blob/main/sagemaker/01_getting_started_pytorch/scripts/train.py) for a complete example of a πŸ€— Transformers training script. ## Training Output Management @@ -109,7 +109,7 @@ Run πŸ€— Transformers training scripts on SageMaker by creating a [Hugging Face 1. `entry_point` specifies which fine-tuning script to use. 2. `instance_type` specifies an Amazon instance to launch. Refer [here](https://aws.amazon.com/sagemaker/pricing/) for a complete list of instance types. -3. `hyperparameters` specifies training hyperparameters. View additional available hyperparameters [here](https://github.com/huggingface/notebooks/blob/main/sagemaker/01_getting_started_pytorch/scripts/train.py). +3. `hyperparameters` specifies training hyperparameters. View additional available hyperparameters in the [train.py file](https://github.com/huggingface/notebooks/blob/main/sagemaker/01_getting_started_pytorch/scripts/train.py). 
The following code sample shows how to train with a custom script `train.py` with three hyperparameters (`epochs`, `per_device_train_batch_size`, and `model_name_or_path`): @@ -202,7 +202,7 @@ huggingface_estimator = HuggingFace( ) ``` -πŸ““ Open the [notebook](https://github.com/huggingface/notebooks/blob/main/sagemaker/07_tensorflow_distributed_training_data_parallelism/sagemaker-notebook.ipynb) for an example of how to run the data parallelism library with TensorFlow. +πŸ““ Open the [sagemaker-notebook.ipynb notebook](https://github.com/huggingface/notebooks/blob/main/sagemaker/07_tensorflow_distributed_training_data_parallelism/sagemaker-notebook.ipynb) for an example of how to run the data parallelism library with TensorFlow. ### Model parallelism @@ -247,7 +247,7 @@ huggingface_estimator = HuggingFace( ) ``` -πŸ““ Open the [notebook](https://github.com/huggingface/notebooks/blob/main/sagemaker/04_distributed_training_model_parallelism/sagemaker-notebook.ipynb) for an example of how to run the model parallelism library. +πŸ““ Open the [sagemaker-notebook.ipynb notebook](https://github.com/huggingface/notebooks/blob/main/sagemaker/04_distributed_training_model_parallelism/sagemaker-notebook.ipynb) for an example of how to run the model parallelism library. ## Spot instances @@ -288,7 +288,7 @@ huggingface_estimator = HuggingFace( # Managed Spot Training savings: 70.0% ``` -πŸ““ Open the [notebook](https://github.com/huggingface/notebooks/blob/main/sagemaker/05_spot_instances/sagemaker-notebook.ipynb) for an example of how to use spot instances. +πŸ““ Open the [sagemaker-notebook.ipynb notebook](https://github.com/huggingface/notebooks/blob/main/sagemaker/05_spot_instances/sagemaker-notebook.ipynb) for an example of how to use spot instances. 
## Git repository diff --git a/modelcard.md b/modelcard.md index 1ad2cea4f..57d5782b5 100644 --- a/modelcard.md +++ b/modelcard.md @@ -47,4 +47,4 @@ model-index: This markdown file contains the spec for the modelcard metadata regarding evaluation parameters. When present, and only then, 'model-index', 'datasets' and 'license' contents will be verified when git pushing changes to your README.md file. Valid license identifiers can be found in [our docs](https://huggingface.co/docs/hub/repositories-licenses). -For the full model card template, see: [https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md). +For the full model card template, see the [modelcard_template.md file](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md).