ONNXConfig: Add a configuration for all available models #16308
ISSUE TRANSFER: Optimum repository -> huggingface/optimum#555

This issue is about the working group specially created for this task. If you are interested in helping out, take a look at this organization, or add me on Discord: ChainYo#3610

We want to contribute to Hugging Face's ONNX implementation for all models available on the HF Hub. There are already a lot of architectures implemented for converting PyTorch models to ONNX, but we need more. We need them all!

Feel free to join us in this adventure! Join the org by clicking here.

Here is a non-exhaustive list of all the models available. 🛠️ next to a model means that a PR is in progress; if there is nothing next to a model, ONNX does not yet support it, and we need to add support for it.

If you need help implementing an unsupported model, here is a guide from Hugging Face's documentation. If you want an example of implementation, I did one for CamemBERT months ago.
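For reference, a minimal sketch of what such an export looks like with the `transformers.onnx` package, following the pattern from the serialization guide (the checkpoint and output path here are just placeholders):

```python
# Sketch following the transformers.onnx docs; checkpoint/path are placeholders.
from pathlib import Path

from transformers import AutoConfig, AutoModel, AutoTokenizer
from transformers.models.camembert import CamembertOnnxConfig
from transformers.onnx import export

checkpoint = "camembert-base"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModel.from_pretrained(checkpoint)

# Build the ONNX config from the model's regular config.
onnx_config = CamembertOnnxConfig(AutoConfig.from_pretrained(checkpoint))

# Export and get back the matched input/output names for validation.
onnx_inputs, onnx_outputs = export(
    tokenizer, model, onnx_config, onnx_config.default_onnx_opset, Path("camembert.onnx")
)
```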
Let me try with …

Love the initiative here, thanks for opening an issue! Added the label.

Thanks for the label. I don't know if it's easy to begin, but it's cool if more people see this and can contribute!

I would like to try with LUKE. However, LUKE doesn't support any features apart from the default AutoModel. Its main feature is LukeForEntityPairClassification for relation extraction. Should I convert luke-base to ONNX, or LukeForEntityPairClassification, which has a classifier head?
When you implement the ONNX config for a model, it works for all kinds of tasks, because the base model and the ones pre-packaged for fine-tuning have the same inputs. So you can base your implementation on the base model, and the other tasks will work too.
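To make this concrete, here is a minimal sketch of such a config. The class name `MyModelOnnxConfig` is hypothetical; the structure follows the existing BERT-like configs in `transformers.onnx`:

```python
from collections import OrderedDict
from typing import Mapping

from transformers.onnx import OnnxConfig


class MyModelOnnxConfig(OnnxConfig):  # hypothetical example class
    @property
    def inputs(self) -> Mapping[str, Mapping[int, str]]:
        # The base model and its task-specific heads all consume the same
        # tokenizer outputs, so one set of dynamic axes covers every task.
        dynamic_axis = {0: "batch", 1: "sequence"}
        return OrderedDict(
            [
                ("input_ids", dynamic_axis),
                ("attention_mask", dynamic_axis),
            ]
        )
```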
Still learning …
Issue description

Hello, thank you for supporting GPT-J with ONNX. But when I exported an ONNX checkpoint using transformers-4.18.0, I got the issue below.

I found that GPT-J with ONNX seems to be supported when I checked your transformers-4.18.0 documentation [https://huggingface.co/docs/transformers/serialization#exporting-a-model-for-an-unsupported-architecture] and code [src/transformers/onnx/features.py etc.], but I still got this issue. I then checked the config.model_type parameter in "/data/venv/lib/python3.8/site-packages/transformers/onnx/__main__.py", which depends on two mappings [from ..models.auto.feature_extraction_auto import FEATURE_EXTRACTOR_MAPPING_NAMES, from ..models.auto.tokenization_auto import TOKENIZER_MAPPING_NAMES]. I did not find GPT-J's config in these mappings, which does not seem right.

Environment info
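(For context, a hypothetical reproduction of the failing export with the transformers 4.18 CLI; the exact checkpoint and feature aren't stated above:)

```bash
# Hypothetical reproduction; the checkpoint and feature are assumptions.
python -m transformers.onnx --model=EleutherAI/gpt-j-6B --feature=causal-lm gptj-onnx/
```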
Hello @pikaqqqqqq, thanks for reporting the problem. I opened a PR with a quick fix to avoid this problem, check #16780.

Hello 👋🏽, I added the RoFormer ONNX config here: #16861. I'm not 100% sure who to ask for review, so I'm posting this here. Thanks 🙏🏽
Hi! I would like to try building the ONNX config for …

Hi @Tanmay06, that would be awesome. Don't hesitate to open a PR with your work when you feel it's in good shape. You can ping me anytime if you need help!

Hello! I would like to work on the ONNX config for …

Nice, don't hesitate to ping me if help is needed 🤗

Hi! I would like to work on the ONNX config for …

Hi, nice! If you need help you can tag me.
Here is one for XLNet: #17027!

@chainyo Hi, I would like to work on …

TrOCR and Donut are now supported per #19254.

@RaghavPrabhakar66 Maybe there is another model you could implement?

Sure. I can work on …

Can we re-open this? Please @sgugger 🤗

@chainyo After gaining some experience with …

@chainyo Would love to take up …

@RaghavPrabhakar66 Yes, of course! 👍

I don't think so, it's open! 🤗 @BakingBrains
@chainyo I was working on CANINE. Running `python -m transformers.onnx onnx --model="google/canine-s"` with the following `CanineOnnxConfig`:

```python
class CanineOnnxConfig(OnnxConfig):
    @property
    def inputs(self) -> Mapping[str, Mapping[int, str]]:
        if self.task == "multiple-choice":
            dynamic_axis = {0: "batch", 1: "choice", 2: "sequence"}
        else:
            dynamic_axis = {0: "batch", 1: "sequence"}
        return OrderedDict(
            [
                ("input_ids", dynamic_axis),
                ("token_type_ids", dynamic_axis),
                ("attention_mask", dynamic_axis),
            ]
        )

    @property
    def default_onnx_opset(self) -> int:
        return 13

    def generate_dummy_inputs(
        self,
        preprocessor: "PreTrainedTokenizerBase",
        batch_size: int = 1,
        seq_length: int = 6,
        num_choices: int = -1,
        is_pair: bool = False,
        framework: Optional[TensorType] = None,
        tokenizer: "PreTrainedTokenizerBase" = None,
    ) -> Mapping[str, Any]:
        batch_size = compute_effective_axis_dimension(
            batch_size, fixed_dimension=OnnxConfig.default_fixed_batch, num_token_to_add=0
        )
        token_to_add = preprocessor.num_special_tokens_to_add(is_pair)
        seq_length = compute_effective_axis_dimension(
            seq_length, fixed_dimension=OnnxConfig.default_fixed_sequence, num_token_to_add=token_to_add
        )
        dummy_inputs = [" ".join(["<unk>"]) * seq_length, " ".join(["<unk>"]) * (seq_length + 3)] * batch_size
        inputs = dict(preprocessor(dummy_inputs, padding="longest", truncation=True, return_tensors=framework))
        return inputs
```

gives this error:

```
Traceback (most recent call last):
  /usr/lib/python3.10/runpy.py:196 in _run_module_as_main
    return _run_code(code, main_globals, None, "__main__", mod_spec)
  /usr/lib/python3.10/runpy.py:86 in _run_code
    exec(code, run_globals)
  /home/luke/dev/huggingface/transformers/src/transformers/onnx/__main__.py:180 in <module>
    main()
  /home/luke/dev/huggingface/transformers/src/transformers/onnx/__main__.py:173 in main
    validate_model_outputs(onnx_config, preprocessor, model, args.output, onnx_outpu...
  /home/luke/dev/huggingface/transformers/src/transformers/onnx/convert.py:417 in validate_model_outputs
    onnx_outputs = session.run(onnx_named_outputs, onnx_inputs)
  /home/luke/dev/huggingface/transformers/venv/lib/python3.10/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py:200 in run
    return self._sess.run(output_names, input_feed, run_options)
Fail: [ONNXRuntimeError] : 1 : FAIL : Non-zero status code returned while running Concat node. Name:'Concat_1713'
Status Message: concat.cc:159 PrepareForCompute Non concat axis dimensions must match: Axis 2 has mismatched dimensions of 5 and 4
```
Hey @RaghavPrabhakar66, it comes from how you preprocess the …
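For what it's worth, a hedged sketch of one possible adjustment, assuming the failure comes from the two dummy sequences above padding to different lengths (my reading of the traceback, not a confirmed diagnosis):

```python
# Sketch only: give every dummy sequence the same length so the ONNX graph
# and the reference model see identically shaped inputs.
# Note that " ".join(["<unk>"]) * seq_length repeats the joined string itself;
# " ".join(["<unk>"] * seq_length) is probably what was intended.
dummy_inputs = [" ".join(["<unk>"] * seq_length)] * batch_size
inputs = dict(preprocessor(dummy_inputs, padding=True, truncation=True, return_tensors=framework))
```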
Hi @chainyo, I would like to take …

Go for it. Feel free to open a PR (one per architecture) once you are done with your implementation!

The ONNX export is now part of optimum. Closing this issue here; if you want to work on ONNX export, I invite you to go to the optimum repo :-)
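For anyone landing here later, a quick sketch of the equivalent export through optimum's CLI, per the optimum documentation (the model id and output directory are placeholders):

```bash
# Export a Hub checkpoint to ONNX via optimum (placeholder model and output dir).
optimum-cli export onnx --model distilbert-base-uncased distilbert_onnx/
```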
Hi, I'm working on Swin Transformer …

Hi, Swin is already supported, as can be seen here. Also, all ONNX exports are now being discussed here: huggingface/optimum#555
Please unsubscribe
Thanks @NielsRogge! I'm a newcomer and about to start contributing to this repo :)
@RaghavPrabhakar66 Hi there. Was there any progress on CANINE here? If not, could you summarize what particularity needed a custom config? Thanks!
@ozancaglayan Hi, the last time I worked on adding CANINE support … I think it's better if I open a PR in the optimum repo.