Commit
Merge branch 'develop' into tensor_parallel
Demirrr authored Nov 29, 2024
2 parents f1b263b + 58aa98c commit e0ee128
Showing 3 changed files with 4 additions and 2 deletions.
2 changes: 2 additions & 0 deletions README.md
Original file line number Diff line number Diff line change
@@ -1,3 +1,5 @@
[![Downloads](https://static.pepy.tech/badge/dicee)](https://pepy.tech/project/dicee)
[![Downloads](https://img.shields.io/pypi/dm/dicee)](https://pypi.org/project/dicee/)
[![Coverage](https://img.shields.io/badge/coverage-54%25-green)](https://dice-group.github.io/dice-embeddings/usage/main.html#coverage-report)
[![Pypi](https://img.shields.io/badge/pypi-0.1.4-blue)](https://pypi.org/project/dicee/0.1.4/)
[![Docs](https://img.shields.io/badge/documentation-0.1.4-yellow)](https://dice-group.github.io/dice-embeddings/index.html)
Expand Down
2 changes: 1 addition & 1 deletion dicee/trainer/model_parallelism.py
@@ -87,7 +87,7 @@ def increase_batch_size_until_cuda_out_of_memory(ensemble_model, train_loader, b
      return batch_sizes_and_mem_usages,True

  except torch.OutOfMemoryError as e:
-     print(f"torch.OutOfMemoryError caught! {e}")
+     print(f"torch.OutOfMemoryError caught! {e}\n\n")
      return batch_sizes_and_mem_usages, False

  history_batch_sizes_and_mem_usages=[]
Expand Down
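The patched function follows a common auto-tuning pattern: double the batch size and record memory usage until CUDA raises an out-of-memory error, then fall back to the last size that fit. A minimal sketch of that pattern, not dicee's actual implementation — `OutOfMemoryError`, `try_batch`, and `MEMORY_BUDGET` are hypothetical stand-ins for torch's CUDA machinery:

```python
MEMORY_BUDGET = 4096  # hypothetical memory limit, in arbitrary units


class OutOfMemoryError(RuntimeError):
    """Stand-in for torch.OutOfMemoryError."""


def try_batch(batch_size: int) -> int:
    """Pretend memory usage grows linearly with batch size."""
    if batch_size > MEMORY_BUDGET:
        raise OutOfMemoryError(f"batch_size={batch_size} exceeds budget")
    return batch_size


def increase_batch_size_until_oom(batch_size: int = 32):
    """Return (history, completed); completed is False if we hit OOM."""
    batch_sizes_and_mem_usages = []
    while True:
        try:
            mem = try_batch(batch_size)
            batch_sizes_and_mem_usages.append((batch_size, mem))
            batch_size *= 2  # double and retry
        except OutOfMemoryError as e:
            # Mirrors the patched except branch: log and report failure.
            print(f"OutOfMemoryError caught! {e}\n\n")
            return batch_sizes_and_mem_usages, False
```

The caller can then pick `batch_sizes_and_mem_usages[-1][0]` as the largest batch size that fit in memory.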
2 changes: 1 addition & 1 deletion examples/multi_hop_query_answering/benchmarking.py
@@ -31,7 +31,7 @@
  args = Namespace()
  args.model = kge_name
  args.scoring_technique = "KvsAll"
- args.path_dataset_folder = "KGs/UMLS"
+ args.dataset_dir = "KGs/UMLS"
  args.num_epochs = 20
  args.batch_size = 1024
  args.lr = 0.1
Expand Down
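The benchmarking change is a one-line rename of the configuration attribute, from `path_dataset_folder` to `dataset_dir`. A hedged sketch of the updated config as it would look after this commit — the `build_args` helper and the default `kge_name` value are illustrative, not part of the repository:

```python
from argparse import Namespace


def build_args(kge_name: str = "Keci") -> Namespace:
    """Assemble the benchmarking config used in the example script."""
    args = Namespace()
    args.model = kge_name
    args.scoring_technique = "KvsAll"
    args.dataset_dir = "KGs/UMLS"  # renamed from path_dataset_folder
    args.num_epochs = 20
    args.batch_size = 1024
    args.lr = 0.1
    return args
```

Any downstream code still reading `args.path_dataset_folder` would need the same rename to stay compatible.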