Merging
sash19 committed Sep 11, 2024
2 parents e81b22d + 66443ad commit 71674bc
Showing 4 changed files with 20 additions and 28 deletions.
2 changes: 1 addition & 1 deletion pyproject.toml
@@ -1,5 +1,5 @@
[build-system]
requires = ["setuptools", "versioneer", "tomli; python_version < \"3.11\""]
requires = ["setuptools >= 74.1.2", "versioneer", "tomli; python_version < \"3.11\"", "wheel"]
build-backend = "setuptools.build_meta"

[project]
3 changes: 0 additions & 3 deletions scalable/__init__.py
@@ -10,6 +10,3 @@

__version__ = get_versions()["version"]
del get_versions

from . import _version
__version__ = _version.get_versions()['version']
39 changes: 18 additions & 21 deletions scalable/client.py
@@ -42,11 +42,11 @@ def submit(
Parameters
----------
func : callable
Callable to be scheduled as ``func(*args **kwargs)``. If ``func`` returns a
coroutine, it will be run on the main event loop of a worker. Otherwise
``func`` will be run in a worker's task executor pool (see
``Worker.executors`` for more information.)
*args : tuple
Callable to be scheduled as ``func(*args,**kwargs)``. If ``func``
returns a coroutine, it will be run on the main event loop of a
worker. Otherwise ``func`` will be run in a worker's task executor
pool (see ``Worker.executors`` for more information.)
\*args : tuple
Optional positional arguments
key : str
Unique identifier for the task. Defaults to function-name and hash
@@ -72,29 +72,30 @@ def submit(
may be performed on workers that are not in the `workers` set(s).
actor : bool (default False)
Whether this task should exist on the worker as a stateful actor.
See :doc:`actors` for additional details.
actors : bool (default False)
Alias for `actor`
pure : bool (defaults to True)
Whether or not the function is pure. Set ``pure=False`` for
impure functions like ``np.random.random``. Note that if both
``actor`` and ``pure`` kwargs are set to True, then the value
of ``pure`` will be reverted to False, since an actor is stateful.
See :ref:`pure functions` for more details.
**kwargs
\*\*kwargs : dict
Optional key-value pairs to be passed to the function.
Examples
--------
>>> c = client.submit(add, a, b) # doctest: +SKIP
>>> c = client.submit(add, a, b)
Notes
-----
The current implementation of a task graph resolution searches for occurrences of ``key``
and replaces it with a corresponding ``Future`` result. That can lead to unwanted
substitution of strings passed as arguments to a task if these strings match some ``key``
that already exists on a cluster. To avoid these situations it is required to use unique
values if a ``key`` is set manually.
See https://github.com/dask/dask/issues/9969 to track progress on resolving this issue.
The current implementation of a task graph resolution searches for
occurrences of ``key`` and replaces it with a corresponding ``Future``
result. That can lead to unwanted substitution of strings passed as
arguments to a task if these strings match some ``key`` that already
exists on a cluster. To avoid these situations it is required to use
unique values if a ``key`` is set manually. See
https://github.com/dask/dask/issues/9969 to track progress on resolving
this issue.
Returns
-------
@@ -105,14 +105,10 @@ def submit(
Raises
------
TypeError
If 'func' is not callable, a TypeError is raised
If 'func' is not callable, a TypeError is raised.
ValueError
If 'allow_other_workers'is True and 'workers' is None, a
ValueError is raised
See Also
--------
Client.map : Submit on many arguments at once
ValueError is raised.
"""
resources = None
if tag is not None:
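The docstring above documents the submit API. Below is a minimal usage sketch of that behavior, assuming a dask.distributed-compatible Client (the docstring closely mirrors dask's Client.submit); the add function, the local Client() setup, and the key string are illustrative assumptions and are not part of this commit.

# Minimal sketch of the submit() behavior described in the docstring above.
# Assumes a dask.distributed-compatible Client; the function, arguments,
# and key value are illustrative only.
import random

from dask.distributed import Client


def add(a, b):
    return a + b


if __name__ == "__main__":
    client = Client()  # local cluster, for illustration only

    # Pure function (default pure=True): identical calls may be deduplicated.
    total = client.submit(add, 1, 2)

    # Impure function: pass pure=False so each call is scheduled as a new task.
    noise = client.submit(random.random, pure=False)

    # A manually set key must be unique cluster-wide to avoid the string
    # substitution issue described in the Notes section above.
    tagged = client.submit(add, 10, 20, key="add-10-20-unique-0001")

    print(total.result(), noise.result(), tagged.result())

Per the docstring, setting ``actor=True`` also forces ``pure`` to False, since an actor is stateful.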
4 changes: 1 addition & 3 deletions scalable/utilities.py
@@ -60,8 +60,6 @@ def get_comm_port(logpath=None):

def run_bootstrap():
bootstrap_location = files('scalable').joinpath('scalable_bootstrap.sh')
print(bootstrap_location)
sys.stdout.flush()
result = subprocess.run(["/bin/bash", bootstrap_location], stdin=sys.stdin,
stdout=sys.stdout, stderr=sys.stderr)
if result.returncode != 0:
@@ -194,7 +192,7 @@ class HardwareResources:
A dictionary containing the available number of cpu cores and the
available amount of memory for each allocated node.
active : dict
A dictionary containing the set of nodes allocated for each job
A dictionary containing the set of nodes allocated for each job
requested by the cluster. The jobid is used as a key to a set object
containing the names of all the allocated nodes.
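The HardwareResources docstring above describes two bookkeeping dictionaries. The sketch below shows one plausible shape for them; every key name, unit, and node name here is an assumption for illustration and is not taken from this commit.

# Hypothetical shapes for the attributes described in the docstring above.
# All names, units, and values are made up for illustration.
available = {
    "node01": {"cores": 32, "memory_gb": 128},
    "node02": {"cores": 16, "memory_gb": 64},
}

# active: job id -> set of node names allocated to that job.
active = {
    "4821137": {"node01", "node02"},
}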
