Transition to arraycontext #139

Draft: wants to merge 62 commits into main from towards-array-context

Commits (62)

791dfd3  Initial work towards using array contexts  (inducer, May 13, 2022)
2da9092  update tests to pass actx  (alexfikl, Sep 3, 2022)
3123ec7  sumpy.array_context additions  (alexfikl, Sep 3, 2022)
ecc9d7f  port p2p to arraycontext  (alexfikl, Sep 3, 2022)
2285243  port p2e to arraycontext  (alexfikl, Sep 3, 2022)
c87c528  port e2p to arraycontext  (alexfikl, Sep 3, 2022)
2aa656b  port e2e to arraycontext  (alexfikl, Sep 3, 2022)
57971a5  port tools and toys to arraycontext  (alexfikl, Sep 3, 2022)
1abff07  port test_tools to arraycontext  (alexfikl, Sep 3, 2022)
5d95944  port test_misc to arraycontext  (alexfikl, Sep 3, 2022)
9b4fbde  bump requirements  (alexfikl, Sep 4, 2022)
71e6c65  start porting test_kernels to arraycontext  (alexfikl, Sep 4, 2022)
7086997  more porting in test_kernels  (alexfikl, Sep 5, 2022)
4523020  add arraycontext to docs  (alexfikl, Sep 5, 2022)
aa5a50b  add some annotations to make_loopy_program  (alexfikl, Sep 5, 2022)
bfce329  finish porting in test_kernels  (alexfikl, Sep 5, 2022)
80b1045  port qbx to arraycontext  (alexfikl, Sep 5, 2022)
81b92bd  port curve-pot to arraycontext  (alexfikl, Sep 5, 2022)
5cc1919  add pytools to intersphinx  (alexfikl, Sep 5, 2022)
04bb78a  add assumptions at kernel creation  (alexfikl, Sep 5, 2022)
dd9e7d9  port expansion-toys to arraycontext  (alexfikl, Sep 5, 2022)
5bba7e2  move get_kernel calls to separate line for debugging  (alexfikl, Sep 6, 2022)
2332fd5  add fixed_parameters to make_loopy_program  (alexfikl, Sep 6, 2022)
6da8704  port test_qbx to arraycontext  (alexfikl, Sep 6, 2022)
4aca058  port test_matrixgen to arraycontext  (alexfikl, Sep 6, 2022)
711f938  continue porting fmm to arraycontext  (alexfikl, Sep 6, 2022)
276b42a  update drive_fmm from boxtree  (alexfikl, Sep 8, 2022)
3f8b2b8  Merge branch 'main' into towards-array-context  (alexfikl, Sep 17, 2022)
4c1985d  more work towards getting the fmm working  (alexfikl, Sep 17, 2022)
2c93c4e  fix up fmm tests  (alexfikl, Sep 17, 2022)
ddaa4ad  port distributed to arraycontext  (alexfikl, Sep 17, 2022)
be9c909  add missing actx  (alexfikl, Sep 18, 2022)
fd7e61f  fix kernel return values  (alexfikl, Sep 21, 2022)
c3e35a5  actually loop over all results  (alexfikl, Sep 21, 2022)
f6d6e9d  back up some more dictionary kernel accesses  (alexfikl, Sep 21, 2022)
0fa102f  fix matrix generation  (alexfikl, Sep 21, 2022)
a7bb63a  rip out timing collection  (alexfikl, Sep 25, 2022)
15a15f2  Merge branch 'main' into towards-array-context  (alexfikl, Sep 25, 2022)
567b947  remove unused imports (flake8)  (alexfikl, Sep 25, 2022)
ea7656d  fix return value for form_locals  (alexfikl, Sep 25, 2022)
2fd594d  Merge branch 'main' into towards-array-context  (alexfikl, Sep 26, 2022)
bd5f578  remove ctx arg in KernelComputation  (alexfikl, Sep 26, 2022)
4c3d838  back out some unneeded changes  (alexfikl, Sep 29, 2022)
60e194d  point ci to updated pytential  (alexfikl, Sep 29, 2022)
45a023e  fix kwargs name  (alexfikl, Sep 29, 2022)
39d0757  Merge branch 'main' into towards-array-context  (alexfikl, Oct 17, 2022)
5cbce16  Merge branch 'main' into towards-array-context  (alexfikl, Oct 30, 2022)
e28d295  fix merge  (alexfikl, Oct 30, 2022)
f1efc3a  Merge branch 'main' into towards-array-context  (alexfikl, Nov 6, 2022)
447335d  Merge branch 'main' into towards-array-context  (alexfikl, Nov 28, 2022)
b7df3ce  Merge branch 'main' into towards-array-context  (alexfikl, Jan 11, 2023)
8397a93  Merge branch 'main' into towards-array-context  (alexfikl, Jan 31, 2023)
237dad4  Merge branch 'main' into towards-array-context  (alexfikl, Apr 4, 2023)
6f65431  Merge branch 'main' into towards-array-context  (alexfikl, Apr 26, 2023)
53933e7  docs: add pytools to intersphinx  (alexfikl, Apr 28, 2023)
b6b3e2e  Merge branch 'main' into towards-array-context  (alexfikl, Jun 16, 2023)
b7203e2  fix device handling in p2p  (alexfikl, Jun 16, 2023)
43148e1  Merge branch 'main' into towards-array-context  (alexfikl, Aug 2, 2023)
a791aaf  Merge branch 'main' into towards-array-context  (alexfikl, Aug 5, 2023)
51ac5f5  fix bad merge  (alexfikl, Aug 5, 2023)
d88b9f4  Merge branch 'main' into towards-array-context  (alexfikl, Oct 17, 2023)
4fe4f01  fix bad merge  (alexfikl, Oct 17, 2023)

Files changed

4 changes: 2 additions & 2 deletions .github/workflows/ci.yml

@@ -101,8 +101,8 @@ jobs:
       run: |
         curl -L -O https://tiker.net/ci-support-v0
         . ./ci-support-v0
-        if [[ "$DOWNSTREAM_PROJECT" == "pytential" && "$GITHUB_HEAD_REF" == "e2p" ]]; then
-            DOWNSTREAM_PROJECT=https://github.com/isuruf/pytential.git@e2p
+        if [[ "$DOWNSTREAM_PROJECT" == "pytential" && "$GITHUB_HEAD_REF" == "towards-array-context" ]]; then
+            DOWNSTREAM_PROJECT=https://github.com/alexfikl/pytential.git@towards-array-context
         fi
         test_downstream "$DOWNSTREAM_PROJECT"

13 changes: 8 additions & 5 deletions examples/curve-pot.py

@@ -91,7 +91,7 @@ def draw_pot_figure(aspect_ratio,
         knl_kwargs = {}

     vol_source_knl, vol_target_knl = process_kernel(knl, what_operator)
-    p2p = P2P(actx.context,
+    p2p = P2P(
             source_kernels=(vol_source_knl,),
             target_kernels=(vol_target_knl,),
             exclude_self=False,
@@ -100,7 +100,7 @@ def draw_pot_figure(aspect_ratio,
     lpot_source_knl, lpot_target_knl = process_kernel(knl, what_operator_lpot)

     from sumpy.qbx import LayerPotential
-    lpot = LayerPotential(actx.context,
+    lpot = LayerPotential(
             expansion=expn_class(knl, order=order),
             source_kernels=(lpot_source_knl,),
             target_kernels=(lpot_target_knl,),
@@ -183,7 +183,8 @@ def map_to_curve(t):

     def apply_lpot(x):
         xovsmp = np.dot(fim, x)
-        evt, (y,) = lpot(actx.queue,
+        y, = lpot(
+                actx,
                 sources,
                 ovsmp_sources,
                 actx.from_numpy(centers),
@@ -208,7 +209,8 @@ def apply_lpot(x):
     density = np.cos(mode_nr*2*np.pi*native_t).astype(np.complex128)
     strength = actx.from_numpy(native_curve.speed * native_weights * density)

-    evt, (vol_pot,) = p2p(actx.queue,
+    vol_pot, = p2p(
+            actx,
             targets,
             sources,
             [strength], **volpot_kwargs)
@@ -218,7 +220,8 @@ def apply_lpot(x):
     ovsmp_strength = actx.from_numpy(
         ovsmp_curve.speed * ovsmp_weights * ovsmp_density)

-    evt, (curve_pot,) = lpot(actx.queue,
+    curve_pot, = lpot(
+            actx,
             sources,
             ovsmp_sources,
             actx.from_numpy(centers),

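The example change above is representative of the new calling convention: P2P and LayerPotential are constructed without a cl.Context, invoked with the array context instead of a command queue, and no longer return a PyOpenCL event alongside the results. A minimal sketch of that convention, mirroring the calls visible in this diff; the Laplace kernel, the random geometry, and any constructor arguments beyond those shown in the hunks are assumptions:

    import numpy as np
    import pyopencl as cl

    from sumpy.array_context import PyOpenCLArrayContext
    from sumpy.kernel import LaplaceKernel
    from sumpy.p2p import P2P

    actx = PyOpenCLArrayContext(
            cl.CommandQueue(cl.create_some_context()), force_device_scalars=True)

    knl = LaplaceKernel(2)
    # constructed without a cl.Context
    p2p = P2P(source_kernels=(knl,), target_kernels=(knl,), exclude_self=False)

    rng = np.random.default_rng(seed=42)
    targets = actx.from_numpy(rng.uniform(size=(2, 50)))
    sources = actx.from_numpy(rng.uniform(size=(2, 100)))
    strength = actx.from_numpy(rng.uniform(size=100))

    # invoked with the array context instead of a queue; no event in the return value
    pot, = p2p(actx, targets, sources, [strength])
    print(actx.to_numpy(pot)[:5])
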
15 changes: 7 additions & 8 deletions examples/expansion-toys.py

@@ -23,7 +23,6 @@ def main():
     actx = PyOpenCLArrayContext(queue, force_device_scalars=True)

     tctx = t.ToyContext(
-            actx.context,
             # LaplaceKernel(2),
             YukawaKernel(2), extra_kernel_kwargs={"lam": 5},
             # HelmholtzKernel(2), extra_kernel_kwargs={"k": 0.3},
@@ -37,22 +36,22 @@
     fp = FieldPlotter([3, 0], extent=8)

     if USE_MATPLOTLIB:
-        t.logplot(fp, pt_src, cmap="jet")
+        t.logplot(actx, fp, pt_src, cmap="jet")
         plt.colorbar()
         plt.show()

-    mexp = t.multipole_expand(pt_src, [0, 0], 5)
-    mexp2 = t.multipole_expand(mexp, [0, 0.25])  # noqa: F841
-    lexp = t.local_expand(mexp, [3, 0])
-    lexp2 = t.local_expand(lexp, [3, 1], 3)
+    mexp = t.multipole_expand(actx, pt_src, [0, 0], order=5)
+    mexp2 = t.multipole_expand(actx, mexp, [0, 0.25])  # noqa: F841
+    lexp = t.local_expand(actx, mexp, [3, 0])
+    lexp2 = t.local_expand(actx, lexp, [3, 1], order=3)

     # diff = mexp - pt_src
     # diff = mexp2 - pt_src
     diff = lexp2 - pt_src

-    print(t.l_inf(diff, 1.2, center=lexp2.center))
+    print(t.l_inf(actx, diff, 1.2, center=lexp2.center))
     if USE_MATPLOTLIB:
-        t.logplot(fp, diff, cmap="jet", vmin=-3, vmax=0)
+        t.logplot(actx, fp, diff, cmap="jet", vmin=-3, vmax=0)
         plt.colorbar()
         plt.show()

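For the expansion toys, ToyContext no longer takes a cl.Context and the expansion helpers take the array context as their first argument. A condensed sketch following the calls in this diff; the PointSources construction is not shown in the hunks and its signature is assumed unchanged, and the Laplace kernel stands in for the Yukawa kernel used in the example:

    import numpy as np
    import pyopencl as cl

    import sumpy.toys as t
    from sumpy.array_context import PyOpenCLArrayContext
    from sumpy.kernel import LaplaceKernel

    actx = PyOpenCLArrayContext(
            cl.CommandQueue(cl.create_some_context()), force_device_scalars=True)

    # previously: t.ToyContext(actx.context, LaplaceKernel(2))
    tctx = t.ToyContext(LaplaceKernel(2))
    pt_src = t.PointSources(tctx, np.random.rand(2, 200) - 0.5, np.ones(200))

    # the expansion helpers now take the array context explicitly
    mexp = t.multipole_expand(actx, pt_src, [0, 0], order=5)
    lexp = t.local_expand(actx, mexp, [3, 0])
    diff = lexp - pt_src

    # error on a ball of radius 1.2 around the local expansion center
    print(t.l_inf(actx, diff, 1.2, center=lexp.center))
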
2 changes: 1 addition & 1 deletion requirements.txt

@@ -10,7 +10,7 @@ git+https://github.com/inducer/pytools.git#egg=pytools
 git+https://github.com/inducer/pymbolic.git#egg=pymbolic
 git+https://github.com/inducer/islpy.git#egg=islpy
 git+https://github.com/inducer/pyopencl.git#egg=pyopencl
-git+https://github.com/inducer/boxtree.git#egg=boxtree
+git+https://github.com/alexfikl/boxtree.git@towards-array-context#egg=boxtree
 git+https://github.com/inducer/loopy.git#egg=loopy
 git+https://github.com/inducer/arraycontext.git#egg=arraycontext
 git+https://github.com/inducer/pyfmmlib.git#egg=pyfmmlib

5 changes: 3 additions & 2 deletions sumpy/__init__.py

@@ -24,7 +24,8 @@
 from sumpy.p2p import P2P, P2PFromCSR
 from sumpy.p2e import P2EFromSingleBox, P2EFromCSR
 from sumpy.e2p import E2PFromSingleBox, E2PFromCSR
-from sumpy.e2e import (E2EFromCSR, E2EFromChildren, E2EFromParent,
+from sumpy.e2e import (
+    E2EFromCSR, E2EFromChildren, E2EFromParent,
     M2LUsingTranslationClassesDependentData,
     M2LGenerateTranslationClassesDependentData, M2LPreprocessMultipole,
     M2LPostprocessLocal)
@@ -41,7 +42,7 @@
     "M2LPreprocessMultipole", "M2LPostprocessLocal"]


-code_cache = WriteOncePersistentDict("sumpy-code-cache-v6-"+VERSION_TEXT)
+code_cache = WriteOncePersistentDict(f"sumpy-code-cache-v6-{VERSION_TEXT}")


 # {{{ optimization control

54 changes: 46 additions & 8 deletions sumpy/array_context.py

@@ -20,33 +20,71 @@
 THE SOFTWARE.
 """

+from typing import Any, Dict, List, Optional, Union
+
+import numpy as np
+
 from boxtree.array_context import PyOpenCLArrayContext as PyOpenCLArrayContextBase
 from arraycontext.pytest import (
         _PytestPyOpenCLArrayContextFactoryWithClass,
         register_pytest_array_context_factory)
+from pytools.tag import ToTagSetConvertible

 __doc__ = """
 Array Context
 -------------

+.. autofunction:: make_loopy_program
 .. autoclass:: PyOpenCLArrayContext
 """


 # {{{ PyOpenCLArrayContext

+def make_loopy_program(
+        domains, statements,
+        kernel_data: Optional[List[Any]] = None, *,
+        name: str = "sumpy_loopy_kernel",
+        silenced_warnings: Optional[Union[List[str], str]] = None,
+        assumptions: Optional[Union[List[str], str]] = None,
+        fixed_parameters: Optional[Dict[str, Any]] = None,
+        index_dtype: Optional["np.dtype"] = None,
+        tags: ToTagSetConvertible = None):
+    """Return a :class:`loopy.LoopKernel` suitable for use with
+    :meth:`arraycontext.ArrayContext.call_loopy`.
+    """
+    if kernel_data is None:
+        kernel_data = [...]
+
+    if silenced_warnings is None:
+        silenced_warnings = []
+
+    import loopy as lp
+    from arraycontext.loopy import _DEFAULT_LOOPY_OPTIONS
+
+    return lp.make_kernel(
+            domains,
+            statements,
+            kernel_data=kernel_data,
+            options=_DEFAULT_LOOPY_OPTIONS,
+            default_offset=lp.auto,
+            name=name,
+            lang_version=lp.MOST_RECENT_LANGUAGE_VERSION,
+            assumptions=assumptions,
+            fixed_parameters=fixed_parameters,
+            silenced_warnings=silenced_warnings,
+            index_dtype=index_dtype,
+            tags=tags)
+
+
 class PyOpenCLArrayContext(PyOpenCLArrayContextBase):
     def transform_loopy_program(self, t_unit):
-        default_ep = t_unit.default_entrypoint
-        options = default_ep.options
+        return t_unit

-        if not (options.return_dict and options.no_numpy):
-            raise ValueError("Loopy kernel passed to call_loopy must "
-                    "have return_dict and no_numpy options set. "
-                    "Did you use arraycontext.make_loopy_program "
-                    "to create this kernel?")
-
-        return super().transform_loopy_program(t_unit)
+
+def is_cl_cpu(actx: PyOpenCLArrayContext) -> bool:
+    import pyopencl as cl
+    return all(dev.type & cl.device_type.CPU for dev in actx.context.devices)

 # }}}

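The new make_loopy_program wrapper forwards assumptions, fixed_parameters, silenced_warnings and index_dtype to loopy, while the sumpy PyOpenCLArrayContext now leaves kernels untransformed. A hypothetical usage sketch (the kernel below is illustrative and not part of the PR); it relies on loopy inferring n from the shape of the passed array and on call_loopy returning a dictionary because the generated options request it:

    import numpy as np
    import pyopencl as cl

    from sumpy.array_context import PyOpenCLArrayContext, make_loopy_program

    actx = PyOpenCLArrayContext(
            cl.CommandQueue(cl.create_some_context()), force_device_scalars=True)

    # a toy element-wise kernel built through the wrapper
    t_unit = make_loopy_program(
            "{[i]: 0 <= i < n}",
            "out[i] = 2 * x[i]",
            name="double_it",
            assumptions="n > 0")

    x = actx.from_numpy(np.arange(16, dtype=np.float64))
    result = actx.call_loopy(t_unit, x=x)
    print(actx.to_numpy(result["out"]))
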
71 changes: 40 additions & 31 deletions sumpy/distributed.py

@@ -20,64 +20,72 @@
 THE SOFTWARE.
 """

-from boxtree.distributed.calculation import DistributedExpansionWrangler
+from boxtree.distributed.calculation import DistributedExpansionWranglerMixin

 from sumpy.fmm import SumpyExpansionWrangler
-import pyopencl as cl
+from sumpy.array_context import PyOpenCLArrayContext


 class DistributedSumpyExpansionWrangler(
-        DistributedExpansionWrangler, SumpyExpansionWrangler):
+        DistributedExpansionWranglerMixin, SumpyExpansionWrangler):
     def __init__(
-            self, context, comm, tree_indep, local_traversal, global_traversal,
+            self, actx: PyOpenCLArrayContext,
+            comm, tree_indep, local_traversal, global_traversal,
             dtype, fmm_level_to_order, communicate_mpoles_via_allreduce=False,
-            **kwarg):
-        DistributedExpansionWrangler.__init__(
-            self, context, comm, global_traversal, True,
-            communicate_mpoles_via_allreduce=communicate_mpoles_via_allreduce)
+            **kwargs):
         SumpyExpansionWrangler.__init__(
-            self, tree_indep, local_traversal, dtype, fmm_level_to_order, **kwarg)
+            self, tree_indep, local_traversal, dtype, fmm_level_to_order,
+            **kwargs)
+
+        self.comm = comm
+        self.traversal_in_device_memory = True
+        self.global_traversal = global_traversal
+        self.communicate_mpoles_via_allreduce = communicate_mpoles_via_allreduce

-    def distribute_source_weights(self, src_weight_vecs, src_idx_all_ranks):
-        src_weight_vecs_host = [src_weight.get() for src_weight in src_weight_vecs]
+    def distribute_source_weights(self,
+            actx: PyOpenCLArrayContext, src_weight_vecs, src_idx_all_ranks):
+        src_weight_vecs_host = [
+            actx.to_numpy(src_weight) for src_weight in src_weight_vecs
+            ]

         local_src_weight_vecs_host = super().distribute_source_weights(
-            src_weight_vecs_host, src_idx_all_ranks)
+            actx, src_weight_vecs_host, src_idx_all_ranks)

         local_src_weight_vecs_device = [
-            cl.array.to_device(src_weight.queue, local_src_weight)
-            for local_src_weight, src_weight in
-            zip(local_src_weight_vecs_host, src_weight_vecs)]
+            actx.from_numpy(local_src_weight)
+            for local_src_weight in local_src_weight_vecs_host]

         return local_src_weight_vecs_device

-    def gather_potential_results(self, potentials, tgt_idx_all_ranks):
-        mpi_rank = self.comm.Get_rank()
-
-        potentials_host_vec = [potentials_dev.get() for potentials_dev in potentials]
+    def gather_potential_results(self,
+            actx: PyOpenCLArrayContext, potentials, tgt_idx_all_ranks):
+        potentials_host_vec = [
+            actx.to_numpy(potentials_dev) for potentials_dev in potentials
+            ]

         gathered_potentials_host_vec = []
         for potentials_host in potentials_host_vec:
             gathered_potentials_host_vec.append(
-                super().gather_potential_results(potentials_host, tgt_idx_all_ranks))
+                super().gather_potential_results(
+                    actx, potentials_host, tgt_idx_all_ranks))

-        if mpi_rank == 0:
+        if self.is_mpi_root:
             from pytools.obj_array import make_obj_array
             return make_obj_array([
-                cl.array.to_device(potentials_dev.queue, gathered_potentials_host)
-                for gathered_potentials_host, potentials_dev in
-                zip(gathered_potentials_host_vec, potentials)])
+                actx.from_numpy(gathered_potentials_host)
+                for gathered_potentials_host in gathered_potentials_host_vec
+                ])
         else:
             return None

     def reorder_sources(self, source_array):
-        if self.comm.Get_rank() == 0:
-            return source_array.with_queue(source_array.queue)[
-                self.global_traversal.tree.user_source_ids]
+        if self.is_mpi_root:
+            return source_array[self.global_traversal.tree.user_source_ids]
         else:
             return source_array

     def reorder_potentials(self, potentials):
-        if self.comm.Get_rank() == 0:
+        if self.is_mpi_root:
             from pytools.obj_array import obj_array_vectorize
             import numpy as np
             assert (
@@ -91,8 +99,9 @@ def reorder(x):
         else:
             return None

-    def communicate_mpoles(self, mpole_exps, return_stats=False):
-        mpole_exps_host = mpole_exps.get()
-        stats = super().communicate_mpoles(mpole_exps_host, return_stats)
+    def communicate_mpoles(self,
+            actx: PyOpenCLArrayContext, mpole_exps, return_stats=False):
+        mpole_exps_host = actx.to_numpy(mpole_exps)
+        stats = super().communicate_mpoles(actx, mpole_exps_host, return_stats)
         mpole_exps[:] = mpole_exps_host
         return stats

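The recurring change in this file is the transfer idiom: queue-bound cl.array calls are replaced by array-context round trips, and explicit rank checks become is_mpi_root, presumably provided by the boxtree mixin. A standalone sketch of just the host/device transfer pattern, with no MPI or wrangler involved and names chosen for illustration:

    import numpy as np
    import pyopencl as cl

    from sumpy.array_context import PyOpenCLArrayContext

    actx = PyOpenCLArrayContext(
            cl.CommandQueue(cl.create_some_context()), force_device_scalars=True)

    mpole_exps = actx.from_numpy(np.linspace(0.0, 1.0, 8))

    # before: mpole_exps_host = mpole_exps.get()
    mpole_exps_host = actx.to_numpy(mpole_exps)

    # ... mpole_exps_host would be communicated over MPI at this point ...

    # before: cl.array.to_device(queue, mpole_exps_host)
    mpole_exps_dev = actx.from_numpy(mpole_exps_host)
    assert np.allclose(actx.to_numpy(mpole_exps_dev), mpole_exps_host)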