From 19d19a375df2b7784bcecb7fb107aa5a4ade5ff7 Mon Sep 17 00:00:00 2001
From: Fredrik Ekre
Date: Tue, 26 Jul 2022 20:10:25 +0200
Subject: [PATCH] Add documentation about PartitionedArrays.jl integration.

---
 docs/make.jl                    |  1 +
 docs/src/hypre-matrix-vector.md | 76 +++++++++++++++++++++++++++++++++
 2 files changed, 77 insertions(+)
 create mode 100644 docs/src/hypre-matrix-vector.md

diff --git a/docs/make.jl b/docs/make.jl
index a11fdd2..2767faa 100644
--- a/docs/make.jl
+++ b/docs/make.jl
@@ -16,6 +16,7 @@ makedocs(
     modules = [HYPRE],
     pages = Any[
         "Home" => "index.md",
+        "hypre-matrix-vector.md",
         "libhypre.md",
     ],
     draft = liveserver,

diff --git a/docs/src/hypre-matrix-vector.md b/docs/src/hypre-matrix-vector.md
new file mode 100644
index 0000000..651fa37
--- /dev/null
+++ b/docs/src/hypre-matrix-vector.md
@@ -0,0 +1,76 @@
# Matrix/vector representation

HYPRE.jl defines the structs `HYPREMatrix` and `HYPREVector`, which represent HYPRE's
data structures. Specifically, they use the [IJ System
Interface](https://hypre.readthedocs.io/en/latest/api-int-ij.html), which supports
general sparse matrices.

HYPRE.jl defines conversion methods from standard Julia data structures to `HYPREMatrix`
and `HYPREVector`, respectively.

## PartitionedArrays.jl (multi-process)

HYPRE.jl integrates seamlessly with `PSparseMatrix` and `PVector` from the
[PartitionedArrays.jl](https://github.com/fverdugo/PartitionedArrays.jl) package. These can
be passed directly to `solve` and `solve!` (see the note on constructing the `solver` at
the end of this page). Internally, this constructs a `HYPREMatrix` and `HYPREVector`s,
solves the system, and converts the solution back to a `PVector`.

The `HYPREMatrix` constructor supports both `SparseMatrixCSC` and `SparseMatrixCSR` as
storage backends for the `PSparseMatrix`. However, since HYPRE's internal storage is also
CSR-based, it can be *slightly* more resource-efficient to use `SparseMatrixCSR`.

The constructors also support both PartitionedArrays.jl backends: when using the `MPI`
backend, the communicator of the `PSparseMatrix`/`PVector` is also used for the
`HYPREMatrix`/`HYPREVector`; when using the `Sequential` backend, a single-process setup
is assumed, and the global communicator `MPI.COMM_WORLD` is used.

**Example pseudocode**

```julia
# Assemble linear system (see documentation for PartitionedArrays)
A = PSparseMatrix(...)
b = PVector(...)

# Solve with zero initial guess
x = solve(solver, A, b)

# In-place solve with x as initial guess
x = PVector(...)
solve!(solver, x, A, b)
```

---

It is also possible to construct the arrays explicitly. This can save some resources when
performing multiple consecutive solves (e.g. multiple time steps or Newton iterations). To
copy data back and forth between `PSparseMatrix`/`PVector` and `HYPREMatrix`/`HYPREVector`,
use the `copy!` function.

**Example pseudocode**

```julia
A = PSparseMatrix(...)
x = PVector(...)
b = PVector(...)

# Construct the HYPRE arrays
A_h = HYPREMatrix(A)
x_h = HYPREVector(x)
b_h = HYPREVector(b)

# Solve
solve!(solver, x_h, A_h, b_h)

# Copy solution back to x
copy!(x, x_h)
```


## `SparseMatrixCSC` / `SparseMatrixCSR` (single-process)

A `HYPREMatrix`/`HYPREVector` can also be constructed directly from a
`SparseMatrixCSC`/`SparseMatrixCSR` and a `Vector` when all data lives on a single
process; see the tentative sketch at the end of this page.

## `SparseMatrixCSC` / `SparseMatrixCSR` (multi-process)

!!! warning
    This interface is not yet finalized and is subject to change.
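
---

**Constructing a solver**

The examples on this page assume that a `solver` has already been set up. The following
is a minimal sketch of how that might look, assuming HYPRE.jl's top-level wrapper names
`HYPRE.Init`, `HYPRE.BoomerAMG`, and `HYPRE.PCG` with a `Precond` keyword; consult the
package documentation for the exact API.

```julia
using HYPRE

# Initialize HYPRE (and MPI) before creating any solvers
HYPRE.Init()

# Conjugate gradients preconditioned with BoomerAMG
# (assumed wrapper names; consult the package documentation)
precond = HYPRE.BoomerAMG()
solver = HYPRE.PCG(; Precond = precond)
```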
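
**Single-process sketch**

As referenced in the single-process section above, the following is a minimal sketch. It
assumes that the `HYPREMatrix`/`HYPREVector` constructors accept a plain
`SparseMatrixCSC` (or `SparseMatrixCSR`) and `Vector` directly, and that `copy!` is
extended for `Vector`/`HYPREVector` pairs, by analogy with the PartitionedArrays.jl case;
check the constructor docstrings for the actual signatures.

```julia
using HYPRE
using SparseArrays

# Assemble a small SPD test system: a 1D Laplacian stored as SparseMatrixCSC
A = spdiagm(-1 => -ones(99), 0 => 2 * ones(100), 1 => -ones(99))
b = rand(100)
x = zeros(100)

# Convert to HYPRE's format (assumed constructors, by analogy with
# the PartitionedArrays.jl constructors above)
A_h = HYPREMatrix(A)
b_h = HYPREVector(b)
x_h = HYPREVector(x)

# Solve, then copy the solution back to the Julia vector
# (assuming copy! works for Vector/HYPREVector like for PVector)
solve!(solver, x_h, A_h, b_h)
copy!(x, x_h)
```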