Interface](https://hypre.readthedocs.io/en/latest/api-int-ij.html) which can be used for
general sparse matrices.

HYPRE.jl defines conversion methods from standard Julia data structures to `HYPREMatrix`
and `HYPREVector`, respectively. See the following sections for details:

```@contents
Pages = ["hypre-matrix-vector.md"]
MinDepth = 2
```
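
As a quick first example, the sketch below converts plain Julia data structures on a
single process. It assumes that the no-communicator methods `HYPREMatrix(A)` and
`HYPREVector(b)` default to the full index range; the sections below describe the general
constructors.

```julia
using HYPRE, SparseArrays, LinearAlgebra

HYPRE.Init()  # initialize HYPRE (and MPI) before creating any HYPRE objects

# Hypothetical example data: a sparse, diagonally dominant matrix and a right hand side
A = sprand(100, 100, 0.05)
A = A + A' + 100 * I
b = rand(100)

A_h = HYPREMatrix(A)  # SparseMatrixCSC -> HYPREMatrix
b_h = HYPREVector(b)  # Vector -> HYPREVector
```
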
## PartitionedArrays.jl (multi-process)

HYPRE.jl integrates seamlessly with `PSparseMatrix` and `PVector` from the
PartitionedArrays.jl package: both can be passed directly to `solve` and `solve!`.
Internally this constructs a `HYPREMatrix` and `HYPREVector`s and then converts the
solution back to a `PVector`, as the sketch below shows.
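
For example, assuming `A::PSparseMatrix` and `b::PVector` have already been assembled
(the assembly itself is omitted here), a solve could look like this:

```julia
using HYPRE, PartitionedArrays

HYPRE.Init()

# ... assemble A::PSparseMatrix and b::PVector with PartitionedArrays.jl ...

precond = HYPRE.BoomerAMG()              # BoomerAMG as preconditioner
solver = HYPRE.PCG(; Precond = precond)  # preconditioned conjugate gradients

x = HYPRE.solve(solver, A, b)  # solution returned as a new PVector
HYPRE.solve!(solver, x, A, b)  # in-place variant reusing x
```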

The `HYPREMatrix` constructor supports both `SparseMatrixCSC` and `SparseMatrixCSR` as
storage backends for the `PSparseMatrix`. However, since HYPRE's internal storage is also
CSR based, it can be *slightly* more resource efficient to use `SparseMatrixCSR`.
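
For reference, a local CSR block can be built with SparseMatricesCSR.jl; the sketch below
uses hypothetical COO assembly data and `sparsecsr`, that package's `sparse`-like
constructor:

```julia
using SparseMatricesCSR

# Hypothetical assembly data in COO form
rows = [1, 1, 2, 3]
cols = [1, 2, 2, 3]
vals = [2.0, -1.0, 2.0, 2.0]

# Row-major storage, matching HYPRE's internal CSR layout
A_csr = sparsecsr(rows, cols, vals, 3, 3)
```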

The constructors also support both PartitionedArrays.jl backends: when using the `MPI`
backend, the communicator of the `PSparseMatrix`/`PVector` is also used for the
`HYPREMatrix`/`HYPREVector`; when using the `Sequential` backend, a single-process setup
is assumed and the global communicator `MPI.COMM_WORLD` is used.
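
As a sketch of the `MPI` backend case (assuming, as above, that `A::PSparseMatrix` and
`b::PVector` are assembled on the `MPI` backend), the conversions pick up the
communicator from their input:

```julia
using MPI, HYPRE, PartitionedArrays

MPI.Init()
HYPRE.Init()

# ... assemble A::PSparseMatrix and b::PVector on the MPI backend ...

A_h = HYPREMatrix(A)  # communicator inherited from A
b_h = HYPREVector(b)  # communicator inherited from b
```

Such a program would be launched with multiple MPI ranks, e.g. via MPI.jl's `mpiexecjl`
wrapper.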