diff --git a/Checklists.txt b/Checklists.txt
index 92d3a06a65..c678af307d 100644
--- a/Checklists.txt
+++ b/Checklists.txt
@@ -14,9 +14,6 @@ Checklist for Tagging a New Release
   automatically (if you see a notice about "Trigger TagBot Issue"
   it does not mean that TagBot isn't working or has an issue, it
   is just literally a Github issue used to trigger TagBot to run)
-- TagBot helpfully explains the differences from the previous
-  versions in the Github version entry. This can be useful for
-  updating the NEWS.md file.
 
 Checklist for Updating the Version of Documenter.jl Used
 ----------------------------------------------------------
diff --git a/NDTensors/NEWS.md b/NDTensors/NEWS.md
deleted file mode 100644
index 6ad7f73549..0000000000
--- a/NDTensors/NEWS.md
+++ /dev/null
@@ -1,143 +0,0 @@
-This file is a (mostly) comprehensive list of changes made in each release of NDTensors.jl. For a completely comprehensive but more verbose list, see the [commit history on Github](https://github.com/ITensor/ITensors.jl/commits/main/NDTensors).
-
-While we are in v0.x of the package, we will follow the convention that updating from v0.x.y to v0.x.(y+1) (for example v0.1.15 to v0.1.16) should not break your code, unless you are using internal/undocumented features of the code, while updating from `v0.x.y` to `v0.(x+1).y` might break your code, though we will try to add deprecation warnings when possible, such as for simple cases where the name of a function changes.
-
-Note that as of Julia v1.5, in order to see deprecation warnings you will need to start Julia with `julia --depwarn=yes` (previously they were on by default). Please run your code like this before upgrading between minor versions of the code (for example from v0.1.41 to v0.2.0).
-
-After we release v1 of the package, we will start following [semantic versioning](https://semver.org).
-
-NDTensors v0.1.45 Release Notes
-===============================
-
-Bugs:
-
-- HDF5 Support for Diag Storage (#976)
-
-Enhancements:
-
-- Fix variable declaration warnings (#994)
-- Bump compat to Functors 0.4 (#1031)
-- Bump compat to Compat 4 (4facffe)
-- Refactor and format (#980)
-
-NDTensors v0.1.44 Release Notes
-===============================
-
-Bugs:
-
-- Fix bug contracting rectangular Diag with Dense (#970)
-
-Enhancements:
-
-NDTensors v0.1.43 Release Notes
-===============================
-
-Bugs:
-
-Enhancements:
-
-- Improve functionality for transferring data between CPU and GPU by adding Adapt.jl compatibility (#956)
-- Pass kwargs through to truncate in Dense factorizations (#958)
-
-NDTensors v0.1.42 Release Notes
-===============================
-
-Bugs:
-
-Enhancements:
-
-- Define `map` for Tensor and TensorStorage (b66d1b7)
-- Define `real` and `imag` for Tensor (b66d1b7)
-- Throw error when trying to do an eigendecomposition of Tensor with Infs or NaNs (b66d1b7)
-
-NDTensors v0.1.41 Release Notes
-===============================
-
-Bugs:
-
-Enhancements:
-
-- Fix `truncate!` for `Float32`/`ComplexF32` (#926)
-
-NDTensors v0.1.40 Release Notes
-===============================
-
-Bugs:
-
-Enhancements:
-
-- Add support for `cutoff < 0` and `cutoff = nothing` for disabling truncating according to `cutoff` (#925)
-- Define contraction of Diag with Combiner (#920)
-
-NDTensors v0.1.39 Release Notes
-===============================
-
-Bugs:
-
-Enhancements:
-
-- Fix `svd` and `qr` for empty input left or right indices (#917)
-
-NDTensors v0.1.38 Release Notes
-===============================
-
-Bugs:
-
-Enhancements:
-
-- Clean up QN `svd` code in `ITensors` by handling QN blocks better in `NDTensors` (#906)
-- Clean up `outer` and add GEMM routing for CUDA (#887)
-
-NDTensors v0.1.37 Release Notes
-===============================
-
-Bugs:
-
-Enhancements:
-
-- Add fallbacks for when LAPACK SVD fails (#885)
-
-NDTensors v0.1.36 Release Notes
-===============================
-
-Bugs:
-
-Enhancements:
-
-- Change minimal required Julia version from 1.3 to 1.6 (#849)
-
-NDTensors v0.1.35 Release Notes
-===============================
-
-Bugs:
-
-Enhancements:
-
-- Allow general AbstractArray as data of `Dense` storage `Tensor`/`ITensor` (#848)
-
-NDTensors v0.1.34 Release Notes
-===============================
-
-Bugs:
-
-Enhancements:
-
-- Define `diag(::Tensor)`, `diag(::ITensor)` (#837)
-
-NDTensors v0.1.34 Release Notes
-===============================
-
-Bugs:
-
-Enhancements:
-
-- Fix eltype promotion when dividing Tensor by scalar (#813)
-
-NDTensors v0.1.33 Release Notes
-===============================
-
-Bugs:
-
-Enhancements:
-
-- Use registered subdir version of NDTensors (#780)
diff --git a/NDTensors/Project.toml b/NDTensors/Project.toml
index cf316f2a65..0be02e02b7 100644
--- a/NDTensors/Project.toml
+++ b/NDTensors/Project.toml
@@ -1,7 +1,7 @@
 name = "NDTensors"
 uuid = "23ae76d9-e61a-49c4-8f12-3f1a16adf9cf"
 authors = ["Matthew Fishman <mfishman@flatironinstitute.org>"]
-version = "0.3.1"
+version = "0.3.2"
 
 [deps]
 Accessors = "7d9f7c33-5ae7-4f3b-8dc6-eff91059b697"
diff --git a/NDTensors/src/tensor/tensor.jl b/NDTensors/src/tensor/tensor.jl
index dbf6f214e7..351e86d9c6 100644
--- a/NDTensors/src/tensor/tensor.jl
+++ b/NDTensors/src/tensor/tensor.jl
@@ -277,8 +277,6 @@ array(T::Tensor) = array(dense(T))
 matrix(T::Tensor{<:Number,2}) = array(T)
 vector(T::Tensor{<:Number,1}) = array(T)
 
-isempty(T::Tensor) = isempty(storage(T))
-
 #
 # Helper functions for BlockSparse-type storage
 #
diff --git a/NDTensors/test/test_blocksparse.jl b/NDTensors/test/test_blocksparse.jl
index dda7e7fd01..3ed4379eba 100644
--- a/NDTensors/test/test_blocksparse.jl
+++ b/NDTensors/test/test_blocksparse.jl
@@ -1,10 +1,28 @@
 @eval module $(gensym())
-using NDTensors
-using LinearAlgebra: Hermitian, exp, svd
-using Test: @testset, @test, @test_throws
 using GPUArraysCore: @allowscalar
+using LinearAlgebra: Hermitian, exp, norm, svd
+using NDTensors:
+  NDTensors,
+  BlockSparseTensor,
+  array,
+  blockdims,
+  blockoffsets,
+  blockview,
+  data,
+  dense,
+  dims,
+  eachnzblock,
+  inds,
+  isblocknz,
+  nnz,
+  nnzblocks,
+  randomBlockSparseTensor,
+  store,
+  storage
 include("NDTensorsTestUtils/NDTensorsTestUtils.jl")
 using .NDTensorsTestUtils: default_rtol, devices_list, is_supported_eltype
+using Random: randn!
+using Test: @test, @test_throws, @testset
 
 @testset "BlockSparseTensor basic functionality" begin
   C = nothing
@@ -26,6 +44,7 @@ using .NDTensorsTestUtils: default_rtol, devices_list, is_supported_eltype
   @test blockdims(A, (1, 2)) == (2, 5)
   @test blockdims(A, (2, 1)) == (3, 4)
+  @test !isempty(A)
   @test nnzblocks(A) == 2
   @test nnz(A) == 2 * 5 + 3 * 4
   @test inds(A) == ([2, 3], [4, 5])
@@ -102,6 +121,30 @@ using .NDTensorsTestUtils: default_rtol, devices_list, is_supported_eltype
   @test conj(data(store(A))) == data(store(conj(A)))
   @test typeof(conj(A)) <: BlockSparseTensor
 
+  @testset "No blocks" begin
+    T = dev(BlockSparseTensor{elt}(Tuple{Int,Int}[], [2, 2], [2, 2]))
+    @test nnzblocks(T) == 0
+    @test size(T) == (4, 4)
+    @test length(T) == 16
+    @test !isempty(T)
+    @test isempty(storage(T))
+    @test nnz(T) == 0
+    @test eltype(T) == elt
+    @test norm(T) == 0
+  end
+
+  @testset "Empty" begin
+    T = dev(BlockSparseTensor{elt}(Tuple{Int,Int}[], Int[], Int[]))
+    @test nnzblocks(T) == 0
+    @test size(T) == (0, 0)
+    @test length(T) == 0
+    @test isempty(T)
+    @test isempty(storage(T))
+    @test nnz(T) == 0
+    @test eltype(T) == elt
+    @test norm(T) == 0
+  end
+
   @testset "Random constructor" begin
     T = dev(randomBlockSparseTensor(elt, [(1, 1), (2, 2)], ([2, 2], [2, 2])))
     @test nnzblocks(T) == 2
diff --git a/docs/src/UpgradeGuide_0.1_to_0.2.md b/docs/src/UpgradeGuide_0.1_to_0.2.md
index 9d11c58d0d..9a465243a6 100644
--- a/docs/src/UpgradeGuide_0.1_to_0.2.md
+++ b/docs/src/UpgradeGuide_0.1_to_0.2.md
@@ -6,7 +6,7 @@ The main breaking changes in ITensor.jl v0.2 involve changes to the `ITensor`, `
 
 In addition, we have moved development of
 NDTensors.jl into ITensors.jl to simplify the development process until NDTensors is more stable and can be a standalone package. Again, see below for more details.
 
-For a more comprehensive list of changes, see the [change log](https://github.com/ITensor/ITensors.jl/blob/main/NEWS.md) and the [commit history on Github](https://github.com/ITensor/ITensors.jl/commits/main).
+For a more comprehensive list of changes, see the [commit history on Github](https://github.com/ITensor/ITensors.jl/commits/main).
 
 If you have issues upgrading, please reach out by [raising an issue on Github](https://github.com/ITensor/ITensors.jl/issues/new) or asking a question on the [ITensor support forum](http://itensor.org/support/).
diff --git a/docs/src/index.md b/docs/src/index.md
index 905cbe4836..b1e9baa9f4 100644
--- a/docs/src/index.md
+++ b/docs/src/index.md
@@ -35,8 +35,7 @@ Development of ITensor is supported by the Flatiron Institute, a division of the
 
 - Jun 09, 2021: ITensors.jl v0.2 has been released, with a few breaking changes
 as well as a variety of bug fixes and new features. Take a look at the
 [upgrade guide](https://itensor.github.io/ITensors.jl/stable/UpgradeGuide_0.1_to_0.2.html)
-for help upgrading your code as well as the [change log](https://github.com/ITensor/ITensors.jl/blob/main/NEWS.md)
-for a comprehensive list of changes.
+for help upgrading your code.
 
 ## Installation
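The key behavioral change in this diff is the removal of the `isempty(T::Tensor) = isempty(storage(T))` overload from `tensor.jl`. Since `Tensor` is an `AbstractArray`, `isempty` now falls back to Base's generic definition (`length(T) == 0`), which distinguishes a tensor with no stored blocks from a genuinely zero-size tensor. A minimal sketch of the new semantics, mirroring the "No blocks" and "Empty" testsets added above (constructor signatures are taken from the diff; this assumes the NDTensors API as shown):

```julia
using NDTensors: BlockSparseTensor, storage, nnz

# A 4x4 block-sparse tensor with no stored blocks: its storage is
# empty, but the tensor itself is not (it has 16 logical elements).
T = BlockSparseTensor{Float64}(Tuple{Int,Int}[], [2, 2], [2, 2])
@assert !isempty(T)          # Base fallback: length(T) == 16 != 0
@assert isempty(storage(T))  # no block data is actually stored
@assert nnz(T) == 0

# A 0x0 tensor: genuinely empty under the Base definition.
E = BlockSparseTensor{Float64}(Tuple{Int,Int}[], Int[], Int[])
@assert isempty(E)           # length(E) == 0
```

Before this change, both tensors above would have reported `isempty(T) == true`, since the old overload only inspected the storage.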