diff --git a/.github/ISSUE_TEMPLATE/03_ITensorGPU_bug_report.md b/.github/ISSUE_TEMPLATE/03_ITensorGPU_bug_report.md
deleted file mode 100644
index 7fbd2619a8..0000000000
--- a/.github/ISSUE_TEMPLATE/03_ITensorGPU_bug_report.md
+++ /dev/null
@@ -1,60 +0,0 @@
----
-name: ITensorGPU.jl bug report
-about: Create a bug report to help us improve ITensorGPU.jl
-title: "[ITensorGPU] [BUG] YOUR SHORT DESCRIPTION OF THE BUG HERE"
-labels: ["ITensorGPU", "bug"]
-assignees: ''
-
----
-
-**Description of bug**
-
-Please give a brief description of the bug or unexpected behavior here.
-
-**Minimal code demonstrating the bug or unexpected behavior**
-
-If applicable, provide a minimal code that can be run to demonstrate the bug or unexpected behavior.
-
-If you are unable to construct a minimal code that demonstrates the bug or unexpected behavior, provide detailed steps for how to reproduce the behavior you are seeing.
-
-Minimal runnable code
-
-```julia
-[YOUR MINIMAL RUNNABLE CODE HERE]
-```
-
-
-
-**Expected output or behavior**
-
-Describe what you expected to happen.
-
-If you provided a minimal code that can be run to demonstrate the bug or unexpected behavior, describe what you expected the output would be.
-
-
-**Actual output or behavior**
-
-Describe what actually happened.
-
-If you provided a minimal code that demonstrates the bug or unexpected behavior, provide the output you get from that code. If the code leads to an error or warning, include the full error or warning below.
-
-Output of minimal runnable code
-
-```julia
-[OUTPUT OF YOUR MINIMAL RUNNABLE CODE HERE]
-```
-
-
-
-**Version information**
-
- - Output from `versioninfo()`:
-```julia
-julia> versioninfo()
-[YOUR OUTPUT HERE]
-```
- - Output from `using Pkg; Pkg.status("ITensors")`:
-```julia
-julia> using Pkg; Pkg.status("ITensors")
-[YOUR OUTPUT HERE]
-```
diff --git a/.github/ISSUE_TEMPLATE/03_ITensorGPU_feature_request.md b/.github/ISSUE_TEMPLATE/03_ITensorGPU_feature_request.md
deleted file mode 100644
index 98530af229..0000000000
--- a/.github/ISSUE_TEMPLATE/03_ITensorGPU_feature_request.md
+++ /dev/null
@@ -1,24 +0,0 @@
----
-name: ITensorGPU.jl feature request
-about: Suggest an idea for ITensorGPU.jl
-title: "[ITensorGPU] [ENHANCEMENT] YOUR SHORT DESCRIPTION OF THE FEATURE REQUEST HERE"
-labels: ["ITensorGPU", "enhancement"]
-assignees: ''
-
----
-
-**Is your feature request related to a problem? Please describe.**
-
-A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]
-
-**Describe the solution you'd like**
-
-A clear and concise description of what you want to happen.
-
-**Describe alternatives you've considered**
-
-A clear and concise description of any alternative solutions or features you've considered.
-
-**Additional context**
-
-Add any other context or screenshots about the feature request here.
diff --git a/.github/ISSUE_TEMPLATE/04_ITensorGaussianMPS_bug_report.md b/.github/ISSUE_TEMPLATE/04_ITensorGaussianMPS_bug_report.md
deleted file mode 100644
index a774199b5a..0000000000
--- a/.github/ISSUE_TEMPLATE/04_ITensorGaussianMPS_bug_report.md
+++ /dev/null
@@ -1,60 +0,0 @@
----
-name: ITensorGaussianMPS.jl bug report
-about: Create a bug report to help us improve ITensorGaussianMPS.jl
-title: "[ITensorGaussianMPS] [BUG] YOUR SHORT DESCRIPTION OF THE BUG HERE"
-labels: ["ITensorGaussianMPS", "bug"]
-assignees: ''
-
----
-
-**Description of bug**
-
-Please give a brief description of the bug or unexpected behavior here.
-
-**Minimal code demonstrating the bug or unexpected behavior**
-
-If applicable, provide a minimal code that can be run to demonstrate the bug or unexpected behavior.
-
-If you are unable to construct a minimal code that demonstrates the bug or unexpected behavior, provide detailed steps for how to reproduce the behavior you are seeing.
-
-Minimal runnable code
-
-```julia
-[YOUR MINIMAL RUNNABLE CODE HERE]
-```
-
-
-
-**Expected output or behavior**
-
-Describe what you expected to happen.
-
-If you provided a minimal code that can be run to demonstrate the bug or unexpected behavior, describe what you expected the output would be.
-
-
-**Actual output or behavior**
-
-Describe what actually happened.
-
-If you provided a minimal code that demonstrates the bug or unexpected behavior, provide the output you get from that code. If the code leads to an error or warning, include the full error or warning below.
-
-Output of minimal runnable code
-
-```julia
-[OUTPUT OF YOUR MINIMAL RUNNABLE CODE HERE]
-```
-
-
-
-**Version information**
-
- - Output from `versioninfo()`:
-```julia
-julia> versioninfo()
-[YOUR OUTPUT HERE]
-```
- - Output from `using Pkg; Pkg.status("ITensors")`:
-```julia
-julia> using Pkg; Pkg.status("ITensors")
-[YOUR OUTPUT HERE]
-```
diff --git a/.github/ISSUE_TEMPLATE/04_ITensorGaussianMPS_feature_request.md b/.github/ISSUE_TEMPLATE/04_ITensorGaussianMPS_feature_request.md
deleted file mode 100644
index c4f75c0112..0000000000
--- a/.github/ISSUE_TEMPLATE/04_ITensorGaussianMPS_feature_request.md
+++ /dev/null
@@ -1,24 +0,0 @@
----
-name: ITensorGaussianMPS.jl feature request
-about: Suggest an idea for ITensorGaussianMPS.jl
-title: "[ITensorGaussianMPS] [ENHANCEMENT] YOUR SHORT DESCRIPTION OF THE FEATURE REQUEST HERE"
-labels: ["ITensorGaussianMPS", "enhancement"]
-assignees: ''
-
----
-
-**Is your feature request related to a problem? Please describe.**
-
-A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]
-
-**Describe the solution you'd like**
-
-A clear and concise description of what you want to happen.
-
-**Describe alternatives you've considered**
-
-A clear and concise description of any alternative solutions or features you've considered.
-
-**Additional context**
-
-Add any other context or screenshots about the feature request here.
diff --git a/.github/ISSUE_TEMPLATE/05_ITensorVisualizationBase_bug_report.md b/.github/ISSUE_TEMPLATE/05_ITensorVisualizationBase_bug_report.md
deleted file mode 100644
index bc7fc64162..0000000000
--- a/.github/ISSUE_TEMPLATE/05_ITensorVisualizationBase_bug_report.md
+++ /dev/null
@@ -1,60 +0,0 @@
----
-name: ITensorVisualizationBase.jl bug report
-about: Create a bug report to help us improve ITensorVisualizationBase.jl
-title: "[ITensorVisualizationBase] [BUG] YOUR SHORT DESCRIPTION OF THE BUG HERE"
-labels: ["ITensorVisualizationBase", "bug"]
-assignees: ''
-
----
-
-**Description of bug**
-
-Please give a brief description of the bug or unexpected behavior here.
-
-**Minimal code demonstrating the bug or unexpected behavior**
-
-If applicable, provide a minimal code that can be run to demonstrate the bug or unexpected behavior.
-
-If you are unable to construct a minimal code that demonstrates the bug or unexpected behavior, provide detailed steps for how to reproduce the behavior you are seeing.
-
-Minimal runnable code
-
-```julia
-[YOUR MINIMAL RUNNABLE CODE HERE]
-```
-
-
-
-**Expected output or behavior**
-
-Describe what you expected to happen.
-
-If you provided a minimal code that can be run to demonstrate the bug or unexpected behavior, describe what you expected the output would be.
-
-
-**Actual output or behavior**
-
-Describe what actually happened.
-
-If you provided a minimal code that demonstrates the bug or unexpected behavior, provide the output you get from that code. If the code leads to an error or warning, include the full error or warning below.
-
-Output of minimal runnable code
-
-```julia
-[OUTPUT OF YOUR MINIMAL RUNNABLE CODE HERE]
-```
-
-
-
-**Version information**
-
- - Output from `versioninfo()`:
-```julia
-julia> versioninfo()
-[YOUR OUTPUT HERE]
-```
- - Output from `using Pkg; Pkg.status("ITensors")`:
-```julia
-julia> using Pkg; Pkg.status("ITensors")
-[YOUR OUTPUT HERE]
-```
diff --git a/.github/ISSUE_TEMPLATE/05_ITensorVisualizationBase_feature_request.md b/.github/ISSUE_TEMPLATE/05_ITensorVisualizationBase_feature_request.md
deleted file mode 100644
index 65142912b7..0000000000
--- a/.github/ISSUE_TEMPLATE/05_ITensorVisualizationBase_feature_request.md
+++ /dev/null
@@ -1,24 +0,0 @@
----
-name: ITensorVisualizationBase.jl feature request
-about: Suggest an idea for ITensorVisualizationBase.jl
-title: "[ITensorVisualizationBase] [ENHANCEMENT] YOUR SHORT DESCRIPTION OF THE FEATURE REQUEST HERE"
-labels: ["ITensorVisualizationBase", "enhancement"]
-assignees: ''
-
----
-
-**Is your feature request related to a problem? Please describe.**
-
-A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]
-
-**Describe the solution you'd like**
-
-A clear and concise description of what you want to happen.
-
-**Describe alternatives you've considered**
-
-A clear and concise description of any alternative solutions or features you've considered.
-
-**Additional context**
-
-Add any other context or screenshots about the feature request here.
diff --git a/.github/ISSUE_TEMPLATE/06_ITensorUnicodePlots_bug_report.md b/.github/ISSUE_TEMPLATE/06_ITensorUnicodePlots_bug_report.md
deleted file mode 100644
index 1da5724dd9..0000000000
--- a/.github/ISSUE_TEMPLATE/06_ITensorUnicodePlots_bug_report.md
+++ /dev/null
@@ -1,60 +0,0 @@
----
-name: ITensorUnicodePlots.jl bug report
-about: Create a bug report to help us improve ITensorUnicodePlots.jl
-title: "[ITensorUnicodePlots] [BUG] YOUR SHORT DESCRIPTION OF THE BUG HERE"
-labels: ["ITensorUnicodePlots", "bug"]
-assignees: ''
-
----
-
-**Description of bug**
-
-Please give a brief description of the bug or unexpected behavior here.
-
-**Minimal code demonstrating the bug or unexpected behavior**
-
-If applicable, provide a minimal code that can be run to demonstrate the bug or unexpected behavior.
-
-If you are unable to construct a minimal code that demonstrates the bug or unexpected behavior, provide detailed steps for how to reproduce the behavior you are seeing.
-
-Minimal runnable code
-
-```julia
-[YOUR MINIMAL RUNNABLE CODE HERE]
-```
-
-
-
-**Expected output or behavior**
-
-Describe what you expected to happen.
-
-If you provided a minimal code that can be run to demonstrate the bug or unexpected behavior, describe what you expected the output would be.
-
-
-**Actual output or behavior**
-
-Describe what actually happened.
-
-If you provided a minimal code that demonstrates the bug or unexpected behavior, provide the output you get from that code. If the code leads to an error or warning, include the full error or warning below.
-
-Output of minimal runnable code
-
-```julia
-[OUTPUT OF YOUR MINIMAL RUNNABLE CODE HERE]
-```
-
-
-
-**Version information**
-
- - Output from `versioninfo()`:
-```julia
-julia> versioninfo()
-[YOUR OUTPUT HERE]
-```
- - Output from `using Pkg; Pkg.status("ITensors")`:
-```julia
-julia> using Pkg; Pkg.status("ITensors")
-[YOUR OUTPUT HERE]
-```
diff --git a/.github/ISSUE_TEMPLATE/06_ITensorUnicodePlots_feature_request.md b/.github/ISSUE_TEMPLATE/06_ITensorUnicodePlots_feature_request.md
deleted file mode 100644
index 61fd9aa80a..0000000000
--- a/.github/ISSUE_TEMPLATE/06_ITensorUnicodePlots_feature_request.md
+++ /dev/null
@@ -1,24 +0,0 @@
----
-name: ITensorUnicodePlots.jl feature request
-about: Suggest an idea for ITensorUnicodePlots.jl
-title: "[ITensorUnicodePlots] [ENHANCEMENT] YOUR SHORT DESCRIPTION OF THE FEATURE REQUEST HERE"
-labels: ["ITensorUnicodePlots", "enhancement"]
-assignees: ''
-
----
-
-**Is your feature request related to a problem? Please describe.**
-
-A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]
-
-**Describe the solution you'd like**
-
-A clear and concise description of what you want to happen.
-
-**Describe alternatives you've considered**
-
-A clear and concise description of any alternative solutions or features you've considered.
-
-**Additional context**
-
-Add any other context or screenshots about the feature request here.
diff --git a/.github/ISSUE_TEMPLATE/07_ITensorMakie_bug_report.md b/.github/ISSUE_TEMPLATE/07_ITensorMakie_bug_report.md
deleted file mode 100644
index d2a4508988..0000000000
--- a/.github/ISSUE_TEMPLATE/07_ITensorMakie_bug_report.md
+++ /dev/null
@@ -1,60 +0,0 @@
----
-name: ITensorMakie.jl bug report
-about: Create a bug report to help us improve ITensorMakie.jl
-title: "[ITensorMakie] [BUG] YOUR SHORT DESCRIPTION OF THE BUG HERE"
-labels: ["ITensorMakie", "bug"]
-assignees: ''
-
----
-
-**Description of bug**
-
-Please give a brief description of the bug or unexpected behavior here.
-
-**Minimal code demonstrating the bug or unexpected behavior**
-
-If applicable, provide a minimal code that can be run to demonstrate the bug or unexpected behavior.
-
-If you are unable to construct a minimal code that demonstrates the bug or unexpected behavior, provide detailed steps for how to reproduce the behavior you are seeing.
-
-Minimal runnable code
-
-```julia
-[YOUR MINIMAL RUNNABLE CODE HERE]
-```
-
-
-
-**Expected output or behavior**
-
-Describe what you expected to happen.
-
-If you provided a minimal code that can be run to demonstrate the bug or unexpected behavior, describe what you expected the output would be.
-
-
-**Actual output or behavior**
-
-Describe what actually happened.
-
-If you provided a minimal code that demonstrates the bug or unexpected behavior, provide the output you get from that code. If the code leads to an error or warning, include the full error or warning below.
-
-Output of minimal runnable code
-
-```julia
-[OUTPUT OF YOUR MINIMAL RUNNABLE CODE HERE]
-```
-
-
-
-**Version information**
-
- - Output from `versioninfo()`:
-```julia
-julia> versioninfo()
-[YOUR OUTPUT HERE]
-```
- - Output from `using Pkg; Pkg.status("ITensors")`:
-```julia
-julia> using Pkg; Pkg.status("ITensors")
-[YOUR OUTPUT HERE]
-```
diff --git a/.github/ISSUE_TEMPLATE/07_ITensorMakie_feature_request.md b/.github/ISSUE_TEMPLATE/07_ITensorMakie_feature_request.md
deleted file mode 100644
index 8590b1f694..0000000000
--- a/.github/ISSUE_TEMPLATE/07_ITensorMakie_feature_request.md
+++ /dev/null
@@ -1,24 +0,0 @@
----
-name: ITensorMakie.jl feature request
-about: Suggest an idea for ITensorMakie.jl
-title: "[ITensorMakie] [ENHANCEMENT] YOUR SHORT DESCRIPTION OF THE FEATURE REQUEST HERE"
-labels: ["ITensorMakie", "enhancement"]
-assignees: ''
-
----
-
-**Is your feature request related to a problem? Please describe.**
-
-A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]
-
-**Describe the solution you'd like**
-
-A clear and concise description of what you want to happen.
-
-**Describe alternatives you've considered**
-
-A clear and concise description of any alternative solutions or features you've considered.
-
-**Additional context**
-
-Add any other context or screenshots about the feature request here.
diff --git a/.github/ISSUE_TEMPLATE/08_ITensorGLMakie_bug_report.md b/.github/ISSUE_TEMPLATE/08_ITensorGLMakie_bug_report.md
deleted file mode 100644
index 4405ece1ad..0000000000
--- a/.github/ISSUE_TEMPLATE/08_ITensorGLMakie_bug_report.md
+++ /dev/null
@@ -1,60 +0,0 @@
----
-name: ITensorGLMakie.jl bug report
-about: Create a bug report to help us improve ITensorGLMakie.jl
-title: "[ITensorGLMakie] [BUG] YOUR SHORT DESCRIPTION OF THE BUG HERE"
-labels: ["ITensorGLMakie", "bug"]
-assignees: ''
-
----
-
-**Description of bug**
-
-Please give a brief description of the bug or unexpected behavior here.
-
-**Minimal code demonstrating the bug or unexpected behavior**
-
-If applicable, provide a minimal code that can be run to demonstrate the bug or unexpected behavior.
-
-If you are unable to construct a minimal code that demonstrates the bug or unexpected behavior, provide detailed steps for how to reproduce the behavior you are seeing.
-
-Minimal runnable code
-
-```julia
-[YOUR MINIMAL RUNNABLE CODE HERE]
-```
-
-
-
-**Expected output or behavior**
-
-Describe what you expected to happen.
-
-If you provided a minimal code that can be run to demonstrate the bug or unexpected behavior, describe what you expected the output would be.
-
-
-**Actual output or behavior**
-
-Describe what actually happened.
-
-If you provided a minimal code that demonstrates the bug or unexpected behavior, provide the output you get from that code. If the code leads to an error or warning, include the full error or warning below.
-
-Output of minimal runnable code
-
-```julia
-[OUTPUT OF YOUR MINIMAL RUNNABLE CODE HERE]
-```
-
-
-
-**Version information**
-
- - Output from `versioninfo()`:
-```julia
-julia> versioninfo()
-[YOUR OUTPUT HERE]
-```
- - Output from `using Pkg; Pkg.status("ITensors")`:
-```julia
-julia> using Pkg; Pkg.status("ITensors")
-[YOUR OUTPUT HERE]
-```
diff --git a/.github/ISSUE_TEMPLATE/08_ITensorGLMakie_feature_request.md b/.github/ISSUE_TEMPLATE/08_ITensorGLMakie_feature_request.md
deleted file mode 100644
index a97fa92829..0000000000
--- a/.github/ISSUE_TEMPLATE/08_ITensorGLMakie_feature_request.md
+++ /dev/null
@@ -1,24 +0,0 @@
----
-name: ITensorGLMakie.jl feature request
-about: Suggest an idea for ITensorGLMakie.jl
-title: "[ITensorGLMakie] [ENHANCEMENT] YOUR SHORT DESCRIPTION OF THE FEATURE REQUEST HERE"
-labels: ["ITensorGLMakie", "enhancement"]
-assignees: ''
-
----
-
-**Is your feature request related to a problem? Please describe.**
-
-A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]
-
-**Describe the solution you'd like**
-
-A clear and concise description of what you want to happen.
-
-**Describe alternatives you've considered**
-
-A clear and concise description of any alternative solutions or features you've considered.
-
-**Additional context**
-
-Add any other context or screenshots about the feature request here.
diff --git a/.github/ISSUE_TEMPLATE/generate_issue_templates/generate_issue_templates.jl b/.github/ISSUE_TEMPLATE/generate_issue_templates/generate_issue_templates.jl
index af1bee1b64..aaaa985ff5 100644
--- a/.github/ISSUE_TEMPLATE/generate_issue_templates/generate_issue_templates.jl
+++ b/.github/ISSUE_TEMPLATE/generate_issue_templates/generate_issue_templates.jl
@@ -2,27 +2,9 @@ using FileUtils
template_package_name = "PACKAGE"
-package_names = [
- "ITensors",
- "NDTensors",
- "ITensorGPU",
- "ITensorGaussianMPS",
- "ITensorVisualizationBase",
- "ITensorUnicodePlots",
- "ITensorMakie",
- "ITensorGLMakie",
-]
+package_names = ["ITensors", "NDTensors"]
-package_ordering = Dict([
- "ITensors" => 1,
- "NDTensors" => 2,
- "ITensorGPU" => 3,
- "ITensorGaussianMPS" => 4,
- "ITensorVisualizationBase" => 5,
- "ITensorUnicodePlots" => 6,
- "ITensorMakie" => 7,
- "ITensorGLMakie" => 8,
-])
+package_ordering = Dict(["ITensors" => 1, "NDTensors" => 2])
function bug_report_file(package_name::String)
return "$(package_name)_bug_report.md"
diff --git a/.github/workflows/CompatHelper.yml b/.github/workflows/CompatHelper.yml
index 5b1520abe6..2a47aec48e 100644
--- a/.github/workflows/CompatHelper.yml
+++ b/.github/workflows/CompatHelper.yml
@@ -12,4 +12,4 @@ jobs:
- name: CompatHelper.main()
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
- run: julia -e 'using CompatHelper; CompatHelper.main(; subdirs=["", "ITensorGaussianMPS", "ITensorGLMakie", "ITensorMakie", "ITensorUnicodePlots", "ITensorVisualizationBase", "NDTensors"])'
+ run: julia -e 'using CompatHelper; CompatHelper.main(; subdirs=["", "NDTensors"])'
diff --git a/.github/workflows/test_itensorgaussianmps_ubuntu.yml b/.github/workflows/test_itensorgaussianmps_ubuntu.yml
deleted file mode 100644
index ad8e6c4f0b..0000000000
--- a/.github/workflows/test_itensorgaussianmps_ubuntu.yml
+++ /dev/null
@@ -1,44 +0,0 @@
-name: Run ITensorGaussianMPS tests (Ubuntu)
-
-on:
- push:
- branches:
- - main
- tags: '*'
- pull_request:
-
-jobs:
- test:
- name: Julia ${{ matrix.version }} - ${{ matrix.os }} - ${{ matrix.arch }} - ${{ matrix.threads }} thread(s)
- runs-on: ${{ matrix.os }}
- env:
- JULIA_NUM_THREADS: ${{ matrix.threads }}
- strategy:
- matrix:
- version:
- - '1.6'
- - '1'
- os:
- - ubuntu-latest
- threads:
- - '1'
- arch:
- - x64
- exclude:
- # MacOS not available on x86
- - {os: 'macOS-latest', arch: 'x86'}
- steps:
- - uses: actions/checkout@v2
- - uses: julia-actions/setup-julia@latest
- with:
- version: ${{ matrix.version }}
- arch: ${{ matrix.arch }}
- - name: Install Julia dependencies and run tests
- shell: julia --depwarn=yes {0}
- run: |
- using Pkg;
- Pkg.activate(temp=true)
- Pkg.develop(path="./NDTensors");
- Pkg.develop(path=".");
- Pkg.develop(path="./ITensorGaussianMPS");
- Pkg.test("ITensorGaussianMPS");
diff --git a/.github/workflows/test_itensorglmakie_ubuntu.yml b/.github/workflows/test_itensorglmakie_ubuntu.yml
deleted file mode 100644
index 0cee260fc4..0000000000
--- a/.github/workflows/test_itensorglmakie_ubuntu.yml
+++ /dev/null
@@ -1,49 +0,0 @@
-name: Run ITensorGLMakie tests (Ubuntu)
-
-on:
- push:
- branches:
- - main
- tags: '*'
- pull_request:
-
-jobs:
- test:
- name: Julia ${{ matrix.version }} - ${{ matrix.os }} - ${{ matrix.arch }} - ${{ matrix.threads }} thread(s)
- runs-on: ${{ matrix.os }}
- env:
- JULIA_NUM_THREADS: ${{ matrix.threads }}
- strategy:
- matrix:
- version:
- - '1.6'
- - '1'
- os:
- - ubuntu-20.04
- threads:
- - '1'
- arch:
- - x64
- exclude:
- # MacOS not available on x86
- - {os: 'macOS-latest', arch: 'x86'}
- steps:
- - uses: actions/checkout@v2
- - uses: julia-actions/setup-julia@latest
- with:
- version: ${{ matrix.version }}
- arch: ${{ matrix.arch }}
- - uses: actions/cache@v1
- env:
- cache-name: cache-artifacts
- with:
- path: ~/.julia/artifacts
- key: ${{ runner.os }}-test-${{ env.cache-name }}-${{ hashFiles('**/Project.toml') }}
- restore-keys: |
- ${{ runner.os }}-test-${{ env.cache-name }}-
- ${{ runner.os }}-test-
- ${{ runner.os }}-
- - run: sudo apt-get update && sudo apt-get install -y xorg-dev mesa-utils xvfb libgl1 freeglut3-dev libxrandr-dev libxinerama-dev libxcursor-dev libxi-dev libxext-dev
- - name: Install Julia dependencies and run tests
- run: |
- JULIA_REFERENCETESTS_UPDATE=true DISPLAY=:0 xvfb-run -s '-screen 0 1024x768x24' julia --depwarn=yes -e 'using Pkg; Pkg.activate(temp=true); Pkg.develop(path="./NDTensors"); Pkg.develop(path="."); Pkg.develop(path="./ITensorVisualizationBase"); Pkg.develop(path="./ITensorMakie"); Pkg.develop(path="./ITensorGLMakie"); Pkg.test("ITensorGLMakie")'
diff --git a/.github/workflows/test_itensorunicodeplots_ubuntu.yml b/.github/workflows/test_itensorunicodeplots_ubuntu.yml
deleted file mode 100644
index d496ca45c7..0000000000
--- a/.github/workflows/test_itensorunicodeplots_ubuntu.yml
+++ /dev/null
@@ -1,45 +0,0 @@
-name: Run ITensorUnicodePlots tests (Ubuntu)
-
-on:
- push:
- branches:
- - main
- tags: '*'
- pull_request:
-
-jobs:
- test:
- name: Julia ${{ matrix.version }} - ${{ matrix.os }} - ${{ matrix.arch }} - ${{ matrix.threads }} thread(s)
- runs-on: ${{ matrix.os }}
- env:
- JULIA_NUM_THREADS: ${{ matrix.threads }}
- strategy:
- matrix:
- version:
- - '1.6'
- - '1'
- os:
- - ubuntu-latest
- threads:
- - '1'
- arch:
- - x64
- exclude:
- # MacOS not available on x86
- - {os: 'macOS-latest', arch: 'x86'}
- steps:
- - uses: actions/checkout@v2
- - uses: julia-actions/setup-julia@latest
- with:
- version: ${{ matrix.version }}
- arch: ${{ matrix.arch }}
- - name: Install Julia dependencies and run tests
- shell: julia --depwarn=yes {0}
- run: |
- using Pkg;
- Pkg.activate(temp=true);
- Pkg.develop(path="./NDTensors");
- Pkg.develop(path=".");
- Pkg.develop(path="./ITensorVisualizationBase");
- Pkg.develop(path="./ITensorUnicodePlots");
- Pkg.test("ITensorUnicodePlots")
diff --git a/.github/workflows/test_itensorvisualization_ubuntu.yml b/.github/workflows/test_itensorvisualization_ubuntu.yml
deleted file mode 100644
index 0c5ac112bc..0000000000
--- a/.github/workflows/test_itensorvisualization_ubuntu.yml
+++ /dev/null
@@ -1,44 +0,0 @@
-name: Run ITensorVisualizationBase tests (Ubuntu)
-
-on:
- push:
- branches:
- - main
- tags: '*'
- pull_request:
-
-jobs:
- test:
- name: Julia ${{ matrix.version }} - ${{ matrix.os }} - ${{ matrix.arch }} - ${{ matrix.threads }} thread(s)
- runs-on: ${{ matrix.os }}
- env:
- JULIA_NUM_THREADS: ${{ matrix.threads }}
- strategy:
- matrix:
- version:
- - '1.6'
- - '1'
- os:
- - ubuntu-latest
- threads:
- - '1'
- arch:
- - x64
- exclude:
- # MacOS not available on x86
- - {os: 'macOS-latest', arch: 'x86'}
- steps:
- - uses: actions/checkout@v2
- - uses: julia-actions/setup-julia@latest
- with:
- version: ${{ matrix.version }}
- arch: ${{ matrix.arch }}
- - name: Install Julia dependencies and run tests
- shell: julia --depwarn=yes {0}
- run: |
- using Pkg;
- Pkg.activate(temp=true);
- Pkg.develop(path="./NDTensors");
- Pkg.develop(path=".");
- Pkg.develop(path="./ITensorVisualizationBase");
- Pkg.test("ITensorVisualizationBase")
diff --git a/.gitignore b/.gitignore
index ac4095b6a1..0036a0b19d 100644
--- a/.gitignore
+++ b/.gitignore
@@ -9,16 +9,6 @@ benchmark/mult
benchmark/*.json
docs/Manifest.toml
docs/build/
-ITensorUnicodePlots/Manifest.toml
-ITensorUnicodePlots/test/Manifest.toml
-ITensorMakie/Manifest.toml
-ITensorMakie/test/Manifest.toml
-ITensorGLMakie/Manifest.toml
-ITensorGLMakie/test/Manifest.toml
-ITensorGPU/Manifest.toml
-ITensorGPU/test/Manifest.toml
-ITensorVisualizationBase/Manifest.toml
-ITensorVisualizationBase/test/Manifest.toml
NDTensors/Manifest.toml
NDTensors/test/Manifest.toml
precompile/tmp
diff --git a/ITensorGLMakie/.JuliaFormatter.toml b/ITensorGLMakie/.JuliaFormatter.toml
deleted file mode 100644
index 08f664cdb9..0000000000
--- a/ITensorGLMakie/.JuliaFormatter.toml
+++ /dev/null
@@ -1,2 +0,0 @@
-style = "blue"
-indent = 2
diff --git a/ITensorGLMakie/LICENSE b/ITensorGLMakie/LICENSE
deleted file mode 100644
index 555297e50a..0000000000
--- a/ITensorGLMakie/LICENSE
+++ /dev/null
@@ -1,201 +0,0 @@
- Apache License
- Version 2.0, January 2004
- http://www.apache.org/licenses/
-
- TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
-
- 1. Definitions.
-
- "License" shall mean the terms and conditions for use, reproduction,
- and distribution as defined by Sections 1 through 9 of this document.
-
- "Licensor" shall mean the copyright owner or entity authorized by
- the copyright owner that is granting the License.
-
- "Legal Entity" shall mean the union of the acting entity and all
- other entities that control, are controlled by, or are under common
- control with that entity. For the purposes of this definition,
- "control" means (i) the power, direct or indirect, to cause the
- direction or management of such entity, whether by contract or
- otherwise, or (ii) ownership of fifty percent (50%) or more of the
- outstanding shares, or (iii) beneficial ownership of such entity.
-
- "You" (or "Your") shall mean an individual or Legal Entity
- exercising permissions granted by this License.
-
- "Source" form shall mean the preferred form for making modifications,
- including but not limited to software source code, documentation
- source, and configuration files.
-
- "Object" form shall mean any form resulting from mechanical
- transformation or translation of a Source form, including but
- not limited to compiled object code, generated documentation,
- and conversions to other media types.
-
- "Work" shall mean the work of authorship, whether in Source or
- Object form, made available under the License, as indicated by a
- copyright notice that is included in or attached to the work
- (an example is provided in the Appendix below).
-
- "Derivative Works" shall mean any work, whether in Source or Object
- form, that is based on (or derived from) the Work and for which the
- editorial revisions, annotations, elaborations, or other modifications
- represent, as a whole, an original work of authorship. For the purposes
- of this License, Derivative Works shall not include works that remain
- separable from, or merely link (or bind by name) to the interfaces of,
- the Work and Derivative Works thereof.
-
- "Contribution" shall mean any work of authorship, including
- the original version of the Work and any modifications or additions
- to that Work or Derivative Works thereof, that is intentionally
- submitted to Licensor for inclusion in the Work by the copyright owner
- or by an individual or Legal Entity authorized to submit on behalf of
- the copyright owner. For the purposes of this definition, "submitted"
- means any form of electronic, verbal, or written communication sent
- to the Licensor or its representatives, including but not limited to
- communication on electronic mailing lists, source code control systems,
- and issue tracking systems that are managed by, or on behalf of, the
- Licensor for the purpose of discussing and improving the Work, but
- excluding communication that is conspicuously marked or otherwise
- designated in writing by the copyright owner as "Not a Contribution."
-
- "Contributor" shall mean Licensor and any individual or Legal Entity
- on behalf of whom a Contribution has been received by Licensor and
- subsequently incorporated within the Work.
-
- 2. Grant of Copyright License. Subject to the terms and conditions of
- this License, each Contributor hereby grants to You a perpetual,
- worldwide, non-exclusive, no-charge, royalty-free, irrevocable
- copyright license to reproduce, prepare Derivative Works of,
- publicly display, publicly perform, sublicense, and distribute the
- Work and such Derivative Works in Source or Object form.
-
- 3. Grant of Patent License. Subject to the terms and conditions of
- this License, each Contributor hereby grants to You a perpetual,
- worldwide, non-exclusive, no-charge, royalty-free, irrevocable
- (except as stated in this section) patent license to make, have made,
- use, offer to sell, sell, import, and otherwise transfer the Work,
- where such license applies only to those patent claims licensable
- by such Contributor that are necessarily infringed by their
- Contribution(s) alone or by combination of their Contribution(s)
- with the Work to which such Contribution(s) was submitted. If You
- institute patent litigation against any entity (including a
- cross-claim or counterclaim in a lawsuit) alleging that the Work
- or a Contribution incorporated within the Work constitutes direct
- or contributory patent infringement, then any patent licenses
- granted to You under this License for that Work shall terminate
- as of the date such litigation is filed.
-
- 4. Redistribution. You may reproduce and distribute copies of the
- Work or Derivative Works thereof in any medium, with or without
- modifications, and in Source or Object form, provided that You
- meet the following conditions:
-
- (a) You must give any other recipients of the Work or
- Derivative Works a copy of this License; and
-
- (b) You must cause any modified files to carry prominent notices
- stating that You changed the files; and
-
- (c) You must retain, in the Source form of any Derivative Works
- that You distribute, all copyright, patent, trademark, and
- attribution notices from the Source form of the Work,
- excluding those notices that do not pertain to any part of
- the Derivative Works; and
-
- (d) If the Work includes a "NOTICE" text file as part of its
- distribution, then any Derivative Works that You distribute must
- include a readable copy of the attribution notices contained
- within such NOTICE file, excluding those notices that do not
- pertain to any part of the Derivative Works, in at least one
- of the following places: within a NOTICE text file distributed
- as part of the Derivative Works; within the Source form or
- documentation, if provided along with the Derivative Works; or,
- within a display generated by the Derivative Works, if and
- wherever such third-party notices normally appear. The contents
- of the NOTICE file are for informational purposes only and
- do not modify the License. You may add Your own attribution
- notices within Derivative Works that You distribute, alongside
- or as an addendum to the NOTICE text from the Work, provided
- that such additional attribution notices cannot be construed
- as modifying the License.
-
- You may add Your own copyright statement to Your modifications and
- may provide additional or different license terms and conditions
- for use, reproduction, or distribution of Your modifications, or
- for any such Derivative Works as a whole, provided Your use,
- reproduction, and distribution of the Work otherwise complies with
- the conditions stated in this License.
-
- 5. Submission of Contributions. Unless You explicitly state otherwise,
- any Contribution intentionally submitted for inclusion in the Work
- by You to the Licensor shall be under the terms and conditions of
- this License, without any additional terms or conditions.
- Notwithstanding the above, nothing herein shall supersede or modify
- the terms of any separate license agreement you may have executed
- with Licensor regarding such Contributions.
-
- 6. Trademarks. This License does not grant permission to use the trade
- names, trademarks, service marks, or product names of the Licensor,
- except as required for reasonable and customary use in describing the
- origin of the Work and reproducing the content of the NOTICE file.
-
- 7. Disclaimer of Warranty. Unless required by applicable law or
- agreed to in writing, Licensor provides the Work (and each
- Contributor provides its Contributions) on an "AS IS" BASIS,
- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
- implied, including, without limitation, any warranties or conditions
- of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
- PARTICULAR PURPOSE. You are solely responsible for determining the
- appropriateness of using or redistributing the Work and assume any
- risks associated with Your exercise of permissions under this License.
-
- 8. Limitation of Liability. In no event and under no legal theory,
- whether in tort (including negligence), contract, or otherwise,
- unless required by applicable law (such as deliberate and grossly
- negligent acts) or agreed to in writing, shall any Contributor be
- liable to You for damages, including any direct, indirect, special,
- incidental, or consequential damages of any character arising as a
- result of this License or out of the use or inability to use the
- Work (including but not limited to damages for loss of goodwill,
- work stoppage, computer failure or malfunction, or any and all
- other commercial damages or losses), even if such Contributor
- has been advised of the possibility of such damages.
-
- 9. Accepting Warranty or Additional Liability. While redistributing
- the Work or Derivative Works thereof, You may choose to offer,
- and charge a fee for, acceptance of support, warranty, indemnity,
- or other liability obligations and/or rights consistent with this
- License. However, in accepting such obligations, You may act only
- on Your own behalf and on Your sole responsibility, not on behalf
- of any other Contributor, and only if You agree to indemnify,
- defend, and hold each Contributor harmless for any liability
- incurred by, or claims asserted against, such Contributor by reason
- of your accepting any such warranty or additional liability.
-
- END OF TERMS AND CONDITIONS
-
- APPENDIX: How to apply the Apache License to your work.
-
- To apply the Apache License to your work, attach the following
- boilerplate notice, with the fields enclosed by brackets "[]"
- replaced with your own identifying information. (Don't include
- the brackets!) The text should be enclosed in the appropriate
- comment syntax for the file format. We also recommend that a
- file or class name and description of purpose be included on the
- same "printed page" as the copyright notice for easier
- identification within third-party archives.
-
- Copyright 2019 The Simons Foundation, Inc. - All Rights Reserved.
-
- Licensed under the Apache License, Version 2.0 (the "License");
- you may not use this file except in compliance with the License.
- You may obtain a copy of the License at
-
- http://www.apache.org/licenses/LICENSE-2.0
-
- Unless required by applicable law or agreed to in writing, software
- distributed under the License is distributed on an "AS IS" BASIS,
- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- See the License for the specific language governing permissions and
- limitations under the License.
diff --git a/ITensorGLMakie/NEWS.md b/ITensorGLMakie/NEWS.md
deleted file mode 100644
index 3c5d0dab23..0000000000
--- a/ITensorGLMakie/NEWS.md
+++ /dev/null
@@ -1,25 +0,0 @@
-This file is a (mostly) comprehensive list of changes made in each release of ITensorGLMakie.jl. For a completely comprehensive but more verbose list, see the [commit history on GitHub](https://github.com/ITensor/ITensors.jl/commits/main/ITensorGLMakie).
-
-While we are in v0.x of the package, we will follow the convention that updating from v0.x.y to v0.x.(y+1) (for example v0.1.15 to v0.1.16) should not break your code, unless you are using internal/undocumented features of the code, while updating from `v0.x.y` to `v0.(x+1).y` might break your code, though we will try to add deprecation warnings when possible, such as for simple cases where the name of a function changes.
-
-Note that as of Julia v1.5, in order to see deprecation warnings you will need to start Julia with `julia --depwarn=yes` (previously they were on by default). Please run your code like this before upgrading between minor versions of the code (for example from v0.1.41 to v0.2.0).
-
-After we release v1 of the package, we will start following [semantic versioning](https://semver.org).
-
-ITensorGLMakie v0.1.1 Release Notes
-===================================
-
-Bugs:
-
-Enhancements:
-
-- Update compats (#1031)
-
-ITensorGLMakie v0.1.0 Release Notes
-===================================
-
-Bugs:
-
-Enhancements:
-
-- Register ITensorGLMakie package, code in ITensors.jl repository
diff --git a/ITensorGLMakie/Project.toml b/ITensorGLMakie/Project.toml
deleted file mode 100644
index 2bd15e9a10..0000000000
--- a/ITensorGLMakie/Project.toml
+++ /dev/null
@@ -1,15 +0,0 @@
-name = "ITensorGLMakie"
-uuid = "3f718f31-6db8-4f43-a433-67cb5c73363e"
-authors = ["Matthew Fishman "]
-version = "0.1.2"
-
-[deps]
-GLMakie = "e9467ef8-e4e7-5192-8a1a-b1aee30e663a"
-ITensorMakie = "72ca75eb-df6f-4d6b-80c5-d5eab17be3f9"
-Reexport = "189a3867-3050-52da-a836-e630ba90ab69"
-
-[compat]
-GLMakie = "0.9"
-ITensorMakie = "0.1.2"
-Reexport = "1.2.2"
-julia = "1.6"
diff --git a/ITensorGLMakie/examples/ex_2d_tensor_network_layered.jl b/ITensorGLMakie/examples/ex_2d_tensor_network_layered.jl
deleted file mode 100644
index 09ad91f093..0000000000
--- a/ITensorGLMakie/examples/ex_2d_tensor_network_layered.jl
+++ /dev/null
@@ -1,10 +0,0 @@
-using ITensors
-using ITensorGLMakie
-using Graphs
-using LayeredLayouts
-
-tn = itensornetwork(grid((4, 4)); linkspaces=3)
-layout(g) = layered_layout(solve_positions(Zarate(), g))
-@visualize fig tn arrow_show = true layout = layout
-
-fig
diff --git a/ITensorGLMakie/examples/ex_contraction_sequence.jl b/ITensorGLMakie/examples/ex_contraction_sequence.jl
deleted file mode 100644
index 60835e3434..0000000000
--- a/ITensorGLMakie/examples/ex_contraction_sequence.jl
+++ /dev/null
@@ -1,20 +0,0 @@
-using ITensors
-using ITensorGLMakie
-using Graphs
-
-using ITensors.ContractionSequenceOptimization: optimal_contraction_sequence
-
-N = 5
-g = Graph(N)
-g_edges = [2 => 3, 1 => 4, 1 => 5, 4 => 5]
-for e in g_edges
- add_edge!(g, e)
-end
-
-A = itensornetwork(g; linkspaces=5)
-sequence = optimal_contraction_sequence(A)
-edge_labels = (; tags=true)
-R = @visualize_sequence fig ITensors.contract(A; sequence=sequence) edge_labels =
- edge_labels
-
-fig
diff --git a/ITensorGLMakie/examples/ex_dmrg.jl b/ITensorGLMakie/examples/ex_dmrg.jl
deleted file mode 100644
index 64be720c06..0000000000
--- a/ITensorGLMakie/examples/ex_dmrg.jl
+++ /dev/null
@@ -1,34 +0,0 @@
-using ITensors
-using ITensorGLMakie
-
-N = 10
-sites(n) = Index([QN("Sz", 0) => 1, QN("Sz", 1) => 1]; tags="S=1/2,Site,n=$n")
-l(n) = Index([QN("Sz", 0) => 10, QN("Sz", 1) => 10]; tags="Link,l=$n")
-h(n) = Index([QN("Sz", 0) => 5, QN("Sz", 1) => 5]; tags="ham,Link,l=$n")
-s⃗ = [sites(n) for n in 1:N]
-l⃗ = [l(n) for n in 1:(N - 1)]
-h⃗ = [h(n) for n in 1:(N - 1)]
-
-# Add some more indices between two of the tensors
-x = Index([QN("Sz", 0) => 2]; tags="X")
-y = Index([QN("Sz", 0) => 2]; tags="Y")
-
-n = 2
-ψn1n2 = randomITensor(l⃗[n - 1], s⃗[n], s⃗[n + 1], l⃗[n + 1], dag(x), dag(y))
-hn1 = randomITensor(dag(h⃗[n - 1]), s⃗[n]', dag(s⃗[n]), h⃗[n], x, y)
-hn2 = randomITensor(dag(h⃗[n]), s⃗[n + 1]', dag(s⃗[n + 1]), h⃗[n + 1])
-ELn0 = randomITensor(l⃗[n - 1]', h⃗[n - 1], dag(l⃗[n - 1]))
-ERn2 = randomITensor(l⃗[n + 1]', dag(h⃗[n + 1]), dag(l⃗[n + 1]))
-
-edge_labels = (; plevs=true)
-
-R = @visualize fig1 ELn0 * ψn1n2 * hn1 * hn2 * ERn2 edge_labels = edge_labels vertex_size =
- 50
-@show R ≈ ELn0 * ψn1n2 * hn1 * hn2 * ERn2
-
-# Split it up into multiple contractions
-R1 = @visualize fig2 ELn0 * ψn1n2 * hn1 edge_labels = edge_labels vertex_size = 50
-R2 = @visualize fig3 R1 * hn2 * ERn2 edge_labels = edge_labels vertex_size = 50
-@show R2 ≈ ELn0 * ψn1n2 * hn1 * hn2 * ERn2
-
-fig1, fig2, fig3
diff --git a/ITensorGLMakie/examples/ex_grid_layout.jl b/ITensorGLMakie/examples/ex_grid_layout.jl
deleted file mode 100644
index be01952885..0000000000
--- a/ITensorGLMakie/examples/ex_grid_layout.jl
+++ /dev/null
@@ -1,12 +0,0 @@
-using ITensors
-using ITensorGLMakie
-using Graphs
-using GeometryBasics
-using NetworkLayout
-
-N = 10
-g = grid((N,))
-tn = itensornetwork(g; linkspaces=10, sitespaces=2)
-@visualize fig tn siteinds_direction = Point(1, -0.5) layout = SquareGrid(; cols=1)
-
-fig
diff --git a/ITensorGLMakie/examples/ex_itensor_graph_makie.jl b/ITensorGLMakie/examples/ex_itensor_graph_makie.jl
deleted file mode 100644
index 671f161344..0000000000
--- a/ITensorGLMakie/examples/ex_itensor_graph_makie.jl
+++ /dev/null
@@ -1,9 +0,0 @@
-using ITensors
-using ITensorGLMakie
-using Graphs
-
-g = grid((5,))
-tn = itensornetwork(g; linkspaces=10, sitespaces=2)
-@visualize fig tn
-
-fig
diff --git a/ITensorGLMakie/examples/ex_qn_mps.jl b/ITensorGLMakie/examples/ex_qn_mps.jl
deleted file mode 100644
index f4c1b1ba80..0000000000
--- a/ITensorGLMakie/examples/ex_qn_mps.jl
+++ /dev/null
@@ -1,13 +0,0 @@
-using ITensors
-using ITensorGLMakie
-
-s = siteinds("S=1/2", 5; conserve_qns=true)
-ψ = randomMPS(s, n -> isodd(n) ? "↑" : "↓"; linkdims=2)
-orthogonalize!(ψ, 2)
-ψdag = prime(linkinds, dag(ψ))
-tn = [ψ..., ψdag...]
-
-edge_labels = (; plevs=true, qns=true)
-@visualize fig tn edge_labels = edge_labels edge_textsize = 20
-
-fig
diff --git a/ITensorGLMakie/examples/ex_quantum_circuit.jl b/ITensorGLMakie/examples/ex_quantum_circuit.jl
deleted file mode 100644
index 87fecf9875..0000000000
--- a/ITensorGLMakie/examples/ex_quantum_circuit.jl
+++ /dev/null
@@ -1,32 +0,0 @@
-using ITensors
-using ITensorGLMakie
-using LayeredLayouts
-using Graphs
-
-N = 10
-layers = 10
-ndelete = 0
-
-s = siteinds("Qubit", N)
-layer(N, start) = [("CX", i, i + 1) for i in start:2:(N - 1)]
-layer(N) = append!(layer(N, 1), layer(N, 2))
-layer_N = layer(N)
-gates = []
-for _ in 1:layers
- append!(gates, layer_N)
-end
-
-for _ in 1:ndelete
- deleteat!(gates, rand(eachindex(gates)))
-end
-
-U, s̃ = circuit_network(gates, s)
-ψ = prod(MPS(s))
-ψ̃ = prod(MPS(s̃))
-tn = [ψ, U..., ψ̃]
-
-edge_labels = (; plevs=true)
-layout(g) = layered_layout(solve_positions(Zarate(), g))
-@visualize fig tn arrow_show = true edge_labels = edge_labels layout = layout
-
-fig
diff --git a/ITensorGLMakie/examples/ex_visualize_3d.jl b/ITensorGLMakie/examples/ex_visualize_3d.jl
deleted file mode 100644
index a4e0bdf627..0000000000
--- a/ITensorGLMakie/examples/ex_visualize_3d.jl
+++ /dev/null
@@ -1,9 +0,0 @@
-using ITensors
-using ITensorGLMakie
-using Graphs
-
-tn = itensornetwork(grid((3, 3, 3)))
-edge_labels = (; dims=false)
-@visualize fig tn ndims = 3 edge_labels = edge_labels vertex_size = 400
-
-fig
diff --git a/ITensorGLMakie/examples/notest_ex_2d_circuit.jl b/ITensorGLMakie/examples/notest_ex_2d_circuit.jl
deleted file mode 100644
index 562cba6c48..0000000000
--- a/ITensorGLMakie/examples/notest_ex_2d_circuit.jl
+++ /dev/null
@@ -1,28 +0,0 @@
-using ITensors
-using ITensorGLMakie
-using Graphs
-using PastaQ: randomcircuit
-using LayeredLayouts
-
-Nx, Ny = 3, 3
-N = Nx * Ny
-# TODO: change to (Nx, Ny) with PastaQ v0.0.16
-gates = randomcircuit(
- Nx, Ny; depth=4, twoqubitgates="CX", onequbitgates="Rn", layered=false, rotated=false
-)
-
-s = siteinds("Qubit", N)
-
-U, s̃ = circuit_network(gates, s)
-ψ = MPS(s)
-ψ̃ = MPS(s̃)
-tn = [prod(ψ), U..., prod(ψ̃)]
-
-edge_labels = (; plevs=true)
-layout(g) = layered_layout(solve_positions(Zarate(), g))
-@visualize fig tn arrow_show = true edge_labels = edge_labels layout = layout edge_textsize =
- 20
-@visualize! fig[2, 1] tn ndims = 3 arrow_show = true edge_labels = edge_labels edge_textsize =
- 10
-
-fig
diff --git a/ITensorGLMakie/examples/notest_ex_qft_circuit.jl b/ITensorGLMakie/examples/notest_ex_qft_circuit.jl
deleted file mode 100644
index a627acee60..0000000000
--- a/ITensorGLMakie/examples/notest_ex_qft_circuit.jl
+++ /dev/null
@@ -1,24 +0,0 @@
-using ITensors
-using ITensorGLMakie
-using Graphs
-using PastaQ: qft
-using LayeredLayouts
-
-N = 4
-gates = qft(N)
-
-s = siteinds("Qubit", N)
-
-U, s̃ = circuit_network(gates, s)
-ψ = MPS(s)
-ψ̃ = MPS(s̃)
-tn = [ψ..., U..., ψ̃...]
-
-edge_labels = (; tags=true, plevs=true)
-layout(g) = layered_layout(solve_positions(Zarate(), g))
-@visualize fig tn arrow_show = true edge_labels = edge_labels edge_textsize = 20 layout =
- layout
-edge_labels = (; plevs=true)
-@visualize! fig[1, 2] tn ndims = 3 edge_labels = edge_labels edge_textsize = 20
-
-fig
diff --git a/ITensorGLMakie/src/ITensorGLMakie.jl b/ITensorGLMakie/src/ITensorGLMakie.jl
deleted file mode 100644
index 22b90ca9b6..0000000000
--- a/ITensorGLMakie/src/ITensorGLMakie.jl
+++ /dev/null
@@ -1,8 +0,0 @@
-module ITensorGLMakie
-
-using Reexport
-using GLMakie
-
-@reexport using ITensorMakie
-
-end # module
diff --git a/ITensorGLMakie/test/Project.toml b/ITensorGLMakie/test/Project.toml
deleted file mode 100644
index 8bc48ec53b..0000000000
--- a/ITensorGLMakie/test/Project.toml
+++ /dev/null
@@ -1,11 +0,0 @@
-[deps]
-GeometryBasics = "5c1252a2-5f33-56bf-86c9-59e7332b4326"
-Graphs = "86223c79-3864-5bf0-83f7-82e725a168b6"
-ITensorGLMakie = "3f718f31-6db8-4f43-a433-67cb5c73363e"
-ITensors = "9136182c-28ba-11e9-034c-db9fb085ebd5"
-LayeredLayouts = "f4a74d36-062a-4d48-97cd-1356bad1de4e"
-NDTensors = "23ae76d9-e61a-49c4-8f12-3f1a16adf9cf"
-NetworkLayout = "46757867-2c16-5918-afeb-47bfcb05e46a"
-Random = "9a3f8284-a2c9-5f02-9a11-845980a1fd5c"
-ReferenceTests = "324d217c-45ce-50fc-942e-d289b448e8cf"
-Test = "8dfed614-e22c-5e08-85e1-65c5234f0b40"
diff --git a/ITensorGLMakie/test/references/R.png b/ITensorGLMakie/test/references/R.png
deleted file mode 100644
index f5ad7660d3..0000000000
Binary files a/ITensorGLMakie/test/references/R.png and /dev/null differ
diff --git a/ITensorGLMakie/test/references/R.txt b/ITensorGLMakie/test/references/R.txt
deleted file mode 100644
index bf88cd6f00..0000000000
--- a/ITensorGLMakie/test/references/R.txt
+++ /dev/null
@@ -1,22 +0,0 @@
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀ERn2⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢀⠔⠁⡇⠑⠢⣀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡠⠊⠀20⠀⠀⠀⠑⠢⣀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀20⠊⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀10⠢⢄⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡠⠊⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠉⠢⢄⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡠⠊⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⣀⣉hn2⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢀⠜⠀⣀⣀⣀⣀⣀⡠⠤⠤⠤⠤2⠒⠒⠒⠒⠒⠉⠉⠉⠉⠉⠀⢀⠔⢹⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀ψn1n2⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢀⠔⠁⠀2⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡠⠊⠁⠀⠀⠀⠀⠉⠢⢄⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀10⠔⠁⠀⠀⠀⢸⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢀⡠⠊⠀⠀⠀⠀⠀⠀⠀⠀⠀2⊗2⊗2⠀⠀⠀⠀⠀⠀⠀⠀⠀⡠⠒⠁⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢀20⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠈⠒⢄⡀⠀⠀⠀⠀⡠⠊⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀⡠⠔⠁⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢈⣑hn1⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⡠⠊⠀⠀⢀⣀⣀⣀⣀⡠⠤⠤10⠤⠒⠒⠒⠒⠒⠉⠉⠉⠉⠉⠁⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀ELn0⠉⠉⠉⠁⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀2⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀20⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠃⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
\ No newline at end of file
diff --git a/ITensorGLMakie/test/references/R1.png b/ITensorGLMakie/test/references/R1.png
deleted file mode 100644
index 354701a82f..0000000000
Binary files a/ITensorGLMakie/test/references/R1.png and /dev/null differ
diff --git a/ITensorGLMakie/test/references/R1.txt b/ITensorGLMakie/test/references/R1.txt
deleted file mode 100644
index 4ac7548305..0000000000
--- a/ITensorGLMakie/test/references/R1.txt
+++ /dev/null
@@ -1,22 +0,0 @@
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀ELn0⣀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⣷⠀⠀⠉⠉⠑⠒⠒⠤⠤⢄⣀⣀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⡇⢇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠉⠉⠑⠒⠒⠤⠤10⣀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀20⠘⡄⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠉⠉⠉⠒⠒⠢⠤⠤⣀⣀⡀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⡇⠀⢱⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠈⠉⠉⠒⠒⠢⠤hn1⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⡇⠀⠀⢇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⣀⠤⠒⠉⢸⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠁⠀⠀⠘⡄⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢀⡠⠒⠉⠀⠀⠀⠀⢸⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀20⡀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢀⡠⠔⠊⠁⠀⠀⠀⠀⠀2⊗10⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢣⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢀⡠⠔⠊⠁⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢸⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠈⡆⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢀⡠2⊗2⊗2⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢸⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠸⡀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢀⡠⠔⠊⠁⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠈⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢣⠀⠀⠀⠀⠀⢀⡠⠔⠊⠁⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠈⡆⢀⡠⠔⠊⠁⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀ψn1n2⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢸⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀2⊗20⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢸⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
\ No newline at end of file
diff --git a/ITensorGLMakie/test/references/R2.png b/ITensorGLMakie/test/references/R2.png
deleted file mode 100644
index e88d9af380..0000000000
Binary files a/ITensorGLMakie/test/references/R2.png and /dev/null differ
diff --git a/ITensorGLMakie/test/references/R2.txt b/ITensorGLMakie/test/references/R2.txt
deleted file mode 100644
index 30dabf0db8..0000000000
--- a/ITensorGLMakie/test/references/R2.txt
+++ /dev/null
@@ -1,22 +0,0 @@
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀T1⣀⣀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⣷⠀⠀⠉⠉⠑⠒⠒⠤⠤⢄⣀⣀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⡇⢇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠉⠉⠑⠒⠒⠤⠤20⣀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀20⊗2⡄⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠉⠉⠉⠒⠒⠢⠤⠤⣀⣀⡀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⡇⠀⢱⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠈⠉⠉⠒⠒⠢⠤T3⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⡇⠀⠀⢇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⣀⠤⠒⠉⢸⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠁⠀⠀⠘⡄⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢀⡠⠒⠉⠀⠀⠀⠀⢸⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀2⊗10⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢀⡠⠔⠊⠁⠀⠀⠀⠀⠀⠀20⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢣⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢀⡠⠔⠊⠁⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢸⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠈⡆⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢀⡠⠔10⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢸⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠸⡀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢀⡠⠔⠊⠁⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠈⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢣⠀⠀⠀⠀⠀⢀⡠⠔⠊⠁⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠈⡆⢀⡠⠔⠊⠁⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀T2⠁⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢸⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀2⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢸⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
\ No newline at end of file
diff --git a/ITensorGLMakie/test/references/T.png b/ITensorGLMakie/test/references/T.png
deleted file mode 100644
index 2b0604227a..0000000000
Binary files a/ITensorGLMakie/test/references/T.png and /dev/null differ
diff --git a/ITensorGLMakie/test/references/grid.png b/ITensorGLMakie/test/references/grid.png
deleted file mode 100644
index 2e52dbfd91..0000000000
Binary files a/ITensorGLMakie/test/references/grid.png and /dev/null differ
diff --git a/ITensorGLMakie/test/references/tn.png b/ITensorGLMakie/test/references/tn.png
deleted file mode 100644
index 276cc6b9df..0000000000
Binary files a/ITensorGLMakie/test/references/tn.png and /dev/null differ
diff --git a/ITensorGLMakie/test/references/tn.txt b/ITensorGLMakie/test/references/tn.txt
deleted file mode 100644
index 863ad1c718..0000000000
--- a/ITensorGLMakie/test/references/tn.txt
+++ /dev/null
@@ -1,22 +0,0 @@
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀tn₅⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢀⠔⠁⡇⠑⠢⣀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡠⠊⠀20⠀⠀⠀⠑⠢⣀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀20⠊⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀10⠢⢄⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡠⠊⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠉⠢⢄⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡠⠊⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⣀⣉tn₄⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢀⠜⠀⣀⣀⣀⣀⣀⡠⠤⠤⠤⠤2⠒⠒⠒⠒⠒⠉⠉⠉⠉⠉⠀⢀⠔⢹⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢀tn₂⣉⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢀⠔⠁⠀2⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡠⠊⠁⠀⠀⠀⠀⠉⠢⢄⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀10⠔⠁⠀⠀⠀⢸⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢀⡠⠊⠀⠀⠀⠀⠀⠀⠀⠀⠀2⊗2⊗2⠀⠀⠀⠀⠀⠀⠀⠀⠀⡠⠒⠁⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢀20⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠈⠒⢄⡀⠀⠀⠀⠀⡠⠊⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀⡠⠔⠁⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢈⣑tn₃⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⡠⠊⠀⠀⢀⣀⣀⣀⣀⡠⠤⠤10⠤⠒⠒⠒⠒⠒⠉⠉⠉⠉⠉⠁⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀tn₁⠉⠉⠉⠁⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀2⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀20⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠃⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
\ No newline at end of file
diff --git a/ITensorGLMakie/test/runtests.jl b/ITensorGLMakie/test/runtests.jl
deleted file mode 100644
index 61e7704fca..0000000000
--- a/ITensorGLMakie/test/runtests.jl
+++ /dev/null
@@ -1,14 +0,0 @@
-using ITensors
-using ITensorGLMakie
-using Test
-
-starts_and_ends_with(file, st, en) = startswith(file, st) && endswith(file, en)
-starts_and_ends_with(st, en) = file -> starts_and_ends_with(file, st, en)
-
-test_path = joinpath(@__DIR__)
-test_files = filter(starts_and_ends_with("test_", ".jl"), readdir(test_path))
-@testset "ITensorGLMakie.jl" for file in test_files
- file_path = joinpath(test_path, file)
- println("Running test $(file_path)")
- include(file_path)
-end
diff --git a/ITensorGLMakie/test/test_basics.jl b/ITensorGLMakie/test/test_basics.jl
deleted file mode 100644
index 36f639b895..0000000000
--- a/ITensorGLMakie/test/test_basics.jl
+++ /dev/null
@@ -1,60 +0,0 @@
-using ITensors
-using ITensorGLMakie
-using ReferenceTests
-using Test
-
-@testset "Basic test for ITensorGLMakie" begin
- extension = "png"
-
- N = 10
- s(n) = Index([QN("Sz", 0) => 1, QN("Sz", 1) => 1]; tags="S=1/2,Site,n=$n")
- l(n) = Index([QN("Sz", 0) => 10, QN("Sz", 1) => 10]; tags="Link,l=$n")
- h(n) = Index([QN("Sz", 0) => 5, QN("Sz", 1) => 5]; tags="ham,Link,l=$n")
- s⃗ = [s(n) for n in 1:N]
- l⃗ = [l(n) for n in 1:(N - 1)]
- h⃗ = [h(n) for n in 1:(N - 1)]
-
- # Add some more indices between two of the tensors
- x = Index([QN("Sz", 0) => 2]; tags="X")
- y = Index([QN("Sz", 0) => 2]; tags="Y")
-
- n = 2
- ψn1n2 = randomITensor(l⃗[n - 1], s⃗[n], s⃗[n + 1], l⃗[n + 1], dag(x), dag(y))
- hn1 = randomITensor(dag(h⃗[n - 1]), s⃗[n]', dag(s⃗[n]), h⃗[n], x, y)
- hn2 = randomITensor(dag(h⃗[n]), s⃗[n + 1]', dag(s⃗[n + 1]), h⃗[n + 1])
- ELn0 = randomITensor(l⃗[n - 1]', h⃗[n - 1], dag(l⃗[n - 1]))
- ERn2 = randomITensor(l⃗[n + 1]', dag(h⃗[n + 1]), dag(l⃗[n + 1]))
-
- tn = [ELn0, ψn1n2, hn1, hn2, ERn2]
-
- R = @visualize figR ELn0 * ψn1n2 * hn1 * hn2 * ERn2
- R1 = @visualize figR1 ELn0 * ψn1n2 * hn1
- R2 = @visualize figR2 R1 * hn2 * ERn2 vertex_labels = ["T1", "T2", "T3"]
- T = @visualize figT ELn0
-
- @test R ≈ ELn0 * ψn1n2 * hn1 * hn2 * ERn2
- @test R1 ≈ ELn0 * ψn1n2 * hn1
- @test R2 ≈ ELn0 * ψn1n2 * hn1 * hn2 * ERn2
- @test T == ELn0
-
- fig_tn = @visualize_noeval tn
-
- by = extension == "png" ? psnr_equality(0.5) : isequal
-
- @test_reference "references/R.$extension" figR by = by
- @test_reference "references/R1.$extension" figR1 by = by
- @test_reference "references/R2.$extension" figR2 by = by
- @test_reference "references/tn.$extension" fig_tn by = by
- @test_reference "references/T.$extension" figT by = by
-
- R = @visualize fig_grid ELn0 * ψn1n2 * hn1 * hn2 * ERn2
- R1 = @visualize! fig_grid[1, 2] ELn0 * ψn1n2 * hn1
- R2 = @visualize! fig_grid[2, 1] R1 * hn2 * ERn2 vertex_labels = ["T1", "T2", "T3"]
- @visualize_noeval! fig_grid[2, 2] tn
-
- # XXX: Broken, passes locally but fails on CI with:
- # Warning: test fails because PSNR -0.6602330207824707 < 1
- #@test_reference "references/grid.$extension" fig_grid by=by
-
- @test_throws DimensionMismatch @visualize fig R1 * hn2 * ERn2 vertex_labels = ["T1", "T2"]
-end
diff --git a/ITensorGLMakie/test/test_examples.jl b/ITensorGLMakie/test/test_examples.jl
deleted file mode 100644
index 8335f08bdc..0000000000
--- a/ITensorGLMakie/test/test_examples.jl
+++ /dev/null
@@ -1,14 +0,0 @@
-using Test
-
-@testset "Examples" begin
- examples_path = joinpath(@__DIR__, "..", "examples")
- example_files = filter(starts_and_ends_with("ex_", ".jl"), readdir(examples_path))
- for file in example_files
- file_path = joinpath(examples_path, file)
- println("Testing file $(file_path)")
- empty!(ARGS)
- push!(ARGS, "false")
- @test !isnothing(include(file_path))
- empty!(ARGS)
- end
-end
diff --git a/ITensorGPU/.JuliaFormatter.toml b/ITensorGPU/.JuliaFormatter.toml
deleted file mode 100644
index 08f664cdb9..0000000000
--- a/ITensorGPU/.JuliaFormatter.toml
+++ /dev/null
@@ -1,2 +0,0 @@
-style = "blue"
-indent = 2
diff --git a/ITensorGPU/.gitignore b/ITensorGPU/.gitignore
deleted file mode 100644
index aa25970241..0000000000
--- a/ITensorGPU/.gitignore
+++ /dev/null
@@ -1,7 +0,0 @@
-*.jl.*.cov
-.*.swp
-*.jl.cov
-*.jl.mem
-.DS_Store
-/Manifest.toml
-/dev/
diff --git a/ITensorGPU/.gitlab-ci.yml b/ITensorGPU/.gitlab-ci.yml
deleted file mode 100644
index 6886680085..0000000000
--- a/ITensorGPU/.gitlab-ci.yml
+++ /dev/null
@@ -1,50 +0,0 @@
-include:
- - 'https://raw.githubusercontent.com/JuliaGPU/gitlab-ci/master/templates/v6.yml'
-
-test:1.4:
- extends:
- - .julia:1.4
- - .test
- tags:
- - nvidia
- - cuda_11.0
- script:
- - julia --project -e 'using Pkg; Pkg.develop(["ITensors", "CUDA", "GPUArrays", "GPUCompiler"]);'
- - julia --project -e 'using Pkg; Pkg.test("ITensorGPU"; coverage=true)'
- variables:
- JULIA_CUDA_VERSION: '11.0'
- JULIA_CUDA_USE_BINARYBUILDER: 'true'
-
-test:1.5:
- extends:
- - .julia:1.5
- - .test
- tags:
- - nvidia
- - cuda_11.0
- script:
- - julia --project -e 'using Pkg; Pkg.develop(["ITensors", "CUDA", "GPUArrays", "GPUCompiler"]);'
- - julia --project -e 'using Pkg; Pkg.test("ITensorGPU"; coverage=true)'
- variables:
- JULIA_CUDA_VERSION: '11.0'
- JULIA_CUDA_USE_BINARYBUILDER: 'true'
-
-test:nightly:
- extends:
- - .julia:nightly
- - .test
- tags:
- - nvidia
- - cuda_11.0
- script:
- - julia --project -e 'using Pkg; Pkg.develop(["ITensors", "CUDA", "GPUArrays", "GPUCompiler"]);'
- - julia --project -e 'using Pkg; Pkg.test("ITensorGPU")'
- allow_failure: true
- variables:
- JULIA_CUDA_VERSION: '11.0'
- JULIA_CUDA_USE_BINARYBUILDER: 'true'
-
-coverage:
- extends:
- - .julia:nightly
- - .coverage
diff --git a/ITensorGPU/LICENSE b/ITensorGPU/LICENSE
deleted file mode 100644
index 1385d3954e..0000000000
--- a/ITensorGPU/LICENSE
+++ /dev/null
@@ -1,19 +0,0 @@
-Copyright (c) 2019 Katharine Hyatt
-
-Permission is hereby granted, free of charge, to any person obtaining a copy
-of this software and associated documentation files (the "Software"), to deal
-in the Software without restriction, including without limitation the rights
-to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
-copies of the Software, and to permit persons to whom the Software is
-furnished to do so, subject to the following conditions:
-
-The above copyright notice and this permission notice shall be included in all
-copies or substantial portions of the Software.
-
-THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
-IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
-FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
-AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
-LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
-OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
-SOFTWARE.
diff --git a/ITensorGPU/NEWS.md b/ITensorGPU/NEWS.md
deleted file mode 100644
index 3c6eddd974..0000000000
--- a/ITensorGPU/NEWS.md
+++ /dev/null
@@ -1,71 +0,0 @@
-This file is a (mostly) comprehensive list of changes made in each release of ITensorGPU.jl. For a completely comprehensive but more verbose list, see the [commit history on GitHub](https://github.com/ITensor/ITensors.jl/commits/main/ITensorGPU).
-
-While we are in v0.x of the package, we will follow the convention that updating from v0.x.y to v0.x.(y+1) (for example v0.1.15 to v0.1.16) should not break your code, unless you are using internal/undocumented features of the code, while updating from `v0.x.y` to `v0.(x+1).y` might break your code, though we will try to add deprecation warnings when possible, such as for simple cases where the name of a function changes.
-
-Note that as of Julia v1.5, in order to see deprecation warnings you will need to start Julia with `julia --depwarn=yes` (previously they were on by default). Please run your code like this before upgrading between minor versions of the code (for example from v0.1.41 to v0.2.0).
-
-After we release v1 of the package, we will start following [semantic versioning](https://semver.org).
-
-ITensorGPU v0.0.7 Release Notes
-===============================
-
-Bugs:
-
-Enhancements:
-
-- Bump version compat for dependencies.
-
-ITensorGPU v0.0.6 Release Notes
-===============================
-
-Bugs:
-
-Enhancements:
-
-ITensorGPU v0.0.5 Release Notes
-===============================
-
-Bugs:
-
-Enhancements:
-
-- Clean up `outer` and add GEMM routing for CUDA (#887)
-
-ITensorGPU v0.0.4 Release Notes
-===============================
-
-Bugs:
-
-Enhancements:
-
-- `cu([[A, B], [C]])` -> `[[cu(A), cu(B)], [cu(C)]]` and same for cpu (#898).
-- Allow cutruncate to work for Float32s (#897).
-
-ITensorGPU v0.0.3 Release Notes
-===============================
-
-Bugs:
-
-- Fix bugs in complex SVD on GPU (with and without truncations) (#871)
-
-Enhancements:
-
-- Remove some unnecessary contract code (#860)
-
-ITensorGPU v0.0.2 Release Notes
-===============================
-
-Bugs:
-
-- Remove unnecessary `CuDense` type equality definition (#823)
-
-Enhancements:
-
-ITensorGPU v0.0.1 Release Notes
-===============================
-
-Bugs:
-
-Enhancements:
-
-- Register ITensorGPU package, code in ITensors.jl repository
diff --git a/ITensorGPU/Project.toml b/ITensorGPU/Project.toml
deleted file mode 100644
index 762e51ee4d..0000000000
--- a/ITensorGPU/Project.toml
+++ /dev/null
@@ -1,33 +0,0 @@
-name = "ITensorGPU"
-uuid = "d89171c1-af8f-46b3-badf-d2a472317c15"
-authors = ["Katharine Hyatt", "Matthew Fishman "]
-version = "0.1.7"
-
-[deps]
-Adapt = "79e6a3ab-5dfb-504d-930d-738a2a938a0e"
-CUDA = "052768ef-5323-5732-b1bb-66c8b64840ba"
-Combinatorics = "861a8166-3701-5b0c-9a16-15d98fcdc6aa"
-Functors = "d9f16b24-f501-4c13-a1f2-28368ffc5196"
-ITensors = "9136182c-28ba-11e9-034c-db9fb085ebd5"
-LinearAlgebra = "37e2e46d-f89d-539d-b4ee-838fcccc9c8e"
-NDTensors = "23ae76d9-e61a-49c4-8f12-3f1a16adf9cf"
-Random = "9a3f8284-a2c9-5f02-9a11-845980a1fd5c"
-SimpleTraits = "699a6c99-e7fa-54fc-8d76-47d257e15c1d"
-StaticArrays = "90137ffa-7385-5640-81b9-e52037218182"
-Strided = "5e0ebb24-38b0-5f93-81fe-25c709ecae67"
-TimerOutputs = "a759f4b9-e2f1-59dc-863e-4aeb61b1ea8f"
-cuTENSOR = "011b41b2-24ef-40a8-b3eb-fa098493e9e1"
-
-[compat]
-Adapt = "3.5, 4"
-CUDA = "4.0"
-Combinatorics = "1.0.2"
-Functors = "0.2, 0.3, 0.4"
-ITensors = "= 0.3.37"
-NDTensors = "0.1.50"
-SimpleTraits = "0.9.4"
-StaticArrays = "1.2.13"
-Strided = "1.1.2, 2"
-TimerOutputs = "0.5.13"
-cuTENSOR = "1.1.0"
-julia = "1.6 - 1.9"
diff --git a/ITensorGPU/README.md b/ITensorGPU/README.md
deleted file mode 100644
index 40238bef34..0000000000
--- a/ITensorGPU/README.md
+++ /dev/null
@@ -1,50 +0,0 @@
-# ITensorGPU: Intelligent Tensors with GPU acceleration
-
-[![codecov](https://codecov.io/gh/ITensor/ITensorGPU.jl/branch/main/graph/badge.svg)](https://codecov.io/gh/ITensor/ITensorGPU.jl)
-
-[![gitlab-ci](https://gitlab.com/JuliaGPU/ITensorGPU-jl/badges/master/pipeline.svg)](https://gitlab.com/JuliaGPU/ITensorGPU-jl/commits/master)
-
-This package extends the functionality of [ITensors.jl](https://github.com/ITensor/ITensors.jl) to make use of CUDA-enabled GPUs to accelerate tensor contractions and factorizations. It sits on top of the wonderful [CUDA.jl](https://github.com/JuliaGPU/CUDA.jl) package and uses NVIDIA's [cuTENSOR](https://developer.nvidia.com/cutensor) library for high-performance tensor operations.
-
-## Installing ITensorGPU.jl
-
-Dependencies:
- - [Julia 1.3 or later](https://julialang.org/downloads/)
- - [CUDA 10.1 or later](https://developer.nvidia.com/cuda-downloads) -- Currently only NVIDIA GPUs are supported. NVIDIA drivers are required so that Julia can make use of the NVIDIA GPU on your system.
- - [cuTENSOR v1.0.0 or later](https://developer.nvidia.com/cutensor) -- A specialized library for performing permutation-free tensor contractions on the GPU. `libcutensor.so` needs to be in your `LD_LIBRARY_PATH` so that `CUDA.jl` will be able to find it.
- - [ITensors.jl](https://itensor.github.io/ITensors.jl/stable/#Installation-1)
-
-To properly install CUDA with Julia, it may be helpful to first follow the [CUDA.jl installation instructions](https://juliagpu.github.io/CUDA.jl/stable/installation/overview/) to verify that CUDA.jl is installed properly and is able to use `cuTENSOR`. You can run the commands:
-```julia
-julia> using CUDA.CUTENSOR
-
-julia> CUTENSOR.has_cutensor()
-true
-
-julia> CUTENSOR.version()
-v"1.2.1"
-```
-to check that `CUDA.jl` can see the version of `cuTENSOR` you have installed.
-
-Once you have all of the dependencies installed, you can install `ITensorGPU.jl` with the following command:
-```
-julia> ]
-
-pkg> add ITensorGPU
-```
-
-To check that the installation worked, you can run the package tests using:
-```julia
-julia> ]
-
-pkg> test ITensorGPU
-```
-
-## Examples
-
-Take a look at the `examples/` directory for examples of running ITensor calculations on the GPU.
-
-For an application of `ITensorGPU.jl` to more sophisticated tensor network calculations, take a look at [PEPS.jl](https://github.com/ITensor/PEPS.jl).
-
-For some background on the development and design of this package, you can take a look at [this blog post](https://kshyatt.github.io/post/itensorsgpu/) by Katie Hyatt, original author of the `ITensorGPU.jl` package.
-
diff --git a/ITensorGPU/examples/dmrg.jl b/ITensorGPU/examples/dmrg.jl
deleted file mode 100644
index e3448ccace..0000000000
--- a/ITensorGPU/examples/dmrg.jl
+++ /dev/null
@@ -1,24 +0,0 @@
-using ITensors
-using ITensorGPU
-
-# Set to identity to run on CPU
-device = cu
-
-N = 50
-sites = siteinds("S=1", N)
-
-opsum = OpSum()
-for j in 1:(N - 1)
- opsum .+= 0.5, "S+", j, "S-", j + 1
- opsum .+= 0.5, "S-", j, "S+", j + 1
- opsum .+= "Sz", j, "Sz", j + 1
-end
-H = device(MPO(opsum, sites))
-
-ψ₀ = device(randomMPS(sites))
-
-dmrg_kwargs = (;
- nsweeps=6, maxdim=[10, 20, 40, 100], mindim=[1, 10], cutoff=1e-11, noise=1e-10
-)
-energy, ψ = @time dmrg(H, ψ₀; dmrg_kwargs...)
-@show energy
diff --git a/ITensorGPU/examples/gate_evolution.jl b/ITensorGPU/examples/gate_evolution.jl
deleted file mode 100644
index d570039915..0000000000
--- a/ITensorGPU/examples/gate_evolution.jl
+++ /dev/null
@@ -1,46 +0,0 @@
-using ITensors
-using ITensorGPU
-
-import ITensors: space, state, op
-
-space(::SiteType"Qubit") = 2
-state(::SiteType"Qubit", ::StateName"0") = 1
-state(::SiteType"Qubit", ::StateName"1") = 2
-
-op_matrix(s::String) = op_matrix(OpName(s))
-
-op_matrix(::OpName"Id") = [
- 1 0
- 0 1
-]
-
-op_matrix(::OpName"I") = op_matrix("Id")
-
-op_matrix(::OpName"X") = [
- 0 1
- 1 0
-]
-
-function op_matrix(on::OpName, s::Index...; kwargs...)
- rs = reverse(s)
- return itensor(op_matrix(on; kwargs...), prime.(rs)..., dag.(rs)...)
-end
-
-op_matrix(gn::String, s::Index...; kwargs...) = op_matrix(OpName(gn), s...; kwargs...)
-
-function op_matrix(gn::String, s::Vector{<:Index}, ns::Int...; kwargs...)
- return op_matrix(OpName(gn), s[[ns...]]...; kwargs...)
-end
-
-op(gn::OpName, ::SiteType"Qubit", s::Index...; kwargs...) = op_matrix(gn, s...; kwargs...)
-
-N = 10
-s = siteinds("Qubit", N)
-X = cu.(ops(s, [("X", n) for n in 1:N]))
-
-initstate = fill("0", N)
-
-ψ0 = productCuMPS(s, initstate)
-
-gates = [X[n] for n in 1:2:N]
-ψ = apply(gates, ψ0)
diff --git a/ITensorGPU/src/ITensorGPU.jl b/ITensorGPU/src/ITensorGPU.jl
deleted file mode 100644
index 93be4c9342..0000000000
--- a/ITensorGPU/src/ITensorGPU.jl
+++ /dev/null
@@ -1,114 +0,0 @@
-module ITensorGPU
-using NDTensors
-
-using Adapt
-using CUDA
-using CUDA.CUBLAS
-using CUDA.CUSOLVER
-using Functors
-using ITensors
-using LinearAlgebra
-using Random
-using SimpleTraits
-using StaticArrays
-using Strided
-using TimerOutputs
-using cuTENSOR
-
-using NDTensors: setdata, setstorage, cpu, IsWrappedArray, parenttype
-
-import Adapt: adapt_structure
-import Base: *, permutedims!
-import CUDA: CuArray, CuMatrix, CuVector, cu
-import CUDA.Mem: pin
-import ITensors:
- randn!,
- compute_contraction_labels,
- eigen,
- tensor,
- scale!,
- unioninds,
- array,
- matrix,
- vector,
- polar,
- tensors,
- truncate!,
- leftlim,
- rightlim,
- permute,
- BroadcastStyle,
- Indices
-import NDTensors:
- Atrans,
- Btrans,
- CombinerTensor,
- ContractionProperties,
- Combiner,
- Ctrans,
- Diag,
- DiagTensor,
- Dense,
- DenseTensor,
- NonuniformDiag,
- NonuniformDiagTensor,
- Tensor,
- UniformDiag,
- UniformDiagTensor,
- _contract!!,
- _contract!,
- _contract_scalar!,
- _contract_scalar_noperm!,
- can_contract,
- compute_contraction_properties!,
- contract!!,
- contract!,
- contract,
- contraction_output,
- contraction_output_type,
- data,
- getperm,
- ind,
- is_trivial_permutation,
- outer!,
- outer!!,
- permutedims!!,
- set_eltype,
- set_ndims,
- similartype,
- zero_contraction_output
-import cuTENSOR: cutensorContractionPlan_t, cutensorAlgo_t
-
-#const ContractionPlans = Dict{String, Tuple{cutensorAlgo_t, cutensorContractionPlan_t}}()
-const ContractionPlans = Dict{String,cutensorAlgo_t}()
-
-include("cuarray/set_types.jl")
-include("traits.jl")
-include("adapt.jl")
-include("tensor/cudense.jl")
-include("tensor/dense.jl")
-include("tensor/culinearalgebra.jl")
-include("tensor/cutruncate.jl")
-include("tensor/cucombiner.jl")
-include("tensor/cudiag.jl")
-include("cuitensor.jl")
-include("mps/cumps.jl")
-
-export cu,
- cpu, cuITensor, randomCuITensor, cuMPS, randomCuMPS, productCuMPS, randomCuMPO, cuMPO
-
-## TODO: Is this needed?
-## const devs = Ref{Vector{CUDAdrv.CuDevice}}()
-## const dev_rows = Ref{Int}(0)
-## const dev_cols = Ref{Int}(0)
-## function __init__()
-## voltas = filter(dev->occursin("V100", CUDAdrv.name(dev)), collect(CUDAdrv.devices()))
-## pascals = filter(dev->occursin("P100", CUDAdrv.name(dev)), collect(CUDAdrv.devices()))
-## devs[] = voltas[1:1]
-## #devs[] = pascals[1:2]
-## CUBLASMG.cublasMgDeviceSelect(CUBLASMG.mg_handle(), length(devs[]), devs[])
-## dev_rows[] = 1
-## dev_cols[] = 1
-## end
-
-end #module
diff --git a/ITensorGPU/src/adapt.jl b/ITensorGPU/src/adapt.jl
deleted file mode 100644
index fe40c14435..0000000000
--- a/ITensorGPU/src/adapt.jl
+++ /dev/null
@@ -1,33 +0,0 @@
-#
-# Used to adapt `EmptyStorage` types
-#
-
-function NDTensors.cu(eltype::Type{<:Number}, x)
- return fmap(x -> adapt(CuVector{eltype,default_buffertype()}, x), x)
-end
-NDTensors.cu(x) = fmap(x -> adapt(CuArray, x), x)
-
-function NDTensors.set_eltype_if_unspecified(
- arraytype::Type{CuVector{T}}, eltype::Type
-) where {T}
- return arraytype
-end
-function NDTensors.set_eltype_if_unspecified(arraytype::Type{CuVector}, eltype::Type)
- return CuVector{eltype}
-end
-
-# Overload `CUDA.cu` for convenience
-const ITensorType = Union{
- TensorStorage,Tensor,ITensor,Array{ITensor},Array{<:Array{ITensor}},MPS,MPO
-}
-CUDA.cu(x::ITensorType) = NDTensors.cu(x)
-CUDA.cu(eltype::Type{<:Number}, x::ITensorType) = NDTensors.cu(eltype, x)
-
-function NDTensors.adapt_storagetype(
- to::Type{<:CUDA.CuArray}, x::Type{<:NDTensors.EmptyStorage}
-)
- store = NDTensors.storagetype(x)
- return NDTensors.emptytype(
- NDTensors.set_datatype(store, CuVector{eltype(store),default_buffertype()})
- )
-end
diff --git a/ITensorGPU/src/cuarray/set_types.jl b/ITensorGPU/src/cuarray/set_types.jl
deleted file mode 100644
index 21beda4ab8..0000000000
--- a/ITensorGPU/src/cuarray/set_types.jl
+++ /dev/null
@@ -1,17 +0,0 @@
-buffertype(datatype::Type{<:CuArray{<:Any,<:Any,B}}) where {B} = B
-function buffertype(datatype::Type{<:CuArray})
-  return throw(
-    ArgumentError(
-      "CuArray definitions require a CUDA.Mem buffer; try $(datatype{default_buffertype()})"
-    ),
-  )
-end
-
-default_buffertype() = CUDA.Mem.DeviceBuffer
-
-function set_eltype(arraytype::Type{<:CuArray}, eltype::Type)
- return CuArray{eltype,ndims(arraytype),buffertype(arraytype)}
-end
-
-function set_ndims(arraytype::Type{<:CuArray}, ndims)
- return CuArray{eltype(arraytype),ndims,buffertype(arraytype)}
-end
diff --git a/ITensorGPU/src/cuitensor.jl b/ITensorGPU/src/cuitensor.jl
deleted file mode 100644
index 667aa28eb2..0000000000
--- a/ITensorGPU/src/cuitensor.jl
+++ /dev/null
@@ -1,127 +0,0 @@
-import ITensors.NDTensors: NeverAlias, AliasStyle, AllowAlias
-import ITensors: ITensor
-import CUDA: CuArray
-
-function cuITensor(eltype::Type{<:Number}, inds::IndexSet)
- return itensor(
- NDTensors.default_storagetype(CuVector{eltype,default_buffertype()})(dim(inds)), inds
- )
-end
-cuITensor(::Type{T}, inds::Index...) where {T<:Number} = cuITensor(T, IndexSet(inds...))
-
-cuITensor(is::IndexSet) = cuITensor(Float64, is)
-cuITensor(inds::Index...) = cuITensor(IndexSet(inds...))
-
-cuITensor() = ITensor()
-function cuITensor(x::S, inds::IndexSet{N}) where {S<:Number,N}
- d = NDTensors.Dense{S,CuVector{S,default_buffertype()}}(
- fill!(CuVector{S,default_buffertype()}(undef, dim(inds)), x)
- )
- return ITensor(d, inds)
-end
-cuITensor(x::S, inds::Index...) where {S<:Number} = cuITensor(x, IndexSet(inds...))
-
-function ITensor(
- as::AliasStyle,
- eltype::Type{<:Number},
- A::CuArray{<:Number},
- inds::Indices{Index{Int}};
- kwargs...,
-)
- length(A) ≠ dim(inds) && throw(
- DimensionMismatch(
- "In ITensor(::CuArray, inds), length of AbstractArray ($(length(A))) must match total dimension of IndexSet ($(dim(inds)))",
- ),
- )
- data = CuArray{eltype}(as, A)
- return itensor(Dense(data), inds)
-end
-
-# Fix ambiguity error
-function ITensor(
- as::NDTensors.AliasStyle, eltype::Type{<:Number}, A::CuArray{<:Number}, inds::Tuple{}
-)
- length(A) ≠ dim(inds) && throw(
- DimensionMismatch(
- "In ITensor(::CuArray, inds), length of AbstractArray ($(length(A))) must match total dimension of IndexSet ($(dim(inds)))",
- ),
- )
- data = CuArray{eltype}(as, A)
- return itensor(Dense(data), inds)
-end
-
-# Helper functions for different view behaviors
-CuArray{ElT,N}(::NeverAlias, A::AbstractArray) where {ElT,N} = CuArray{ElT,N}(A)
-function CuArray{ElT,N}(::AllowAlias, A::AbstractArray) where {ElT,N}
- return convert(CuArray{ElT,N}, A)
-end
-function CuArray{ElT}(as::AliasStyle, A::AbstractArray{ElTA,N}) where {ElT,N,ElTA}
- return CuArray{ElT,N}(as, A)
-end
-
-# TODO: Change to:
-# (Array{ElT, N} where {ElT})([...]) = [...]
-# once support for `VERSION < v"1.6"` is dropped.
-# Prior to Julia v1.6, `where` syntax couldn't be used in a function name
-function CuArray{<:Any,N}(as::AliasStyle, A::AbstractArray{ElTA,N}) where {N,ElTA}
- return CuArray{ElTA,N}(as, A)
-end
-
-cuITensor(data::Array, inds...) = cu(ITensor(data, inds...))
-
-cuITensor(data::CuArray, inds...) = ITensor(data, inds...)
-
-cuITensor(A::ITensor) = cu(A)
-
-function randomCuITensor(::Type{S}, inds::Indices) where {S<:Real}
- T = cuITensor(S, inds)
- randn!(T)
- return T
-end
-function randomCuITensor(::Type{S}, inds::Indices) where {S<:Complex}
- Tr = cuITensor(real(S), inds)
- Ti = cuITensor(real(S), inds)
- randn!(Tr)
- randn!(Ti)
- return complex(Tr) + im * Ti
-end
-function randomCuITensor(::Type{S}, inds::Index...) where {S<:Number}
- return randomCuITensor(S, IndexSet(inds...))
-end
-randomCuITensor(inds::IndexSet) = randomCuITensor(Float64, inds)
-randomCuITensor(inds::Index...) = randomCuITensor(Float64, IndexSet(inds...))
-
-CuArray(T::ITensor) = CuArray(tensor(T))
-
-function CuArray{ElT,N}(T::ITensor, is::Vararg{Index,N}) where {ElT,N}
- ndims(T) != N && throw(
- DimensionMismatch(
- "cannot convert an $(ndims(T)) dimensional ITensor to an $N-dimensional CuArray."
- ),
- )
- TT = tensor(permute(T, is...; allow_alias=true))
- return CuArray{ElT,N}(TT)::CuArray{ElT,N}
-end
-
-function CuArray{ElT}(T::ITensor, is::Vararg{Index,N}) where {ElT,N}
- return CuArray{ElT,N}(T, is...)
-end
-
-function CuArray(T::ITensor, is::Vararg{Index,N}) where {N}
- return CuArray{eltype(T),N}(T, is...)::CuArray{<:Number,N}
-end
-
-CUDA.CuMatrix(A::ITensor) = CuArray(A)
-
-function CuVector(A::ITensor)
- if ndims(A) != 1
-    throw(DimensionMismatch("CuVector() expected a 1-index ITensor"))
- end
- return CuArray(A)
-end
-
-function CuMatrix(T::ITensor, i1::Index, i2::Index)
- ndims(T) != 2 &&
- throw(DimensionMismatch("ITensor must be order 2 to convert to a Matrix"))
- return CuArray(T, i1, i2)
-end
diff --git a/ITensorGPU/src/mps/cumps.jl b/ITensorGPU/src/mps/cumps.jl
deleted file mode 100644
index f59099f8d8..0000000000
--- a/ITensorGPU/src/mps/cumps.jl
+++ /dev/null
@@ -1,13 +0,0 @@
-# cu(ψ::Union{MPS,MPO}) = map(cu, ψ)
-# cpu(ψ::Union{MPS,MPO}) = map(cpu, ψ)
-
-cuMPS(ψ::MPS) = cu(ψ)
-cuMPS(args...; kwargs...) = cu(MPS(args...; kwargs...))
-randomCuMPS(args...; kwargs...) = cu(randomMPS(args...; kwargs...))
-
-# For backwards compatibility
-productCuMPS(args...; kwargs...) = cuMPS(args...; kwargs...)
-
-cuMPO(M::MPO) = cu(M)
-cuMPO(args...; kwargs...) = cu(MPO(args...; kwargs...))
-randomCuMPO(args...; kwargs...) = cu(randomMPO(args...; kwargs...))
diff --git a/ITensorGPU/src/tensor/cucombiner.jl b/ITensorGPU/src/tensor/cucombiner.jl
deleted file mode 100644
index d01109a892..0000000000
--- a/ITensorGPU/src/tensor/cucombiner.jl
+++ /dev/null
@@ -1 +0,0 @@
-Base.promote_rule(::Type{<:Combiner}, StorageT::Type{<:CuDense}) = StorageT
diff --git a/ITensorGPU/src/tensor/cudense.jl b/ITensorGPU/src/tensor/cudense.jl
deleted file mode 100644
index 2db9055d9c..0000000000
--- a/ITensorGPU/src/tensor/cudense.jl
+++ /dev/null
@@ -1,573 +0,0 @@
-using LinearAlgebra: BlasFloat
-
-const CuDense{ElT,VecT} = Dense{ElT,VecT} where {VecT<:CuVector}
-const CuDenseTensor{ElT,N,StoreT,IndsT} = Tensor{ElT,N,StoreT,IndsT} where {StoreT<:CuDense}
-
-# function Dense{T,SA}(x::Dense{T,SB}) where {T<:Number,SA<:CuArray,SB<:Array}
-# return Dense{T,SA}(CuArray(x))
-# end
-# function Dense{T,SA}(x::Dense{T,SB}) where {T<:Number,SA<:Array,SB<:CuArray}
-# return Dense{T,SA}(collect(x.data))
-# end
-# Dense{T,S}(size::Integer) where {T,S<:CuArray{<:T}} = Dense{T,S}(CUDA.zeros(T, size))
-# function Dense{T,S}(x::T, size::Integer) where {T,S<:CuArray{<:T}}
-# arr = CuArray{T}(undef, size)
-# fill!(arr, x)
-# return Dense{T,S}(arr)
-# end
-
-function Base.complex(::Type{Dense{ElT,VT}}) where {ElT,VT<:CuArray}
- return Dense{complex(ElT),CuVector{complex(ElT)}}
-end
-
-CuArray(x::CuDense{ElT}) where {ElT} = CuVector{ElT}(data(x))
-function CuArray{ElT,N}(x::CuDenseTensor{ElT,N}) where {ElT,N}
- return CuArray{ElT,N}(reshape(data(store(x)), dims(inds(x))...))
-end
-CuArray(x::CuDenseTensor{ElT,N}) where {ElT,N} = CuArray{ElT,N}(x)
-
-*(D::Dense{T,AT}, x::S) where {T,AT<:CuArray,S<:Number} = Dense(x .* data(D))
-
-Base.getindex(D::CuDense{<:Number}) = collect(data(D))[]
-Base.getindex(D::CuDenseTensor{<:Number,0}) = store(D)[]
-LinearAlgebra.norm(T::CuDenseTensor) = norm(data(store(T)))
-
-function Base.copyto!(R::CuDenseTensor{<:Number,N}, T::CuDenseTensor{<:Number,N}) where {N}
- RA = array(R)
- TA = array(T)
- RA .= TA
- return R
-end
-
-# This is for type promotion for Scalar*Dense
-function Base.promote_rule(
- ::Type{<:Dense{ElT1,CuVector{ElT1}}}, ::Type{ElT2}
-) where {ElT1,ElT2<:Number}
- ElR = promote_type(ElT1, ElT2)
- VecR = CuVector{ElR}
- return Dense{ElR,VecR}
-end
-
-function permutedims!!(
- B::Tensor{ElT,N,StoreT,IndsB},
- A::Tensor{ElT,N,StoreT,IndsA},
- perm::NTuple{N,Int},
- f::Function=(r, t) -> permute!(r, t),
-) where {N,ElT,IndsB,IndsA,StoreT<:CuDense{ElT}}
- Ais = inds(A)
- Bis = ITensors.NDTensors.permute(inds(A), perm)
- B = f(B, A)
- return B
-end
-
-import ITensors.NDTensors: GemmBackend, auto_select_backend, _gemm!
-function backend_cutensor()
- return gemm_backend[] = :CUTENSOR
-end
-function backend_cublas()
- return gemm_backend[] = :CUBLAS
-end
-
-@inline function auto_select_backend(
- ::Type{<:CuArray{<:BlasFloat}},
- ::Type{<:CuArray{<:BlasFloat}},
- ::Type{<:CuArray{<:BlasFloat}},
-)
- return GemmBackend(:CUBLAS)
-end
-
-@inline function auto_select_backend(
- ::Type{<:CuArray{<:BlasFloat}}, ::Type{<:CuArray{<:BlasFloat}}, ::Type{<:AbstractVecOrMat}
-)
- return GemmBackend(:GenericCUDA)
-end
-
-# CUBLAS matmul
-function _gemm!(
- ::GemmBackend{:CUBLAS},
- tA,
- tB,
- alpha,
- A::AbstractVecOrMat,
- B::AbstractVecOrMat,
- beta,
- C::AbstractVecOrMat,
-)
- return CUBLAS.gemm!(tA, tB, alpha, A, B, beta, C)
-end
-
-# CUDA generic matmul
-function _gemm!(
- ::GemmBackend{:GenericCUDA},
- tA,
- tB,
- alpha,
- A::AbstractVecOrMat,
- B::AbstractVecOrMat,
- beta,
- C::CuDenseTensor,
-)
- C_dat = reshape(data(store(C)), size(C))
- A_ = tA == 'T' ? transpose(A) : A
- B_ = tB == 'T' ? transpose(B) : B
- C_dat = mul!(C_dat, A_, B_, alpha, beta)
- copyto!(data(store(C)), C_dat)
- return C
-end
-
-function _contract_scalar!(
- R::CuDenseTensor{ElR,NR},
- labelsR,
- T₁::CuDenseTensor,
- labelsT₁,
- T₂::CuDenseTensor,
- labelsT₂,
- α=one(ElR),
- β=zero(ElR),
-) where {ElR,NR}
- if nnz(T₁) == nnz(T₂) == 1
- new_R = Tensor(Dense(data(store(T₁)) .* data(store(T₂))), inds(R))
- copyto!(store(R), store(new_R))
- elseif nnz(T₁) == 1
- props = ContractionProperties(labelsT₁, labelsT₂, labelsR)
- compute_contraction_properties!(props, T₁, T₂, R)
- R = _contract!(R, T₁, T₂, props, α, β)
- elseif nnz(T₂) == 1
- props = ContractionProperties(labelsT₁, labelsT₂, labelsR)
- compute_contraction_properties!(props, T₁, T₂, R)
- R = _contract!(R, T₁, T₂, props, α, β)
- else
- error("In _contract_scalar!, one tensor must be a scalar")
- end
- return R
-end
-
-function _gemm_contract!(
- CT::DenseTensor{El,NC},
- AT::DenseTensor{El,NA},
- BT::DenseTensor{El,NB},
- props::ContractionProperties,
- α::Number=one(El),
- β::Number=zero(El),
-) where {El,NC,NA,NB}
- # TODO: directly use Tensor instead of Array
- C = array(CT)
- A = array(AT)
- B = array(BT)
-
- tA = 'N'
- if props.permuteA
- pA = NTuple{NA,Int}(props.PA)
- Ap = permutedims(A, pA)
- AM = reshape(Ap, props.dmid, props.dleft)
- tA = 'T'
- else
- #A doesn't have to be permuted
- if Atrans(props)
- AM = reshape(A, props.dmid, props.dleft)
- tA = 'T'
- else
- AM = reshape(A, props.dleft, props.dmid)
- end
- end
-
- tB = 'N'
- if props.permuteB
- pB = NTuple{NB,Int}(props.PB)
- Bp = permutedims(B, pB)
- BM = reshape(Bp, props.dmid, props.dright)
- else
- if Btrans(props)
- BM = reshape(B, props.dright, props.dmid)
- tB = 'T'
- else
- BM = reshape(B, props.dmid, props.dright)
- end
- end
-
- #TODO: this logic may be wrong
- if props.permuteC
- #Need to copy here since we will be permuting
- #into C later
- CM = reshape(copy(C), props.dleft, props.dright)
- else
- if Ctrans(props)
- CM = reshape(C, props.dleft, props.dright)
- (AM, BM) = (BM, AM)
- if tA == tB
- tA = tB = (tA == 'T' ? 'N' : 'T')
- end
- else
- CM = reshape(C, props.dleft, props.dright)
- end
- end
-
- CM = CUBLAS.gemm!(tA, tB, El(α), AM, BM, El(β), CM)
-
- if props.permuteC
- pC = NTuple{NC,Int}(props.PC)
- Cr = reshape(CM, props.newCrange...)
- @strided C .= permutedims(Cr, pC)
- end
- return C
-end
-
-function _contract!(
- CT::CuDenseTensor{El,NC},
- AT::CuDenseTensor{El,NA},
- BT::CuDenseTensor{El,NB},
- props::ContractionProperties,
- α::Number=one(El),
- β::Number=zero(El),
-) where {El,NC,NA,NB}
- if ndims(CT) > 12 || ndims(BT) > 12 || ndims(AT) > 12
- return _gemm_contract!(CT, AT, BT, props, α, β)
- end
- Ainds = inds(AT)
- Adims = dims(Ainds)
- Binds = inds(BT)
- Bdims = dims(Binds)
- Cinds = inds(CT)
- Cdims = dims(Cinds)
- Adata = reshape(data(store(AT)), Adims...)
- Bdata = reshape(data(store(BT)), Bdims...)
- Cdata = reshape(data(store(CT)), Cdims...)
- contracted = commoninds(Ainds, Binds)
- A_only = uniqueinds(Ainds, Binds)
- B_only = uniqueinds(Binds, Ainds)
- ind_dict = Vector{Index}()
- for (idx, i) in enumerate(contracted)
- push!(ind_dict, i)
- end
- if length(A_only) > 0
- for (idx, i) in enumerate(A_only)
- push!(ind_dict, i)
- end
- end
- if length(B_only) > 0
- for (idx, i) in enumerate(B_only)
- push!(ind_dict, i)
- end
- end
- ctainds = zeros(Int, length(Ainds))
- ctbinds = zeros(Int, length(Binds))
- ctcinds = zeros(Int, length(Cinds))
- for (ii, ia) in enumerate(Ainds)
- ctainds[ii] = findfirst(x -> x == ia, ind_dict)
- end
- for (ii, ib) in enumerate(Binds)
- ctbinds[ii] = findfirst(x -> x == ib, ind_dict)
- end
- for (ii, ic) in enumerate(Cinds)
- ctcinds[ii] = findfirst(x -> x == ic, ind_dict)
- end
- id_op = cuTENSOR.CUTENSOR_OP_IDENTITY
- dict_key = ""
- for cc in zip(ctcinds, Cdims)
- dict_key *= string(cc[1]) * "," * string(cc[2]) * ","
- end
- for aa in zip(ctainds, Adims)
- dict_key *= string(aa[1]) * "," * string(aa[2]) * ","
- end
- for bb in zip(ctbinds, Bdims)
- dict_key *= string(bb[1]) * "," * string(bb[2]) * ","
- end
- if haskey(ENV, "CUTENSOR_AUTOTUNE") && tryparse(Int, ENV["CUTENSOR_AUTOTUNE"]) == 1
- if haskey(ContractionPlans, dict_key)
- dict_val = ContractionPlans[dict_key]
- algo = dict_val
- #plan = dict_val[2]
- Cdata = cuTENSOR.contraction!(
- α,
- Adata,
- Vector{Char}(ctainds),
- id_op,
- Bdata,
- Vector{Char}(ctbinds),
- id_op,
- β,
- Cdata,
- Vector{Char}(ctcinds),
- id_op,
- id_op;
- algo=algo,
- )
- else
- # loop through all algos
- # pick the fastest one
- # store that plan!
- best_time = 1e6
- best_plan = nothing
- best_algo = nothing
- max_algos = Ref{Int32}(C_NULL)
- cuTENSOR.cutensorContractionMaxAlgos(max_algos)
- # fix once the other options are documented
- #algos = collect(Cint(cuTENSOR.CUTENSOR_ALGO_GETT):Cint(max_algos[] - 1))
- algos = collect(Cint(cuTENSOR.CUTENSOR_ALGO_GETT):Cint(-1))
- for algo in reverse(algos)
- try
- Cdata, this_time, bytes, gctime, memallocs = @timed cuTENSOR.contraction!(
- α,
- Adata,
- Vector{Char}(ctainds),
- id_op,
- Bdata,
- Vector{Char}(ctbinds),
- id_op,
- β,
- Cdata,
- Vector{Char}(ctcinds),
- id_op,
- id_op;
- algo=cuTENSOR.cutensorAlgo_t(algo),
- )
- if this_time < best_time
- best_time = this_time
- #best_plan = this_plan
- best_algo = cuTENSOR.cutensorAlgo_t(algo)
- end
- catch err
- @warn "Algorithm $algo not supported"
- end
- end
- ContractionPlans[dict_key] = best_algo
- end
- else
- Cdata = cuTENSOR.contraction!(
- α,
- Adata,
- Vector{Char}(ctainds),
- id_op,
- Bdata,
- Vector{Char}(ctbinds),
- id_op,
- β,
- Cdata,
- Vector{Char}(ctcinds),
- id_op,
- id_op,
- )
- end
- return parent(Cdata)
-end
-
-function Base.:+(B::CuDenseTensor, A::CuDenseTensor)
- opC = cuTENSOR.CUTENSOR_OP_IDENTITY
- opA = cuTENSOR.CUTENSOR_OP_IDENTITY
- opAC = cuTENSOR.CUTENSOR_OP_ADD
- Ais = inds(A)
- Bis = inds(B)
- ind_dict = Vector{Index}()
- for (idx, i) in enumerate(inds(A))
- push!(ind_dict, i)
- end
- Adata = data(store(A))
- Bdata = data(store(B))
- reshapeBdata = reshape(Bdata, dims(Bis)...)
- reshapeAdata = reshape(Adata, dims(Ais)...)
- ctainds = zeros(Int, length(Ais))
- ctbinds = zeros(Int, length(Bis))
- for (ii, ia) in enumerate(Ais)
- ctainds[ii] = findfirst(x -> x == ia, ind_dict)
- end
- for (ii, ib) in enumerate(Bis)
- ctbinds[ii] = findfirst(x -> x == ib, ind_dict)
- end
- ctcinds = copy(ctbinds)
- C = CUDA.zeros(eltype(Bdata), dims(Bis)...)
- cuTENSOR.elementwiseBinary!(
- one(eltype(Adata)),
- reshapeAdata,
- ctainds,
- opA,
- one(eltype(Bdata)),
- reshapeBdata,
- ctbinds,
- opC,
- C,
- ctcinds,
- opAC,
- )
- copyto!(data(store(B)), vec(C))
- return B
-end
-
-function Base.:+(B::CuDense, Bis::IndexSet, A::CuDense, Ais::IndexSet)
- opA = cuTENSOR.CUTENSOR_OP_IDENTITY
- opC = cuTENSOR.CUTENSOR_OP_IDENTITY
- opAC = cuTENSOR.CUTENSOR_OP_ADD
- ind_dict = Vector{Index}()
- for (idx, i) in enumerate(Ais)
- push!(ind_dict, i)
- end
- Adata = data(A)
- Bdata = data(B)
- reshapeBdata = reshape(Bdata, dims(Bis)...)
- reshapeAdata = reshape(Adata, dims(Ais)...)
- ctainds = zeros(Int, length(Ais))
- ctbinds = zeros(Int, length(Bis))
- for (ii, ia) in enumerate(Ais)
- ctainds[ii] = findfirst(x -> x == ia, ind_dict)
- end
- for (ii, ib) in enumerate(Bis)
- ctbinds[ii] = findfirst(x -> x == ib, ind_dict)
- end
- ctcinds = copy(ctbinds)
- C = CUDA.zeros(eltype(Bdata), dims(Bis)...)
- Cis = Bis
- C = cuTENSOR.elementwiseBinary!(
- 1, reshapeAdata, ctainds, opA, 1, reshapeBdata, ctbinds, opC, C, ctcinds, opAC
- )
- copyto!(data(B), vec(C))
- return C
-end
-
-function Base.:-(B::CuDenseTensor, A::CuDenseTensor)
- opC = cuTENSOR.CUTENSOR_OP_IDENTITY
- opA = cuTENSOR.CUTENSOR_OP_IDENTITY
- opAC = cuTENSOR.CUTENSOR_OP_ADD
- Ais = inds(A)
- Bis = inds(B)
- ind_dict = Vector{Index}()
- for (idx, i) in enumerate(inds(A))
- push!(ind_dict, i)
- end
- Adata = data(store(A))
- Bdata = data(store(B))
- reshapeBdata = reshape(Bdata, dims(Bis)...)
- reshapeAdata = reshape(Adata, dims(Ais)...)
- ctainds = zeros(Int, length(Ais))
- ctbinds = zeros(Int, length(Bis))
- for (ii, ia) in enumerate(Ais)
- ctainds[ii] = findfirst(x -> x == ia, ind_dict)
- end
- for (ii, ib) in enumerate(Bis)
- ctbinds[ii] = findfirst(x -> x == ib, ind_dict)
- end
- ctcinds = copy(ctbinds)
- C = CUDA.zeros(eltype(Bdata), dims(Bis))
- cuTENSOR.elementwiseBinary!(
- -one(eltype(Adata)),
- reshapeAdata,
- ctainds,
- opA,
- one(eltype(Bdata)),
- reshapeBdata,
- ctbinds,
- opC,
- C,
- ctcinds,
- opAC,
- )
- copyto!(data(store(B)), vec(C))
- return B
-end
-
-function Base.:-(A::CuDense, Ais::IndexSet, B::CuDense, Bis::IndexSet)
- opA = cuTENSOR.CUTENSOR_OP_IDENTITY
- opC = cuTENSOR.CUTENSOR_OP_IDENTITY
- opAC = cuTENSOR.CUTENSOR_OP_ADD
- ind_dict = Vector{Index}()
- for (idx, i) in enumerate(Ais)
- push!(ind_dict, i)
- end
- Adata = data(A)
- Bdata = data(B)
- reshapeBdata = reshape(Bdata, dims(Bis)...)
- reshapeAdata = reshape(Adata, dims(Ais)...)
- ctainds = zeros(Int, length(Ais))
- ctbinds = zeros(Int, length(Bis))
- for (ii, ia) in enumerate(Ais)
- ctainds[ii] = findfirst(x -> x == ia, ind_dict)
- end
- for (ii, ib) in enumerate(Bis)
- ctbinds[ii] = findfirst(x -> x == ib, ind_dict)
- end
- ctcinds = copy(ctbinds)
- C = CUDA.zeros(eltype(Bdata), dims(Bis)...)
- Cis = Bis
- C = cuTENSOR.elementwiseBinary!(
- one(eltype(Adata)),
- reshapeAdata,
- ctainds,
- opA,
- -one(eltype(Bdata)),
- reshapeBdata,
- ctbinds,
- opC,
- C,
- ctcinds,
- opAC,
- )
- copyto!(data(B), vec(C))
- return C
-end
-
-function Base.permute!(B::CuDenseTensor, A::CuDenseTensor)
- Ais = inds(A)
- Bis = inds(B)
- ind_dict = Vector{Index}()
- for (idx, i) in enumerate(Ais)
- push!(ind_dict, i)
- end
- Adata = data(store(A))
- Bdata = data(store(B))
- reshapeBdata = reshape(Bdata, dims(Bis)...)
- reshapeAdata = reshape(Adata, dims(Ais)...)
- if ndims(A) < 40 # use CUTENSOR
- ctainds = zeros(Int, length(Ais))
- ctbinds = zeros(Int, length(Bis))
- for (ii, ia) in enumerate(Ais)
- ctainds[ii] = findfirst(x -> x == ia, ind_dict)
- end
- for (ii, ib) in enumerate(Bis)
- ctbinds[ii] = findfirst(x -> x == ib, ind_dict)
- end
- cuTENSOR.permutation!(
- one(eltype(Adata)),
- reshapeAdata,
- Vector{Char}(ctainds),
- reshapeBdata,
- Vector{Char}(ctbinds),
- )
- else # use GPUArrays
- perm = Int[]
- for aix in Ais
- b_pos = findfirst(bix -> bix == aix, Bis)
- push!(perm, b_pos)
- end
- @assert isperm(perm)
- permutedims!(reshapeBdata, reshapeAdata, invperm(perm))
- end
- return Tensor(Dense(vec(reshapeBdata)), inds(B))
-end
-
-function Base.permute!(B::CuDense, Bis::IndexSet, A::CuDense, Ais::IndexSet)
- ind_dict = Vector{Index}()
- for (idx, i) in enumerate(Ais)
- push!(ind_dict, i)
- end
- Adata = data(A)
- Bdata = data(B)
- reshapeBdata = reshape(Bdata, dims(Bis)...)
- reshapeAdata = reshape(Adata, dims(Ais)...)
- ctainds = zeros(Int, length(Ais))
- ctbinds = zeros(Int, length(Bis))
- for (ii, ia) in enumerate(Ais)
- ctainds[ii] = findfirst(x -> x == ia, ind_dict)
- end
- for (ii, ib) in enumerate(Bis)
- ctbinds[ii] = findfirst(x -> x == ib, ind_dict)
- end
-
- cuTENSOR.permutation!(
- one(eltype(Adata)),
- reshapeAdata,
- Vector{Char}(ctainds),
- reshapeBdata,
- Vector{Char}(ctbinds),
- )
- return Tensor(Dense(reshapeBdata), Tuple(Bis))
-end
-
-Base.:/(A::CuDenseTensor, x::Number) = A * inv(x)
diff --git a/ITensorGPU/src/tensor/cudiag.jl b/ITensorGPU/src/tensor/cudiag.jl
deleted file mode 100644
index 0e27bebd67..0000000000
--- a/ITensorGPU/src/tensor/cudiag.jl
+++ /dev/null
@@ -1,228 +0,0 @@
-export CuDiag
-
-const CuDiag{ElT,VecT} = Diag{ElT,VecT} where {VecT<:CuArray{ElT}}
-const NonuniformCuDiagTensor{ElT,N,StoreT,IndsT} =
- Tensor{ElT,N,StoreT,IndsT} where {StoreT<:CuDiag}
-
-CuArray(D::NonuniformCuDiagTensor) = CuArray(dense(D))
-
-function NDTensors.dense(T::NonuniformCuDiagTensor{ElT}) where {ElT}
- R_data = CUDA.zeros(ElT, dim(inds(T)))
- diag_inds = diagind(reshape(R_data, dims(inds(T))), 0)
- R_data[diag_inds] .= data(store(T))
- return Tensor(Dense(R_data), inds(T))
-end
-
-function Base.promote_rule(
-  ::Type{Diag{ElT2,VecT2}}, ::Type{DenseT1}
-) where {DenseT1<:CuDense,ElT2,VecT2<:Number}
-  return promote_type(DenseT1, ElT2)
-end
-
-function Base.promote_rule(
- ::Type{<:Tensor{ElT1,N1,StoreT1}}, ::Type{<:Tensor{ElT2,N2,StoreT2}}
-) where {ElT1,ElT2,N1,N2,StoreT1<:CuDense,StoreT2<:UniformDiag}
- ElT3 = promote_type(ElT1, ElT2)
- return Tensor{promote_type(ElT1, ElT2),N3,CuDense{ElT3,CuVector{ElT3}}} where {N3}
-end
-function Base.promote_rule(
- ::Type{<:Tensor{ElT1,N1,StoreT1}}, ::Type{<:Tensor{ElT2,N2,StoreT2}}
-) where {ElT1,ElT2,N1,N2,StoreT1<:UniformDiag,StoreT2<:CuDense}
- ElT3 = promote_type(ElT1, ElT2)
- return Tensor{promote_type(ElT1, ElT2),N3,CuDense{ElT3,CuVector{ElT3}}} where {N3}
-end
-
-function Base.promote_rule(
- ::Type{<:UniformDiag{ElT1}}, ::Type{<:NonuniformDiag{ElT2,VecT2}}
-) where {ElT1,ElT2,VecT2<:CuArray}
- ElT3 = promote_type(ElT1, ElT2)
- return NonuniformDiag{ElT3,CuVector{ElT3}}
-end
-function Base.promote_rule(
- ::Type{<:NonuniformDiag{ElT2,VecT2}}, ::Type{<:UniformDiag{ElT1}}
-) where {ElT1,ElT2,VecT2<:CuArray}
- ElT3 = promote_type(ElT1, ElT2)
- return NonuniformDiag{ElT3,CuVector{ElT3}}
-end
-function Base.promote_rule(
- ::Type{DenseT1}, ::Type{Diag{ElT2,VecT2}}
-) where {DenseT1<:CuDense,ElT2,VecT2<:Number}
- return promote_type(DenseT1, ElT2)
-end
-
-function contraction_output_type(
- TensorT1::Type{<:NonuniformCuDiagTensor}, TensorT2::Type{<:CuDenseTensor}, IndsR
-)
- return similartype(promote_type(TensorT1, TensorT2), IndsR)
-end
-function contraction_output_type(
- TensorT1::Type{<:CuDenseTensor}, TensorT2::Type{<:NonuniformCuDiagTensor}, IndsR
-)
- return contraction_output_type(TensorT2, TensorT1, IndsR)
-end
-
-function contraction_output_type(
- TensorT1::Type{<:DiagTensor{<:Number,<:CuDiag}},
- TensorT2::Type{<:DiagTensor{<:Number,<:CuDiag}},
- IndsR::Type,
-)
- return similartype(promote_type(TensorT1, TensorT2), IndsR)
-end
-
-function contraction_output_type(
- TensorT1::Type{<:UniformDiagTensor},
- TensorT2::Type{<:DiagTensor{<:Number,<:CuDiag}},
- IndsR::Type,
-)
- return similartype(promote_type(TensorT1, TensorT2), IndsR)
-end
-function contraction_output_type(
- TensorT1::Type{<:DiagTensor{<:Number,<:CuDiag}},
- TensorT2::Type{<:UniformDiagTensor},
- IndsR::Type,
-)
- return contraction_output_type(TensorT2, TensorT1, IndsR)
-end
-
-function zero_contraction_output(
- T1::UniformDiagTensor{ElT1,N1}, T2::CuDenseTensor{ElT2,N2}, indsR::IndsR
-) where {ElT1,N1,ElT2,N2,IndsR}
- ElT3 = promote_type(ElT1, ElT2)
- dat = CUDA.zeros(ElT3, dim(indsR))
- return Tensor(Dense(dat), indsR)
-end
-function zero_contraction_output(
- T2::CuDenseTensor{ElT2,N2}, T1::UniformDiagTensor{ElT1,N1}, indsR::IndsR
-) where {ElT1,N1,ElT2,N2,IndsR}
- return zero_contraction_output(T1, T2, indsR)
-end
-
-function zero_contraction_output(
- T1::TensorT1, T2::TensorT2, indsR
-) where {TensorT1<:NonuniformDiagTensor,TensorT2<:NonuniformCuDiagTensor}
- ElT3 = promote_type(eltype(TensorT1), eltype(TensorT2))
- dat = CUDA.zeros(ElT3, length(data(store(T2))))
- return Tensor(Diag(dat), indsR)
-end
-
-function zero_contraction_output(
- T1::TensorT1, T2::TensorT2, indsR
-) where {TensorT1<:UniformDiagTensor,TensorT2<:NonuniformCuDiagTensor}
- ElT3 = promote_type(eltype(TensorT1), eltype(TensorT2))
- dat = CUDA.zeros(ElT3, length(data(store(T2))))
- return Tensor(Diag(dat), indsR)
-end
-function zero_contraction_output(
- T1::TensorT1, T2::TensorT2, indsR
-) where {TensorT2<:UniformDiagTensor,TensorT1<:NonuniformCuDiagTensor}
- return zero_contraction_output(T2, T1, indsR)
-end
-
-function zero_contraction_output(
- T1::TensorT1, T2::TensorT2, indsR
-) where {TensorT1<:DiagTensor,TensorT2<:CuDenseTensor}
- ElT3 = promote_type(eltype(TensorT1), eltype(TensorT2))
- dat = CUDA.zeros(ElT3, dim(indsR))
- return Tensor(Dense(dat), indsR)
-end
-function zero_contraction_output(
- T1::TensorT1, T2::TensorT2, indsR
-) where {TensorT2<:DiagTensor,TensorT1<:CuDenseTensor}
- return zero_contraction_output(T2, T1, indsR)
-end
-
-function contraction_output(
- T1::UniformDiagTensor, T2::DiagTensor{Elt2,<:CuDiag}, indsR
-) where {Elt2}
- return zero_contraction_output(T1, T2, indsR)
-end
-function contraction_output(
- T1::DiagTensor{Elt1,<:CuDiag}, T2::UniformDiagTensor, indsR
-) where {Elt1}
- return contraction_output(T2, T1, indsR)
-end
-
-function contract!(
- C::CuDenseTensor{<:Number,NC},
- Clabels,
- A::UniformDiagTensor{<:Number,NA},
- Alabels,
- B::CuDenseTensor{<:Number,NB},
- Blabels,
-) where {NC,NA,NB}
- Bstore = data(store(B))
- Astore = data(store(A))
- Cstore = data(store(C))
- return copyto!(Cstore, Astore .* Bstore)
-end
-
-function contract!(
- C::CuDenseTensor{<:Number,NC},
- Clabels,
- A::CuDenseTensor{<:Number,NA},
- Alabels,
- B::UniformDiagTensor{<:Number,NB},
- Blabels,
-) where {NC,NA,NB}
- return contract!(C, Clabels, B, Blabels, A, Alabels)
-end
-
-function contract!(
- C::NonuniformCuDiagTensor{EltC,NC,IndsC},
- Clabels,
- A::UniformDiagTensor{EltA,NA,IndsA},
- Alabels,
- B::NonuniformCuDiagTensor{EltB,NB,IndsB},
- Blabels,
-) where {EltC<:Number,EltB<:Number,EltA<:Number,NC,NB,NA,IndsA,IndsB,IndsC}
- Bstore = data(store(B))
- Astore = data(store(A))
- Cstore = data(store(C))
- return copyto!(Cstore, Astore .* Bstore)
-end
-
-function contract!(
- C::NonuniformCuDiagTensor{EltC,NC,IndsC},
- Clabels,
- B::NonuniformCuDiagTensor{EltB,NB,IndsB},
- Blabels,
- A::UniformDiagTensor{EltA,NA,IndsA},
- Alabels,
-) where {EltC<:Number,EltB<:Number,EltA<:Number,NC,NB,NA,IndsA,IndsB,IndsC}
- Bstore = data(store(B))
- Astore = data(store(A))
- Cstore = data(store(C))
- return copyto!(Cstore, Astore .* Bstore)
-end
-
-function contract!(
- C::NonuniformCuDiagTensor{EltC,NC,IndsC},
- Clabels,
- B::NonuniformCuDiagTensor{EltB,NB,IndsB},
- Blabels,
- A::NonuniformCuDiagTensor{EltA,NA,IndsA},
- Alabels,
-) where {EltC<:Number,EltB<:Number,EltA<:Number,NC,NB,NA,IndsA,IndsB,IndsC}
- Bstore = data(store(B))
- Astore = data(store(A))
- Cstore = data(store(C))
- return copyto!(Cstore, Astore .* Bstore)
-end
-
-# Dense * NonuniformCuDiag
-function contract!(
- C::CuDenseTensor, Clabels, A::NonuniformCuDiagTensor, Alabels, B::CuDenseTensor, Blabels
-)
- Astore = data(store(A))
- newAstore = CUDA.zeros(eltype(A), dims(inds(A))[1], dims(inds(A))[2])
- adi = diagind(newAstore, 0)
- newAstore[adi] .= Astore[:]
- newA = Tensor(Dense(vec(newAstore)), inds(A))
- return contract!(C, Clabels, newA, Alabels, B, Blabels)
-end
-
-function contract!(
- C::CuDenseTensor, Clabels, A::CuDenseTensor, Alabels, B::NonuniformCuDiagTensor, Blabels
-)
- return contract!(C, Clabels, B, Blabels, A, Alabels)
-end
diff --git a/ITensorGPU/src/tensor/culinearalgebra.jl b/ITensorGPU/src/tensor/culinearalgebra.jl
deleted file mode 100644
index c64a3b06d3..0000000000
--- a/ITensorGPU/src/tensor/culinearalgebra.jl
+++ /dev/null
@@ -1,159 +0,0 @@
-
-#
-# Linear Algebra of order 2 Tensors
-#
-# Even though CuDenseTensor{_,2} is strided
-# and passable to BLAS/LAPACK, it cannot
-# be made <: StridedArray
-
-function Base.:*(
- T1::Tensor{ElT1,2,StoreT1,IndsT1}, T2::Tensor{ElT2,2,StoreT2,IndsT2}
-) where {ElT1,StoreT1<:CuDense,IndsT1,ElT2,StoreT2<:CuDense,IndsT2}
- RM = matrix(T1) * matrix(T2)
- indsR = IndsT1(ind(T1, 1), ind(T2, 2))
- return tensor(Dense(vec(RM)), indsR)
-end
-#= FIX ME
-function LinearAlgebra.exp(T::CuDenseTensor{ElT,2}) where {ElT,IndsT}
- expTM = exp(matrix(T))
- return tensor(Dense(vec(expTM)),inds(T))
-end
-
-function expHermitian(T::CuDenseTensor{ElT,2}) where {ElT,IndsT}
- # exp(::Hermitian/Symmetric) returns Hermitian/Symmetric,
- # so extract the parent matrix
- expTM = parent(exp(Hermitian(matrix(T))))
- return tensor(Dense(vec(expTM)),inds(T))
-end
-=#
-
-# svd of an order-2 tensor
-function LinearAlgebra.svd(T::CuDenseTensor{ElT,2,IndsT}; kwargs...) where {ElT,IndsT}
- maxdim::Int = get(kwargs, :maxdim, minimum(dims(T)))
- mindim::Int = get(kwargs, :mindim, 1)
- cutoff::Float64 = get(kwargs, :cutoff, 0.0)
- absoluteCutoff::Bool = get(kwargs, :absoluteCutoff, false)
- doRelCutoff::Bool = get(kwargs, :doRelCutoff, true)
- fastSVD::Bool = get(kwargs, :fastSVD, false)
- # Safer to use `Array`, which ensures
- # no views/aliases are made, since
- # we are using in-place `CUSOLVER.svd!` below.
- aT = Array(T)
- @timeit "CUSOLVER svd" begin
- MU, MS, MV = CUSOLVER.svd!(aT)
- end
- if !(MV isa CuMatrix)
- # Materialize any array wrappers,
- # for now, since `Adjoint` wrappers
- # seem to cause issues throughout
- # CUDA.jl, for example with slicing,
- # reshaping and then copying, etc.
- # TODO: Fix this in a more robust way.
- MV = copy(MV)
- end
- # For consistency with the CPU version in
- # ITensors.jl/NDTensors/src/linearalgebra.jl `svd`,
- # we need to apply `conj!(MV)`
- conj!(MV)
- P = MS .^ 2
- truncerr, docut, P = truncate!(
- P;
- mindim=mindim,
- maxdim=maxdim,
- cutoff=cutoff,
- absoluteCutoff=absoluteCutoff,
- doRelCutoff=doRelCutoff,
- )
- spec = Spectrum(P, truncerr)
- dS = length(P)
- if dS < length(MS)
- MU = MU[:, 1:dS]
- MS = MS[1:dS]
- MV = MV[:, 1:dS]
- end
-
- # Make the new indices to go onto U and V
- u = eltype(IndsT)(dS)
- v = eltype(IndsT)(dS)
- Uinds = IndsT((ind(T, 1), u))
- Sinds = IndsT((u, v))
- Vinds = IndsT((ind(T, 2), v))
- U = tensor(Dense(vec(MU)), Uinds)
- Sdata = CUDA.zeros(ElT, dS * dS)
- dsi = diagind(reshape(Sdata, dS, dS), 0)
- Sdata[dsi] = MS
- S = tensor(Dense(Sdata), Sinds)
- V = tensor(Dense(vec(MV)), Vinds)
- return U, S, V, spec
-end
-
-function LinearAlgebra.eigen(
- T::Hermitian{ElT,<:CuDenseTensor{ElT,2,IndsT}}; kwargs...
-) where {ElT<:Union{Real,Complex},IndsT}
- ispossemidef::Bool = get(kwargs, :ispossemidef, false)
- maxdim::Int = get(kwargs, :maxdim, minimum(dims(T)))
- mindim::Int = get(kwargs, :mindim, 1)
- cutoff::Float64 = get(kwargs, :cutoff, 0.0)
- absoluteCutoff::Bool = get(kwargs, :absoluteCutoff, false)
- doRelCutoff::Bool = get(kwargs, :doRelCutoff, true)
- @timeit "CUSOLVER eigen" begin
- local DM, UM
- if ElT <: Complex
- DM, UM = CUSOLVER.heevd!('V', 'U', matrix(parent(T)))
- else
- DM, UM = CUSOLVER.syevd!('V', 'U', matrix(parent(T)))
- end
- end
- DM_ = reverse(DM)
- @timeit "truncate" begin
- truncerr, docut, DM = truncate!(
- DM_;
- maxdim=maxdim,
- cutoff=cutoff,
- absoluteCutoff=absoluteCutoff,
- doRelCutoff=doRelCutoff,
- )
- end
- spec = Spectrum(DM, truncerr)
- dD = length(DM)
- dV = reverse(UM; dims=2)
- if dD < size(dV, 2)
- dV = CuMatrix(dV[:, 1:dD])
- end
- # Make the new indices to go onto U and V
- l = eltype(IndsT)(dD)
- r = eltype(IndsT)(dD)
- Vinds = IndsT((dag(ind(T, 2)), dag(r)))
- Dinds = IndsT((l, dag(r)))
- U = tensor(Dense(vec(dV)), Vinds)
- D = tensor(Diag(real.(DM)), Dinds)
- return D, U, spec
-end
-
-function LinearAlgebra.qr(T::CuDenseTensor{ElT,2,IndsT}; kwargs...) where {ElT,IndsT}
- QM, RM = qr(matrix(T))
- # Make the new indices to go onto Q and R
- q, r = inds(T)
- q = dim(q) < dim(r) ? sim(q) : sim(r)
- Qinds = IndsT((ind(T, 1), q))
- Rinds = IndsT((q, ind(T, 2)))
- QM = CuMatrix(QM)
- Q = tensor(Dense(vec(QM)), Qinds)
- R = tensor(Dense(vec(RM)), Rinds)
- return Q, R
-end
-
-function polar(T::CuDenseTensor{ElT,2,IndsT}) where {ElT,IndsT}
- QM, RM = polar(matrix(T))
- dim = size(QM, 2)
- # Make the new indices to go onto Q and R
- q = eltype(IndsT)(dim)
- # TODO: use push/pushfirst instead of a constructor
- # call here
- Qinds = IndsT((ind(T, 1), q))
- Rinds = IndsT((q, ind(T, 2)))
- Q = tensor(Dense(vec(QM)), Qinds)
- R = tensor(Dense(vec(RM)), Rinds)
- return Q, R
-end
diff --git a/ITensorGPU/src/tensor/cutruncate.jl b/ITensorGPU/src/tensor/cutruncate.jl
deleted file mode 100644
index 108bc1e0be..0000000000
--- a/ITensorGPU/src/tensor/cutruncate.jl
+++ /dev/null
@@ -1,87 +0,0 @@
-import LinearAlgebra: BlasReal
-
-function truncate!(P::CuVector{T}; kwargs...)::Tuple{T,T,CuVector{T}} where {T<:BlasReal}
- maxdim::Int = min(get(kwargs, :maxdim, length(P)), length(P))
- mindim::Int = min(get(kwargs, :mindim, 1), maxdim)
- cutoff::Float64 = get(kwargs, :cutoff, 0.0)
- absoluteCutoff::Bool = get(kwargs, :absoluteCutoff, false)
- doRelCutoff::Bool = get(kwargs, :doRelCutoff, true)
- origm = length(P)
- docut = zero(T)
-
- maxP = maximum(P)
- if maxP == zero(T)
- P = CUDA.zeros(T, 1)
- return zero(T), zero(T), P
- end
- if origm == 1
- docut = maxP / 2
- return zero(T), docut, P[1:1]
- end
- @timeit "setup rP" begin
- #Zero out any negative weight
- #neg_z_f = (!signbit(x) ? x : 0.0)
- rP = map(x -> !signbit(x) ? Float64(x) : 0.0, P)
- n = origm
- end
- @timeit "handle cutoff" begin
- if absoluteCutoff
- #Test if individual prob. weights fall below cutoff
- #rather than using *sum* of discarded weights
- sub_arr = rP .- Float64(cutoff)
- err_rP = sub_arr ./ abs.(sub_arr)
- flags = reinterpret(Float64, (signbit.(err_rP) .<< 1 .& 2) .<< 61)
- cut_ind = CUDA.CUBLAS.iamax(length(err_rP), err_rP .* flags) - 1
- if cut_ind > 0
- n = min(maxdim, cut_ind)
- n = max(n, mindim)
- else
- n = maxdim
- end
- truncerr = T(sum(rP[(n + 1):end]))
- else
- truncerr = zero(T)
- scale = one(T)
- @timeit "find scale" begin
- if doRelCutoff
- scale = sum(P)
- scale = scale > zero(T) ? scale : one(T)
- end
- end
- #Truncate until the *sum* of discarded probability
- #weight reaches the cutoff (or m == mindim)
- csum_rp = Float64.(CUDA.reverse(CUDA.cumsum(CUDA.reverse(rP))))
- sub_arr = csum_rp .- Float64(cutoff * scale)
- err_rP = sub_arr ./ abs.(sub_arr)
- flags = reinterpret(Float64, (signbit.(err_rP) .<< 1 .& 2) .<< 61)
- cut_ind = (CUDA.CUBLAS.iamax(length(err_rP), err_rP .* flags) - 1)
- if cut_ind > 0
- n = min(maxdim, cut_ind)
- n = max(n, mindim)
- else
- n = maxdim
- end
- truncerr = sum(rP[(n + 1):end])
- if scale == zero(T)
- truncerr = zero(T)
- else
- truncerr /= scale
- end
- end
- end
- if n < 1
- n = 1
- end
- if n < origm
- hP = collect(P)
- docut = (hP[n] + hP[n + 1]) / 2
- if abs(hP[n] - hP[n + 1]) < 1E-3 * hP[n]
- docut += T(1E-3) * hP[n]
- end
- end
- @timeit "setup return" begin
- rinds = 1:n
- rrP = P[rinds]
- end
- return truncerr, docut, rrP
-end
diff --git a/ITensorGPU/src/tensor/dense.jl b/ITensorGPU/src/tensor/dense.jl
deleted file mode 100644
index 04e1cb8ac6..0000000000
--- a/ITensorGPU/src/tensor/dense.jl
+++ /dev/null
@@ -1,285 +0,0 @@
-function contract!!(
- R::DenseTensor{<:Number,NR},
- labelsR::NTuple{NR},
- T1::DenseTensor{<:Number,N1},
- labelsT1::NTuple{N1},
- T2::DenseTensor{<:Number,N2},
- labelsT2::NTuple{N2},
- α::Number=1,
- β::Number=0,
-) where {NR,N1,N2}
- if N1 == 0
- (α ≠ 1 || β ≠ 0) &&
- error("contract!! not yet implemented for scalar ITensor with non-trivial α and β")
- # TODO: replace with an add! function?
- # What about doing `R .= T1[] .* PermutedDimsArray(T2,perm)`?
- perm = getperm(labelsR, labelsT2)
- R = permutedims!!(R, T2, perm, (r, t2) -> T1[] * t2)
- elseif N2 == 0
- (α ≠ 1 || β ≠ 0) &&
- error("contract!! not yet implemented for scalar ITensor with non-trivial α and β")
- perm = getperm(labelsR, labelsT1)
- R = permutedims!!(R, T1, perm, (r, t1) -> T2[] * t1)
- elseif N1 + N2 == NR
- (α ≠ 1 || β ≠ 0) && error(
- "contract!! not yet implemented for outer product tensor contraction with non-trivial α and β",
- )
- # TODO: permute T1 and T2 appropriately first (can be more efficient
- # then permuting the result of T1⊗T2)
- # TODO: implement the in-place version directly
- R = outer!!(R, T1, T2)
- labelsRp = (labelsT1..., labelsT2...)
- perm = getperm(labelsR, labelsRp)
- if !is_trivial_permutation(perm)
- R = permutedims!!(R, copy(R), perm)
- end
- else
- #if dim(T1) > 2^13 && dim(T2) > 2^13
- # R = _big_contract!!(R,labelsR,T1,labelsT1,T2,labelsT2, α, β)
- #else
- if α ≠ 1 || β ≠ 0
- R = _contract!!(R, labelsR, T1, labelsT1, T2, labelsT2, α, β)
- else
- R = _contract!!(R, labelsR, T1, labelsT1, T2, labelsT2)
- end
- #end
- end
- return R
-end
-
-function _big_contract!!(
- R::DenseTensor{<:Number,NR},
- labelsR,
- T1::DenseTensor{ElT1,N1},
- labelsT1,
- T2::DenseTensor{ElT2,N2},
- labelsT2,
- α::Number=1,
- β::Number=0,
-) where {ElT1,ElT2,N1,N2,NR}
- props = ContractionProperties(labelsT1, labelsT2, labelsR)
- compute_contraction_properties!(props, T1, T2, R)
- _big_contract!(R, T1, T2, props, α, β)
- #val, t, _ = @timed _blasmg_contract!(R,T1,T2,props,α,β)
- return R
-end
-
-function _big_contract!(
- CT::DenseTensor{El,NC},
- AT::DenseTensor{El,NA},
- BT::DenseTensor{El,NB},
- props::ContractionProperties,
- α::Number=one(El),
- β::Number=zero(El),
-) where {El,NC,NA,NB}
- Ainds = inds(AT)
- Adims = dims(Ainds)
- Binds = inds(BT)
- Bdims = dims(Binds)
- Cinds = inds(CT)
- Cdims = dims(Cinds)
- Adata = reshape(data(store(AT)), Adims)
- Bdata = reshape(data(store(BT)), Bdims)
- Cdata = reshape(data(store(CT)), Cdims)
- contracted = commoninds(Ainds, Binds)
- A_only = uniqueinds(Ainds, Binds)
- B_only = uniqueinds(Binds, Ainds)
- ind_dict = Vector{Index}()
- for (idx, i) in enumerate(contracted)
- push!(ind_dict, i)
- end
- if length(A_only) > 0
- for (idx, i) in enumerate(A_only)
- push!(ind_dict, i)
- end
- end
- if length(B_only) > 0
- for (idx, i) in enumerate(B_only)
- push!(ind_dict, i)
- end
- end
- ctainds = zeros(Int, length(Ainds))
- ctbinds = zeros(Int, length(Binds))
- ctcinds = zeros(Int, length(Cinds))
- for (ii, ia) in enumerate(Ainds)
- ctainds[ii] = findfirst(x -> x == ia, ind_dict)
- end
- for (ii, ib) in enumerate(Binds)
- ctbinds[ii] = findfirst(x -> x == ib, ind_dict)
- end
- for (ii, ic) in enumerate(Cinds)
- ctcinds[ii] = findfirst(x -> x == ic, ind_dict)
- end
- id_op = cuTENSOR.CUTENSOR_OP_IDENTITY
- dict_key = ""
- for cc in zip(ctcinds, Cdims)
- dict_key *= string(cc[1]) * "," * string(cc[2]) * ","
- end
- for aa in zip(ctainds, Adims)
- dict_key *= string(aa[1]) * "," * string(aa[2]) * ","
- end
- for bb in zip(ctbinds, Bdims)
- dict_key *= string(bb[1]) * "," * string(bb[2]) * ","
- end
- #=synchronize()
- if haskey(ENV, "CUTENSOR_AUTOTUNE") && tryparse(Int, ENV["CUTENSOR_AUTOTUNE"]) == 1
- if haskey(ContractionPlans, dict_key)
- dict_val = ContractionPlans[dict_key]
- algo = dict_val
- Cdata = cuTENSOR.contraction!(α, Adata, Vector{Char}(ctainds), id_op, Bdata, Vector{Char}(ctbinds), id_op, β, Cdata, Vector{Char}(ctcinds), id_op, id_op; algo=algo)
- synchronize()
- else
- # loop through all algos
- # pick the fastest one
- # store that plan!
- best_time = 1e6
- best_plan = nothing
- best_algo = nothing
- max_algos = Ref{Int32}(C_NULL)
- cuTENSOR.cutensorContractionMaxAlgos(max_algos)
- # fix once the other options are documented
- #algos = collect(Cint(cuTENSOR.CUTENSOR_ALGO_GETT):Cint(max_algos[] - 1))
- algos = collect(Cint(cuTENSOR.CUTENSOR_ALGO_GETT):Cint(-1))
- for algo in reverse(algos)
- try
- Cdata, this_time, bytes, gctime, memallocs = @timed cuTENSOR.contraction!(α, Adata, Vector{Char}(ctainds), id_op, Bdata, Vector{Char}(ctbinds), id_op, β, Cdata, Vector{Char}(ctcinds), id_op, id_op; algo=cuTENSOR.cutensorAlgo_t(algo))
- synchronize()
- if this_time < best_time
- best_time = this_time
- best_algo = cuTENSOR.cutensorAlgo_t(algo)
- end
- catch err
- @warn "Algorithm $algo not supported"
- end
- end
- ContractionPlans[dict_key] = best_algo
- end
- else
- =#
- Cdata .= zero(eltype(Cdata))
- #@show size(Adata)
- #@show size(Bdata)
- #@show size(Cdata)
- @assert !any(isnan.(Adata))
- @assert !any(isnan.(Bdata))
- @assert !any(isnan.(Cdata))
- #@show ctainds
- #@show ctbinds
- #@show ctcinds
- #flush(stdout)
- CUDA.Mem.pin(Adata)
- CUDA.Mem.pin(Bdata)
- CUDA.Mem.pin(Cdata)
- synchronize()
- #AC = CuArray(Adata)
- #BC = CuArray(Bdata)
- #CC = CuArray(Cdata)
- @assert !any(isnan.(Adata))
- @assert !any(isnan.(Bdata))
- @assert !any(isnan.(Cdata))
- #@assert !any(isnan.(AC))
- #@assert !any(isnan.(BC))
- #@assert !any(isnan.(CC))
- #CC = cuTENSOR.contraction!(α, AC, ctainds, id_op, BC, ctbinds, id_op, β, CC, ctcinds, id_op, id_op)
- #synchronize()
- #@assert !any(isnan.(AC))
- #@assert !any(isnan.(BC))
- #@assert !any(isnan.(CC))
- Cdata = cuTENSOR.contraction!(
- α, Adata, ctainds, id_op, Bdata, ctbinds, id_op, β, Cdata, ctcinds, id_op, id_op
- )
- synchronize()
- #end
- #CCh = collect(CC)
- #@assert !any(isnan.(CCh))
- #Cdata .= CCh
- @assert !any(isnan.(Adata))
- @assert !any(isnan.(Bdata))
- @assert !any(isnan.(Cdata))
- return parent(Cdata)
-end
-
-function _blasmg_contract!(
- CT::DenseTensor{El,NC},
- AT::DenseTensor{El,NA},
- BT::DenseTensor{El,NB},
- props::ContractionProperties,
- α::Number=one(El),
- β::Number=zero(El),
-) where {El,NC,NA,NB}
- # TODO: directly use Tensor instead of Array
- C = array(CT)
- A = array(AT)
- B = array(BT)
-
- tA = 'N'
- if props.permuteA
- pA = NTuple{NA,Int}(props.PA)
- @strided Ap = permutedims(A, pA)
- AM = reshape(Ap, props.dmid, props.dleft)
- tA = 'T'
- else
- #A doesn't have to be permuted
- if Atrans(props)
- AM = reshape(A, props.dmid, props.dleft)
- tA = 'T'
- else
- AM = reshape(A, props.dleft, props.dmid)
- end
- end
-
- tB = 'N'
- if props.permuteB
- pB = NTuple{NB,Int}(props.PB)
- @strided Bp = permutedims(B, pB)
- BM = reshape(Bp, props.dmid, props.dright)
- else
- if Btrans(props)
- BM = reshape(B, props.dright, props.dmid)
- tB = 'T'
- else
- BM = reshape(B, props.dmid, props.dright)
- end
- end
-
- # TODO: this logic may be wrong
- if props.permuteC
- # Need to copy here since we will be permuting
- # into C later
- CM = reshape(copy(C), props.dleft, props.dright)
- else
- if Ctrans(props)
- CM = reshape(C, props.dleft, props.dright)
- (AM, BM) = (BM, AM)
- if tA == tB
- tA = tB = (tA == 'T' ? 'N' : 'T')
- end
- else
- CM = reshape(C, props.dleft, props.dright)
- end
- end
-
- if length(AM) > 4096 && length(BM) > 4096 && length(CM) > 4096
- CM = CUBLASMG.mg_gemm!(
- tA,
- tB,
- El(α),
- AM,
- BM,
- El(β),
- CM;
- devs=devs[],
- dev_rows=dev_rows[],
- dev_cols=dev_cols[],
- )
- else
- BLAS.gemm!(tA, tB, El(α), AM, BM, El(β), CM)
- end
-
- if props.permuteC
- pC = NTuple{NC,Int}(props.PC)
- Cr = reshape(CM, props.newCrange...)
- @strided C .= permutedims(Cr, pC)
- end
- return C
-end
diff --git a/ITensorGPU/src/traits.jl b/ITensorGPU/src/traits.jl
deleted file mode 100644
index 6589298607..0000000000
--- a/ITensorGPU/src/traits.jl
+++ /dev/null
@@ -1,51 +0,0 @@
-# Trait type indicating the object is either on CPU
-# or on a CUDA device (for example a type that doesn't
-# have any data, like a Combiner or uniform Diagonal
-# tensor).
-struct CPUorCUDA end
-
-is_cu(::Type{<:Number}) = CPUorCUDA()
-is_cu(::Type{NDTensors.NoData}) = CPUorCUDA()
-is_cu(::Type{<:Array}) = false
-is_cu(::Type{<:CuArray}) = true
-
-# Handle Array wrappers like `ReshapedArray`.
-@traitfn function is_cu(arraytype::Type{T}) where {T; IsWrappedArray{T}}
- return is_cu(parenttype(arraytype))
-end
-
-is_cu(X::Type{<:TensorStorage}) = is_cu(NDTensors.datatype(X))
-is_cu(X::Type{<:Tensor}) = is_cu(NDTensors.storagetype(X))
-is_cu(::Type{ITensor}) = error("Unknown")
-
-is_cu(x::CuArray) = is_cu(typeof(x))
-is_cu(x::Array) = is_cu(typeof(x))
-
-is_cu(x::TensorStorage) = is_cu(typeof(x))
-is_cu(x::Tensor) = is_cu(typeof(x))
-is_cu(x::ITensor) = is_cu(typeof(tensor(x)))
-is_cu(x::MPS) = all(is_cu, x)
-is_cu(x::MPO) = all(is_cu, x)
-
-mixed_cu_cpu(::Bool, ::CPUorCUDA) = false
-mixed_cu_cpu(::CPUorCUDA, ::Bool) = false
-mixed_cu_cpu(::CPUorCUDA, ::CPUorCUDA) = false
-mixed_cu_cpu(is_cu1::Bool, is_cu2::Bool) = (is_cu1 ⊻ is_cu2)
-mixed_cu_cpu(T1::Type, T2::Type) = mixed_cu_cpu(is_cu(T1), is_cu(T2))
-
-@traitdef MixedCuCPU{T1,T2}
-
-#! format: off
-@traitimpl MixedCuCPU{T1,T2} <- mixed_cu_cpu(T1, T2)
-#! format: on
-
-@traitfn function can_contract(
- ::Type{T1}, ::Type{T2}
-) where {T1<:TensorStorage,T2<:TensorStorage;!MixedCuCPU{T1,T2}}
- return true
-end
-@traitfn function can_contract(
- ::Type{T1}, ::Type{T2}
-) where {T1<:TensorStorage,T2<:TensorStorage;MixedCuCPU{T1,T2}}
- return false
-end
diff --git a/ITensorGPU/test/Project.toml b/ITensorGPU/test/Project.toml
deleted file mode 100644
index a3d2e41464..0000000000
--- a/ITensorGPU/test/Project.toml
+++ /dev/null
@@ -1,7 +0,0 @@
-[deps]
-Combinatorics = "861a8166-3701-5b0c-9a16-15d98fcdc6aa"
-CUDA = "052768ef-5323-5732-b1bb-66c8b64840ba"
-ITensors = "9136182c-28ba-11e9-034c-db9fb085ebd5"
-LinearAlgebra = "37e2e46d-f89d-539d-b4ee-838fcccc9c8e"
-Random = "9a3f8284-a2c9-5f02-9a11-845980a1fd5c"
-Test = "8dfed614-e22c-5e08-85e1-65c5234f0b40"
diff --git a/ITensorGPU/test/runtests.jl b/ITensorGPU/test/runtests.jl
deleted file mode 100644
index 71a592139e..0000000000
--- a/ITensorGPU/test/runtests.jl
+++ /dev/null
@@ -1,30 +0,0 @@
-if VERSION < v"1.8" && "@v#.#" ∉ LOAD_PATH
- push!(LOAD_PATH, "@v#.#")
-end
-
-using ITensorGPU, Test, CUDA
-
-println("Running ITensorGPU tests with CUDA runtime version: $(CUDA.runtime_version())")
-
-CUDA.allowscalar(false)
-@testset "ITensorGPU.jl" begin
- #@testset "$filename" for filename in ("test_cucontract.jl",)
- # println("Running $filename with autotune")
- # cmd = `$(Base.julia_cmd()) -e 'using Pkg; Pkg.activate(".."); Pkg.instantiate(); include("test_cucontract.jl")'`
- # run(pipeline(setenv(cmd, "CUTENSOR_AUTOTUNE" => 1); stdout=stdout, stderr=stderr))
- #end
- @testset "$filename" for filename in (
- "test_dmrg.jl",
- "test_cuitensor.jl",
- "test_cudiag.jl",
- "test_cudense.jl",
- "test_cucontract.jl",
- "test_cumpo.jl",
- "test_cumps.jl",
- "test_cutruncate.jl",
- #"test_pastaq.jl",
- )
- println("Running $filename")
- include(filename)
- end
-end
diff --git a/ITensorGPU/test/test_cucontract.jl b/ITensorGPU/test/test_cucontract.jl
deleted file mode 100644
index ad12e0ec6a..0000000000
--- a/ITensorGPU/test/test_cucontract.jl
+++ /dev/null
@@ -1,227 +0,0 @@
-using ITensors,
- ITensorGPU,
- LinearAlgebra, # For tr()
- Combinatorics, # For permutations()
- Random,
- CUDA,
- Test
-
-@testset "cuITensor $T Contractions" for T in (Float64, ComplexF64)
- mi, mj, mk, ml, ma = 2, 3, 4, 5, 6
- i = Index(mi, "i")
- j = Index(mj, "j")
- k = Index(mk, "k")
- l = Index(ml, "l")
- a = Index(ma, "a")
- @testset "Test contract cuITensors" begin
- A = cuITensor(randomITensor(T))
- B = cuITensor(randomITensor(T))
- Ai = cuITensor(randomITensor(T, i))
- Bi = cuITensor(randomITensor(T, i))
- Aj = cuITensor(randomITensor(T, j))
- Aij = cuITensor(randomITensor(T, i, j))
- Aji = cuITensor(randomITensor(T, j, i))
- Bij = cuITensor(randomITensor(T, i, j))
- Aik = cuITensor(randomITensor(T, i, k))
- Ajk = cuITensor(randomITensor(T, j, k))
- Ajl = cuITensor(randomITensor(T, j, l))
- Akl = cuITensor(randomITensor(T, k, l))
- Aijk = cuITensor(randomITensor(T, i, j, k))
- Ajkl = cuITensor(randomITensor(T, j, k, l))
- Aikl = cuITensor(randomITensor(T, i, k, l))
- Akla = cuITensor(randomITensor(T, k, l, a))
- Aijkl = cuITensor(randomITensor(T, i, j, k, l))
- @testset "Test contract cuITensor (Scalar*Scalar -> Scalar)" begin
- C = A * B
- @test scalar(C) ≈ scalar(A) * scalar(B)
- C = cuITensor(T(2.0)) * cuITensor(T(2.0))
- @test scalar(C) ≈ T(4.0)
- end
- @testset "Test contract cuITensor (Scalar*Vector -> Vector)" begin
- C = A * Ai
- @test cpu(C) ≈ scalar(A) * cpu(Ai)
- C = cuITensor(T(2.0)) * Ai
- @test cpu(C) ≈ T(2.0) * cpu(Ai)
- C = Ai * cuITensor(T(2.0))
- @test cpu(C) ≈ T(2.0) * cpu(Ai)
- end
- @testset "Test contract cuITensor (Vector*Scalar -> Vector)" begin
- C = Aj * A
- @test cpu(C) ≈ scalar(A) * cpu(Aj)
- end
- @testset "Test contract cuITensors (Vectorᵀ*Vector -> Scalar)" begin
- C = Ai * Bi
- Ccpu = cpu(Ai) * cpu(Bi)
- @test scalar(Ccpu) ≈ scalar(C)
- end
- @testset "Test contract cuITensors (Vector*Vectorᵀ -> Matrix)" begin
- C = Ai * Aj
- Ccpu = cpu(Ai) * cpu(Aj)
- @test Ccpu ≈ cpu(permute(C, i, j))
- end
- @testset "Test contract cuITensors (Matrix*Scalar -> Matrix)" begin
- Aij = permute(Aij, i, j)
- C = Aij * A
- @test cpu(permute(C, i, j)) ≈ scalar(A) * cpu(Aij)
- end
- @testset "Test contract cuITensors (Matrixᵀ*Vector -> Vector)" begin
- cAij = permute(copy(Aij), j, i)
- Ccpu = cpu(Aij) * cpu(Aj)
- C = cAij * Aj
- @test Ccpu ≈ cpu(C)
- end
- @testset "Test contract cuITensors (Matrix*Vector -> Vector)" begin
- cpAij = permute(copy(Aij), i, j)
- Ccpu = cpu(cpAij) * cpu(Aj)
- C = copy(cpAij) * copy(Aj)
- @test Ccpu ≈ cpu(C)
- end
- @testset "Test contract cuITensors (Vector*Matrix -> Vector)" begin
- Aij = permute(Aij, i, j)
- C = Ai * Aij
- Ccpu = cpu(Ai) * cpu(Aij)
- @test Ccpu ≈ cpu(C)
- end
- @testset "Test contract cuITensors (Vector*Matrixᵀ -> Vector)" begin
- C = Ai * Aji
- Ccpu = cpu(Ai) * cpu(Aji)
- @test Ccpu ≈ cpu(C)
- end
- @testset "Test contract cuITensors (Matrix*Matrix -> Scalar)" begin
- Aij = permute(Aij, i, j)
- Bij = permute(Bij, i, j)
- C = Aij * Bij
- Ccpu = cpu(Aij) * cpu(Bij)
- @test scalar(Ccpu) ≈ scalar(C)
- end
- @testset "Test contract cuITensors (Matrix*Matrix -> Matrix)" begin
- Aij = permute(Aij, i, j)
- Ajk = permute(Ajk, j, k)
- C = Aij * Ajk
- Ccpu = cpu(Aij) * cpu(Ajk)
- @test Ccpu ≈ cpu(C)
- end
- @testset "Test contract cuITensors (Matrixᵀ*Matrix -> Matrix)" begin
- Aij = permute(Aij, j, i)
- Ajk = permute(Ajk, j, k)
- C = Aij * Ajk
- Ccpu = cpu(Aij) * cpu(Ajk)
- @test Ccpu ≈ cpu(C)
- end
- @testset "Test contract cuITensors (Matrix*Matrixᵀ -> Matrix)" begin
- Aij = permute(Aij, i, j)
- Ajk = permute(Ajk, k, j)
- C = Aij * Ajk
- Ccpu = cpu(Aij) * cpu(Ajk)
- @test Ccpu ≈ cpu(C)
- end
- @testset "Test contract cuITensors (Matrixᵀ*Matrixᵀ -> Matrix)" begin
- Aij = permute(Aij, j, i)
- Ajk = permute(Ajk, k, j)
- C = Aij * Ajk
- Ccpu = cpu(Aij) * cpu(Ajk)
- @test Ccpu ≈ cpu(C)
- end
- @testset "Test contract cuITensors (3-Tensor*Scalar -> 3-Tensor)" begin
- Aijk = permute(Aijk, i, j, k)
- C = Aijk * A
- @test cpu(permute(C, i, j, k)) ≈ scalar(A) * cpu(Aijk)
- end
- @testset "Test contract cuITensors (3-Tensor*Vector -> Matrix)" begin
- cAijk = permute(copy(Aijk), i, j, k)
- C = cAijk * Ai
- Ccpu = cpu(cAijk) * cpu(Ai)
- @test Ccpu ≈ cpu(C)
- end
- @testset "Test contract cuITensors (Vector*3-Tensor -> Matrix)" begin
- Aijk = permute(Aijk, i, j, k)
- C = Aj * Aijk
- Ccpu = cpu(Aj) * cpu(Aijk)
- @test Ccpu ≈ cpu(C)
- end
- @testset "Test contract cuITensors (3-Tensor*Matrix -> Vector)" begin
- Aijk = permute(Aijk, i, j, k)
- Aik = permute(Aik, i, k)
- C = Aijk * Aik
- Ccpu = cpu(Aijk) * cpu(Aik)
- @test Ccpu ≈ cpu(C)
- end
- @testset "Test contract cuITensors (3-Tensor*Matrix -> 3-Tensor)" begin
- Aijk = permute(Aijk, i, j, k)
- Ajl = permute(Ajl, j, l)
- C = Aijk * Ajl
- Ccpu = cpu(Aijk) * cpu(Ajl)
- @test Ccpu ≈ cpu(permute(C, i, k, l))
- end
- @testset "Test contract cuITensors (Matrix*3-Tensor -> 3-Tensor)" begin
- Aijk = permute(Aijk, i, j, k)
- Akl = permute(Akl, k, l)
- C = Akl * Aijk
- Ccpu = cpu(Aijk) * cpu(Akl)
- @test Ccpu ≈ cpu(permute(C, l, i, j))
- end
- @testset "Test contract cuITensors (3-Tensor*3-Tensor -> 3-Tensor)" begin
- Aijk = permute(Aijk, i, j, k)
- Ajkl = permute(Ajkl, j, k, l)
- C = Aijk * Ajkl
- Ccpu = cpu(Aijk) * cpu(Ajkl)
- @test Ccpu ≈ cpu(permute(C, i, l))
- end
- @testset "Test contract cuITensors (3-Tensor*3-Tensor -> 3-Tensor)" begin
- for inds_ijk in permutations([i, j, k]), inds_jkl in permutations([j, k, l])
- Aijk = permute(Aijk, inds_ijk...)
- Ajkl = permute(Ajkl, inds_jkl...)
- C = Ajkl * Aijk
- Ccpu = cpu(Ajkl) * cpu(Aijk)
- @test Ccpu ≈ cpu(C)
- end
- end
- @testset "Test contract cuITensors (4-Tensor*3-Tensor -> 3-Tensor)" begin
- for inds_ijkl in permutations([i, j, k, l]), inds_kla in permutations([k, l, a])
- Aijkl = permute(Aijkl, inds_ijkl...)
- Akla = permute(Akla, inds_kla...)
- C = Akla * Aijkl
- Ccpu = cpu(Akla) * cpu(Aijkl)
- @test Ccpu ≈ cpu(C)
- end
- end
- @testset "Test contract cuITensors (4-Tensor*3-Tensor -> 1-Tensor)" begin
- for inds_ijkl in permutations([i, j, k, l]), inds_jkl in permutations([j, k, l])
- Aijkl = permute(Aijkl, inds_ijkl...)
- Ajkl = permute(Ajkl, inds_jkl...)
- C = Ajkl * Aijkl
- Ccpu = cpu(Ajkl) * cpu(Aijkl)
- @test Ccpu ≈ cpu(C)
- end
- end
- @testset "Test supersized contract cuITensors (14-Tensor*14-Tensor -> 14-Tensor)" begin
- a_only_inds = [Index(2) for ii in 1:7]
- b_only_inds = [Index(2) for ii in 1:7]
- shared_inds = [Index(2) for ii in 1:7]
- A = randomITensor(IndexSet(vcat(a_only_inds, shared_inds)))
- B = randomITensor(IndexSet(vcat(b_only_inds, shared_inds)))
- cA = cuITensor(A)
- cB = cuITensor(B)
- inds_a = vcat(a_only_inds, shared_inds)
- inds_b = vcat(b_only_inds, shared_inds)
- cA_ = permute(cA, inds_a...)
- cB_ = permute(cB, inds_b...)
- @disable_warn_order begin
- C = cA_ * cB_
- Ccpu = cpu(cA_) * cpu(cB_)
- end
- @test Ccpu ≈ cpu(C)
- for shuffles in 1:1 # too many permutations to test all
- inds_a = shuffle(vcat(a_only_inds, shared_inds))
- inds_b = shuffle(vcat(b_only_inds, shared_inds))
- cA_ = permute(cA, inds_a...)
- cB_ = permute(cB, inds_b...)
- @disable_warn_order begin
- C = cA_ * cB_
- Ccpu = cpu(cA_) * cpu(cB_)
- end
- @test Ccpu ≈ cpu(C)
- end
- end
- end # End contraction testset
-end
diff --git a/ITensorGPU/test/test_cudense.jl b/ITensorGPU/test/test_cudense.jl
deleted file mode 100644
index ef01eaff98..0000000000
--- a/ITensorGPU/test/test_cudense.jl
+++ /dev/null
@@ -1,118 +0,0 @@
-using ITensors,
- ITensorGPU,
- LinearAlgebra, # For tr()
- Combinatorics, # For permutations()
- CUDA,
- Test
-
-# gpu tests!
-@testset "cuITensor, Dense{$SType} storage" for SType in (Float64, ComplexF64)
- mi, mj, mk, ml, ma = 2, 3, 4, 5, 6
- i = Index(mi, "i")
- j = Index(mj, "j")
- k = Index(mk, "k")
- l = Index(ml, "l")
- a = Index(ma, "a")
- indices = [i, j, k, l, a]
- @testset "Test add CuDense" begin
- A = [SType(1.0) for ii in 1:dim(i), jj in 1:dim(j)]
- dA = ITensorGPU.CuDense{SType,CuVector{SType,ITensorGPU.default_buffertype()}}(
- SType(1.0), dim(i) * dim(j)
- )
- B = [SType(2.0) for ii in 1:dim(i), jj in 1:dim(j)]
- dB = ITensorGPU.CuDense{SType,CuVector{SType,ITensorGPU.default_buffertype()}}(
- SType(2.0), dim(i) * dim(j)
- )
- dC = +(dA, IndexSet(i, j), dB, IndexSet(j, i))
- hC = collect(dC)
- @test collect(A + B) ≈ hC
- end
- @testset "Test2 add CuDense" begin
- for i1 in indices, i2 in indices
- i1 == i2 && continue
- A = randomITensor(SType, i1, i2)
- B = randomITensor(SType, i1, i2)
- cuA = cu(A)
- cuB = cu(B)
- C = A + B
- cuC = cuA + cuB
- @test C ≈ cpu(cuC) #move to CPU to avoid scalar indexing error on GPU
-    @test A ≈ cpu(cuA) # check that `+` did not modify cuA
-    @test B ≈ cpu(cuB) # check that `+` did not modify cuB
- cuA += cuB
- @test cuC ≈ cuA
-    @test B ≈ cpu(cuB) # check that `+=` did not modify cuB
- end
- end
-
- @testset "Test subtract CuDense" begin
- A = [SType(1.0) for ii in 1:dim(i), jj in 1:dim(j)]
- dA = ITensorGPU.CuDense{SType,CuVector{SType,ITensorGPU.default_buffertype()}}(
- SType(1.0), dim(i) * dim(j)
- )
- B = [SType(2.0) for ii in 1:dim(i), jj in 1:dim(j)]
- dB = ITensorGPU.CuDense{SType,CuVector{SType,ITensorGPU.default_buffertype()}}(
- SType(2.0), dim(i) * dim(j)
- )
- dC = -(dA, IndexSet(i, j), dB, IndexSet(i, j))
- hC = collect(dC)
- @test A - B ≈ hC
- end
- @testset "Test2 subtract CuDense" begin
- for i1 in indices, i2 in indices
- i1 == i2 && continue
- A = randomITensor(SType, i1, i2)
- B = randomITensor(SType, i1, i2)
- cuA = cu(A)
- cuB = cu(B)
- C = A - B
- cuC = cuA - cuB
- @test C ≈ cpu(cuC) #move to CPU to avoid scalar indexing error on GPU
-    @test A ≈ cpu(cuA) # check that `-` did not modify cuA
-    @test B ≈ cpu(cuB) # check that `-` did not modify cuB
- cuA -= cuB
- @test cuC ≈ cuA
-    @test B ≈ cpu(cuB) # check that `-=` did not modify cuB
- end
- end
- @testset "Test permute CuDense" begin
- A = [SType(ii * jj) for ii in 1:dim(i), jj in 1:dim(j)]
- dA = ITensorGPU.CuDense{SType,CuVector{SType,ITensorGPU.default_buffertype()}}(
- NDTensors.Dense(vec(A))
- )
- B = [SType(0.0) for ii in 1:dim(j), jj in 1:dim(j)]
- dB = ITensorGPU.CuDense{SType,CuVector{SType,ITensorGPU.default_buffertype()}}(
- SType(0.0), dim(i) * dim(j)
- )
- dC = permute!(dB, IndexSet(j, i), dA, IndexSet(i, j))
- hC = cpu(dC)
- @test transpose(A) == hC
- end
- @testset "Test move CuDense on/off GPU" begin
- A = [SType(1.0) for ii in 1:dim(i), jj in 1:dim(j)]
- dA = ITensorGPU.CuDense{SType,CuVector{SType,ITensorGPU.default_buffertype()}}(
- NDTensors.Dense(vec(A))
- )
- dB = convert(NDTensors.Dense{SType,Vector{SType}}, dA)
- @test NDTensors.data(dB) == vec(A)
- end
- @testset "Test basic CuDense features" begin
- @test NDTensors.Dense{SType,CuVector{SType,ITensorGPU.default_buffertype()}}(10) isa
- ITensorGPU.CuDense{SType}
- @test complex(NDTensors.Dense{SType,CuVector{SType}}) ==
- NDTensors.Dense{complex(SType),CuVector{complex(SType)}}
- end
- if SType == Float64
- @testset "Test CuDense complex" begin
- A = CUDA.rand(SType, dim(i) * dim(j))
- dA = ITensorGPU.CuDense{SType,CuVector{SType,ITensorGPU.default_buffertype()}}(A)
- dC = complex(dA)
- @test typeof(dC) !== typeof(dA)
- cdC = CuArray(dC)
- hC = collect(cdC)
- ccA = complex.(A)
- @test hC == collect(ccA)
- end
- end
-end # End Dense storage test
diff --git a/ITensorGPU/test/test_cudiag.jl b/ITensorGPU/test/test_cudiag.jl
deleted file mode 100644
index c618d44fa5..0000000000
--- a/ITensorGPU/test/test_cudiag.jl
+++ /dev/null
@@ -1,101 +0,0 @@
-using ITensors,
- ITensors.NDTensors,
- ITensorGPU,
- LinearAlgebra, # For tr()
- Combinatorics, # For permutations()
- CUDA,
- Test
-
-@testset "cuITensor $T Contractions" for T in (Float64, ComplexF64)
-  mi, mj, mk, ml, ma = 2, 3, 4, 5, 6
- i = Index(mi, "i")
- j = Index(mj, "j")
- k = Index(mk, "k")
- l = Index(ml, "l")
- a = Index(ma, "a")
- @testset "Test contract cuITensors" begin
- Aij = cuITensor(randomITensor(T, i, j))
- Aji = cuITensor(randomITensor(T, j, i))
- Bij = cuITensor(randomITensor(T, i, j))
- Aik = cuITensor(randomITensor(T, i, k))
- Ajk = cuITensor(randomITensor(T, j, k))
- Ajl = cuITensor(randomITensor(T, j, l))
- Akl = cuITensor(randomITensor(T, k, l))
- Dv = rand(T, mi)
- D = itensor(ITensors.tensor(NDTensors.Diag(CuVector(Dv)), IndexSet(i, i')))
- Ev = rand(T, mi)
- E = itensor(ITensors.tensor(NDTensors.Diag(CuVector(Ev)), IndexSet(i, i'')))
- @testset "Test contract cuITensors (Matrix*Diag -> Matrix)" begin
- C = Aij * D
- @test collect(CuArray(C)) ≈ collect(CuMatrix(Aij, j, i)) * diagm(0 => Dv)
- end
- @testset "Test contract cuDiagITensors (Diag*Diag -> Diag)" begin
- C = E * D
- cC = CuArray(C)
- @test collect(cC) ≈ diagm(0 => Ev) * diagm(0 => Dv)
- end
- @testset "Test contract cuDiagITensors (UniformDiag*Diag -> Diag)" begin
- scal = itensor(ITensors.tensor(NDTensors.Diag(2.0), IndexSet(i, i'')))
- C = scal * D
- @test collect(CuArray(C)) ≈ 2.0 .* diagm(0 => Dv)
- C = D * scal
- @test collect(CuArray(C)) ≈ 2.0 .* diagm(0 => Dv)
- end
- @testset "Test contract cuITensors (Matrix*UniformDiag -> Matrix)" begin
- scal = itensor(ITensors.tensor(NDTensors.Diag(T(2.0)), IndexSet(i, i')))
- C = scal * Aij
- @test cpu(C) ≈ 2.0 * cpu(replaceind(Aij, i, i')) atol = 1e-4
- C = Aij * scal
- @test_broken cpu(C) ≈ 2.0 * cpu(replaceind(permute(Aij, j, i), i, i')) atol = 1e-4
- end
- end # End contraction testset
-end
-
-@testset "cuITensor $T1, $T2 Contractions" for T1 in (Float64, ComplexF64),
- T2 in (Float64, ComplexF64)
-
-  mi, mj, mk, ml, ma = 2, 3, 4, 5, 6
- i = Index(mi, "i")
- j = Index(mj, "j")
- k = Index(mk, "k")
- l = Index(ml, "l")
- a = Index(ma, "a")
- @testset "Test contract cuITensors" begin
- Aij = cuITensor(randomITensor(T1, i, j))
- Aji = cuITensor(randomITensor(T1, j, i))
- Bij = cuITensor(randomITensor(T1, i, j))
- Dv = rand(T2, mi)
- D = itensor(ITensors.tensor(NDTensors.Diag(CuVector(Dv)), IndexSet(i, i')))
- Ev = rand(T2, mi)
- E = itensor(ITensors.tensor(NDTensors.Diag(CuVector(Ev)), IndexSet(i, i'')))
- @testset "Test contract cuITensors (Matrix*Diag -> Matrix)" begin
- C = Aij * D
- cC = CuArray(C)
- @test collect(cC) ≈ collect(CuMatrix(Aij, j, i)) * diagm(0 => Dv)
- end
- @testset "Test contract cuDiagITensors (Diag*Diag -> Diag)" begin
- C = E * D
- cC = CuArray(C)
- @test collect(cC) ≈ diagm(0 => Ev) * diagm(0 => Dv)
- end
- @testset "Test contract cuDiagITensors (UniformDiag*Diag -> Diag)" begin
- scal = itensor(ITensors.tensor(NDTensors.Diag(T2(2.0)), IndexSet(i, i'')))
- C = scal * D
- cC = CuArray(C)
- @test collect(cC) ≈ 2.0 .* diagm(0 => Dv)
- C = D * scal
- cC = CuArray(C)
- @test collect(cC) ≈ 2.0 .* diagm(0 => Dv)
- end
- @testset "Test contract cuITensors (Matrix*UniformDiag -> Matrix)" begin
- scal = itensor(ITensors.tensor(NDTensors.Diag(T2(2.0)), IndexSet(i, i')))
- C = scal * Aij
- cC = CuArray(C)
- @test collect(cC) ≈ array(2.0 * cpu(replaceind(Aij, i, i'))) atol = 1e-4
- C = Aij * scal
- cC = CuArray(C)
- @test_broken collect(cC) ≈ array(2.0 * cpu(replaceind(permute(Aij, j, i), i, i'))) atol =
- 1e-4
- end
- end # End contraction testset
-end
diff --git a/ITensorGPU/test/test_cuitensor.jl b/ITensorGPU/test/test_cuitensor.jl
deleted file mode 100644
index 886ad6d22f..0000000000
--- a/ITensorGPU/test/test_cuitensor.jl
+++ /dev/null
@@ -1,145 +0,0 @@
-using ITensors,
- ITensorGPU,
- LinearAlgebra, # For tr()
- Combinatorics, # For permutations()
- Random,
- CUDA,
- Test
-
-# gpu tests!
-@testset "cuITensor, Dense{$SType} storage" for SType in (Float64, ComplexF64)
-  mi, mj, mk, ml, ma = 2, 3, 4, 5, 6
- i = Index(mi, "i")
- j = Index(mj, "j")
- k = Index(mk, "k")
- l = Index(ml, "l")
- a = Index(ma, "a")
- @testset "Constructor" begin
- A = cuITensor(one(SType), i, j, k)
- @test collect(CuArray(A, i, j, k)) == ones(SType, dim(i), dim(j), dim(k))
- A = randomCuITensor(IndexSet(i, j, k))
- @test inds(A) == IndexSet(i, j, k)
- @test ITensorGPU.storage(A) isa ITensorGPU.CuDense
- Aarr = rand(SType, dim(i) * dim(j) * dim(k))
- @test cpu(ITensor(Aarr, i, j, k)) == cpu(cuITensor(Aarr, i, j, k))
- @test cuITensor(SType, i, j, k) isa ITensor
- @test storage(cuITensor(SType, i, j, k)) isa ITensorGPU.CuDense{SType}
- @test vec(collect(CuArray(ITensor(Aarr, i, j, k), i, j, k))) == Aarr
- end
- @testset "Test permute(cuITensor,Index...)" begin
- CA = randomCuITensor(SType, i, k, j)
- permCA = permute(CA, k, j, i)
- permA = cpu(permCA)
- @test k == inds(permA)[1]
- @test j == inds(permA)[2]
- @test i == inds(permA)[3]
- A = cpu(CA)
- for ii in 1:dim(i), jj in 1:dim(j), kk in 1:dim(k)
- @test A[k => kk, i => ii, j => jj] == permA[i => ii, j => jj, k => kk]
- end
- end
- @testset "Test permute(cuITensor,Index...) for large tensors" begin
- inds = [Index(2) for ii in 1:14]
- A = randomITensor(SType, IndexSet(inds))
- CA = cuITensor(A)
- for shuffle_count in 1:20
- perm_inds = shuffle(inds)
- permCA = permute(CA, perm_inds...)
- permA = cpu(permCA)
- pA = permute(A, perm_inds...)
- for ci in CartesianIndices(pA)
- @test pA[ci] == permA[ci]
- end
- end
- end
- #=@testset "Test scalar(cuITensor)" begin
- x = SType(34)
- A = randomCuITensor(a)
- @test x==scalar(A)
- end=#
- @testset "Test CuVector(cuITensor)" begin
- v = CuVector(ones(SType, dim(a)))
- A = cuITensor(v, a)
- @test v == CuVector(A)
- end
- @testset "Test CuMatrix(cuITensor)" begin
- v = CuMatrix(ones(SType, dim(a), dim(l)))
- A = cuITensor(vec(v), a, l)
- @test v == CuMatrix(A, a, l)
- A = cuITensor(vec(v), a, l)
- @test v == CuMatrix(A)
- A = cuITensor(vec(v), a, l)
- @test v == CuArray(A, a, l)
- @test v == CuArray(A)
- end
- @testset "Test norm(cuITensor)" begin
- A = randomCuITensor(SType, i, j, k)
- B = dag(A) * A
- @test norm(A) ≈ sqrt(real(scalar(B)))
- end
- @testset "Test complex(cuITensor)" begin
- A = randomCuITensor(SType, i, j, k)
- cA = complex(A)
- @test complex.(CuArray(A)) == CuArray(cA)
- end
- #@testset "Test exp(cuITensor)" begin
- # A = randomCuITensor(SType,i,i')
- # @test CuArray(exp(A,i,i')) ≈ exp(CuArray(A))
- #end
- @testset "Test add cuITensors" begin
- dA = randomCuITensor(SType, i, j, k)
- dB = randomCuITensor(SType, k, i, j)
- A = cpu(dA)
- B = cpu(dB)
- C = cpu(dA + dB)
- @test CuArray(permute(C, i, j, k)) ==
- CuArray(permute(A, i, j, k)) + CuArray(permute(B, i, j, k))
- for ii in 1:dim(i), jj in 1:dim(j), kk in 1:dim(k)
- @test C[i => ii, j => jj, k => kk] ==
- A[j => jj, i => ii, k => kk] + B[i => ii, k => kk, j => jj]
- end
- end
-
- @testset "Test factorizations of a cuITensor" begin
- A = randomCuITensor(SType, i, j, k, l)
- @testset "Test SVD of a cuITensor" begin
- U, S, V = svd(A, (j, l))
- u = commonind(U, S)
- v = commonind(S, V)
- @test cpu(A) ≈ cpu(U * S * V)
- @test cpu(U * dag(prime(U, u))) ≈ δ(SType, u, u') rtol = 1e-14
- @test cpu(V * dag(prime(V, v))) ≈ δ(SType, v, v') rtol = 1e-14
- end
-
- A = randomCuITensor(SType, i, j, k, l)
- @testset "Test SVD consistency between CPU and GPU" begin
- U_gpu, S_gpu, V_gpu = svd(A, (j, l))
- U_cpu, S_cpu, V_cpu = svd(cpu(A), (j, l))
- @test cpu(U_gpu) * cpu(S_gpu) * cpu(V_gpu) ≈ U_cpu * S_cpu * V_cpu
- end
-
- #=@testset "Test SVD truncation" begin
- M = randn(4,4)
- (U,s,V) = svd(M)
- ii = Index(4)
- jj = Index(4)
- S = Diagonal(s)
- T = cuITensor(vec(CuArray(U*S*V')),IndexSet(ii,jj))
- (U,S,V) = svd(T,ii;maxm=2)
- @test norm(U*S*V-T)≈sqrt(s[3]^2+s[4]^2)
- end=#
-
- @testset "Test QR decomposition of a cuITensor" begin
- Q, R = qr(A, (i, l))
- q = commonind(Q, R)
- @test cpu(A) ≈ cpu(Q * R)
- @test cpu(Q * dag(prime(Q, q))) ≈ δ(SType, q, q') atol = 1e-14
- end
-
- #=@testset "Test polar decomposition of a cuITensor" begin
- U,P = polar(A,(k,l))
- @test cpu(A)≈cpu(U*P)
- end=#
-
- end # End ITensor factorization testset
-end # End Dense storage test
diff --git a/ITensorGPU/test/test_cumpo.jl b/ITensorGPU/test/test_cumpo.jl
deleted file mode 100644
index 059aeb7bdb..0000000000
--- a/ITensorGPU/test/test_cumpo.jl
+++ /dev/null
@@ -1,184 +0,0 @@
-using ITensors, ITensorGPU, Test
-
-@testset "CuMPO Basics" begin
- N = 6
- sites = [Index(2, "Site") for n in 1:N]
- @test length(cuMPO()) == 0
- O = cuMPO(sites)
- @test length(O) == N
-
- str = split(sprint(show, O), '\n')
- @test str[1] == "MPO"
- @test length(str) == length(O) + 2
-
- O[1] = cuITensor(sites[1], prime(sites[1]))
- @test hasind(O[1], sites[1])
- @test hasind(O[1], prime(sites[1]))
- P = copy(O)
- @test hasind(P[1], sites[1])
- @test hasind(P[1], prime(sites[1]))
-
- K = randomCuMPO(sites)
- K_ = cuMPO(ITensors.data(K))
- @test all(ITensors.data(K) .== ITensors.data(K_))
-
- s = siteinds("S=1/2", N)
- L = randomMPO(s)
- K = cuMPO(L)
- @test all(ITensors.data(cpu(K)) .== ITensors.data(cpu(L)))
- @testset "orthogonalize" begin
- phi = randomCuMPS(sites)
- K = randomCuMPO(sites)
- orthogonalize!(phi, 1)
- orthogonalize!(K, 1)
- orig_inner = inner(phi', K, phi)
- orthogonalize!(phi, div(N, 2))
- orthogonalize!(K, div(N, 2))
- @test inner(phi', K, phi) ≈ orig_inner
- end
-
- @testset "inner " begin
- phi = randomCuMPS(sites)
- K = randomCuMPO(sites)
- @test maxlinkdim(K) == 1
- psi = randomCuMPS(sites)
- phidag = dag(phi)
- prime!(phidag)
- phiKpsi = phidag[1] * K[1] * psi[1]
- for j in 2:N
- phiKpsi *= phidag[j] * K[j] * psi[j]
- end
- @test phiKpsi[] ≈ inner(phi', K, psi)
-
- badsites = [Index(2, "Site") for n in 1:(N + 1)]
- badpsi = randomCuMPS(badsites)
- @test_throws DimensionMismatch inner(phi', K, badpsi)
-
- # make bigger random MPO...
- for link_dim in 2:5
- mpo_tensors = ITensor[cuITensor() for ii in 1:N]
- mps_tensors = ITensor[cuITensor() for ii in 1:N]
- mps_tensors2 = ITensor[cuITensor() for ii in 1:N]
- mpo_link_inds = [Index(link_dim, "r$ii,Link") for ii in 1:(N - 1)]
- mps_link_inds = [Index(link_dim, "r$ii,Link") for ii in 1:(N - 1)]
- mpo_tensors[1] = randomCuITensor(mpo_link_inds[1], sites[1], sites[1]')
- mps_tensors[1] = randomCuITensor(mps_link_inds[1], sites[1])
- mps_tensors2[1] = randomCuITensor(mps_link_inds[1], sites[1])
- for ii in 2:(N - 1)
- mpo_tensors[ii] = randomCuITensor(
- mpo_link_inds[ii], mpo_link_inds[ii - 1], sites[ii], sites[ii]'
- )
- mps_tensors[ii] = randomCuITensor(
- mps_link_inds[ii], mps_link_inds[ii - 1], sites[ii]
- )
- mps_tensors2[ii] = randomCuITensor(
- mps_link_inds[ii], mps_link_inds[ii - 1], sites[ii]
- )
- end
- mpo_tensors[N] = randomCuITensor(mpo_link_inds[N - 1], sites[N], sites[N]')
- mps_tensors[N] = randomCuITensor(mps_link_inds[N - 1], sites[N])
- mps_tensors2[N] = randomCuITensor(mps_link_inds[N - 1], sites[N])
- K = MPO(mpo_tensors, 0, N + 1)
- psi = MPS(mps_tensors, 0, N + 1)
- phi = MPS(mps_tensors2, 0, N + 1)
- orthogonalize!(psi, 1; maxdim=link_dim)
- orthogonalize!(K, 1; maxdim=link_dim)
- orthogonalize!(phi, 1; normalize=true, maxdim=link_dim)
- phidag = dag(phi)
- prime!(phidag)
- phiKpsi = phidag[1] * K[1] * psi[1]
- for j in 2:N
- phiKpsi *= phidag[j] * K[j] * psi[j]
- end
- @test scalar(phiKpsi) ≈ inner(phi', K, psi)
- end
- end
-
- @testset "contract" begin
- phi = randomCuMPS(sites)
- K = randomCuMPO(sites)
- @test maxlinkdim(K) == 1
- psi = randomCuMPS(sites)
- psi_out = contract(K, psi; maxdim=1)
- @test inner(phi', psi_out) ≈ inner(phi', K, psi)
- @test_throws MethodError contract(K', psi, method="fakemethod")
-
- badsites = [Index(2, "Site") for n in 1:(N + 1)]
- badpsi = randomCuMPS(badsites)
- @test_throws DimensionMismatch contract(K, badpsi)
-
- # make bigger random MPO...
- for link_dim in 2:5
- mpo_tensors = ITensor[ITensor() for ii in 1:N]
- mps_tensors = ITensor[ITensor() for ii in 1:N]
- mps_tensors2 = ITensor[ITensor() for ii in 1:N]
- mpo_link_inds = [Index(link_dim, "r$ii,Link") for ii in 1:(N - 1)]
- mps_link_inds = [Index(link_dim, "r$ii,Link") for ii in 1:(N - 1)]
- mpo_tensors[1] = randomCuITensor(mpo_link_inds[1], sites[1], sites[1]')
- mps_tensors[1] = randomCuITensor(mps_link_inds[1], sites[1])
- mps_tensors2[1] = randomCuITensor(mps_link_inds[1], sites[1])
- for ii in 2:(N - 1)
- mpo_tensors[ii] = randomCuITensor(
- mpo_link_inds[ii], mpo_link_inds[ii - 1], sites[ii], sites[ii]'
- )
- mps_tensors[ii] = randomCuITensor(
- mps_link_inds[ii], mps_link_inds[ii - 1], sites[ii]
- )
- mps_tensors2[ii] = randomCuITensor(
- mps_link_inds[ii], mps_link_inds[ii - 1], sites[ii]
- )
- end
- mpo_tensors[N] = randomCuITensor(mpo_link_inds[N - 1], sites[N], sites[N]')
- mps_tensors[N] = randomCuITensor(mps_link_inds[N - 1], sites[N])
- mps_tensors2[N] = randomCuITensor(mps_link_inds[N - 1], sites[N])
- K = MPO(mpo_tensors, 0, N + 1)
- psi = MPS(mps_tensors, 0, N + 1)
- phi = MPS(mps_tensors2, 0, N + 1)
- orthogonalize!(psi, 1; maxdim=link_dim)
- orthogonalize!(K, 1; maxdim=link_dim)
- orthogonalize!(phi, 1; normalize=true, maxdim=link_dim)
- psi_out = contract(deepcopy(K), deepcopy(psi); maxdim=10 * link_dim, cutoff=0.0)
- @test inner(phi', psi_out) ≈ inner(phi', K, psi)
- end
- end
- @testset "add" begin
- shsites = siteinds("S=1/2", N)
- K = randomCuMPO(shsites)
- L = randomCuMPO(shsites)
- M = add(K, L)
- @test length(M) == N
- psi = randomCuMPS(shsites)
- k_psi = contract(K, psi; maxdim=1)
- l_psi = contract(L, psi; maxdim=1)
- @test inner(psi', add(k_psi, l_psi)) ≈ inner(psi', M, psi) atol = 5e-3
- end
- @testset "contract(::CuMPO, ::CuMPO)" begin
- psi = randomCuMPS(sites)
- K = randomCuMPO(sites)
- L = randomCuMPO(sites)
- @test maxlinkdim(K) == 1
- @test maxlinkdim(L) == 1
- KL = contract(prime(K), L; maxdim=1)
- psi_kl_out = contract(prime(K), contract(L, psi; maxdim=1); maxdim=1)
- @test inner(psi'', KL, psi) ≈ inner(psi'', psi_kl_out) atol = 5e-3
-
- # where both K and L have differently labelled sites
- othersitesk = [Index(2, "Site,aaa") for n in 1:N]
- othersitesl = [Index(2, "Site,bbb") for n in 1:N]
- K = randomCuMPO(sites)
- L = randomCuMPO(sites)
- for ii in 1:N
- replaceind!(K[ii], sites[ii]', othersitesk[ii])
- replaceind!(L[ii], sites[ii]', othersitesl[ii])
- end
- KL = contract(K, L; maxdim=1)
- psik = randomCuMPS(othersitesk)
- psil = randomCuMPS(othersitesl)
- psi_kl_out = contract(K, contract(L, psil; maxdim=1); maxdim=1)
- @test inner(psik, KL, psil) ≈ inner(psik, psi_kl_out) atol = 5e-3
-
- badsites = [Index(2, "Site") for n in 1:(N + 1)]
- badL = randomCuMPO(badsites)
- @test_throws DimensionMismatch contract(K, badL)
- end
-end
diff --git a/ITensorGPU/test/test_cumps.jl b/ITensorGPU/test/test_cumps.jl
deleted file mode 100644
index 1ebe28b4d9..0000000000
--- a/ITensorGPU/test/test_cumps.jl
+++ /dev/null
@@ -1,275 +0,0 @@
-using ITensors, ITensorGPU, Test
-@testset "cuMPS Basics" begin
- N = 10
- sites = [Index(2, "Site") for n in 1:N]
- psi = cuMPS(sites)
- @test length(psi) == N
- @test length(cuMPS()) == 0
-
- str = split(sprint(show, psi), '\n')
- @test str[1] == "MPS"
- @test length(str) == length(psi) + 2
-
- @test siteind(psi, 2) == sites[2]
- @test hasind(psi[3], linkind(psi, 2))
- @test hasind(psi[3], linkind(psi, 3))
-
- psi[1] = cuITensor(sites[1])
- @test hasind(psi[1], sites[1])
-
- L = randomMPS(sites)
- K = cuMPS(L)
- @test all(ITensors.data(cpu(K)) .== ITensors.data(cpu(L)))
-
- @testset "cuproductMPS" begin
- @testset "vector of string input" begin
- sites = siteinds("S=1/2", N)
- state = fill("", N)
- for j in 1:N
- state[j] = isodd(j) ? "Up" : "Dn"
- end
- psi = productCuMPS(sites, state)
- for j in 1:N
- sign = isodd(j) ? +1.0 : -1.0
- ops = cuITensor(op(sites, "Sz", j))
- psip = prime(psi[j], "Site")
- res = psi[j] * ops * dag(psip)
- @test res[] ≈ sign / 2
- end
- @test_throws DimensionMismatch cuMPS(sites, fill("", N - 1))
- @test_throws DimensionMismatch productCuMPS(sites, fill("", N - 1))
- end
-
- @testset "vector of int input" begin
- sites = siteinds("S=1/2", N)
- state = fill(0, N)
- for j in 1:N
- state[j] = isodd(j) ? 1 : 2
- end
- psi = productCuMPS(sites, state)
- for j in 1:N
- sign = isodd(j) ? +1.0 : -1.0
- ops = cuITensor(op(sites, "Sz", j))
- psip = prime(psi[j], "Site")
- @test (psi[j] * ops * dag(psip))[] ≈ sign / 2
- end
- end
- end
-
- @testset "randomMPS" begin
- phi = randomCuMPS(sites)
- @test hasind(phi[1], sites[1])
- @test norm(phi[1]) ≈ 1.0
- @test hasind(phi[4], sites[4])
- @test norm(phi[4]) ≈ 1.0
- end
-
- @testset "inner different MPS" begin
- phi = randomMPS(sites)
- psi = randomMPS(sites)
- phipsi = dag(phi[1]) * psi[1]
- for j in 2:N
- phipsi *= dag(phi[j]) * psi[j]
- end
- @test phipsi[] ≈ inner(phi, psi)
- phi = randomCuMPS(sites)
- psi = randomCuMPS(sites)
- cphi = MPS([cpu(phi[i]) for i in 1:length(phi)])
- cpsi = MPS([cpu(psi[i]) for i in 1:length(psi)])
- phipsi = dag(phi[1]) * psi[1]
- cphipsi = dag(cphi[1]) * cpsi[1]
- for j in 2:N
- phipsi *= dag(phi[j]) * psi[j]
- cphipsi *= dag(cphi[j]) * cpsi[j]
- end
- @test cpu(phipsi)[] ≈ cphipsi[]
- @test cpu(phipsi)[] ≈ inner(cphi, cpsi)
- @test cpu(phipsi)[] ≈ inner(phi, psi)
- phipsi = dag(phi[1]) * psi[1]
- for j in 2:N
- phipsi = phipsi * dag(phi[j]) * psi[j]
- end
- @test cpu(phipsi)[] ≈ inner(phi, psi)
-
- badsites = [Index(2) for n in 1:(N + 1)]
- badpsi = randomCuMPS(badsites)
- @test_throws DimensionMismatch inner(phi, badpsi)
- end
-
- @testset "inner same MPS" begin
- psi = randomMPS(sites)
- psidag = dag(deepcopy(psi))
- ITensors.prime_linkinds!(psidag)
- psipsi = psidag[1] * psi[1]
- for j in 2:N
- psipsi = psipsi * psidag[j] * psi[j]
- end
- @test psipsi[] ≈ inner(psi, psi)
- psi = randomCuMPS(sites)
- psidag = dag(deepcopy(psi))
- ITensors.prime_linkinds!(psidag)
- psipsi = psidag[1] * psi[1]
- for j in 2:N
- psipsi = psipsi * psidag[j] * psi[j]
- end
- @test psipsi[] ≈ inner(psi, psi)
- end
-
- @testset "add MPS" begin
- psi = randomMPS(sites)
- phi = deepcopy(psi)
- xi = add(psi, phi)
- @test inner(xi, xi) ≈ 4.0 * inner(psi, psi)
- psi = randomCuMPS(sites)
- phi = deepcopy(psi)
- xi = add(psi, phi)
- @test inner(xi, xi) ≈ 4.0 * inner(psi, psi)
- end
-
- sites = siteinds("S=1/2", N)
- psi = cuMPS(sites)
- @test length(psi) == N # just make sure this works
- @test length(siteinds(psi)) == N
-
- psi = randomCuMPS(sites)
- orthogonalize!(psi, N - 1)
- @test ITensors.leftlim(psi) == N - 2
- @test ITensors.rightlim(psi) == N
- orthogonalize!(psi, 2)
- @test ITensors.leftlim(psi) == 1
- @test ITensors.rightlim(psi) == 3
- psi = randomCuMPS(sites)
- psi.rlim = N + 1 # do this to test qr from rightmost tensor
- orthogonalize!(psi, div(N, 2))
- @test ITensors.leftlim(psi) == div(N, 2) - 1
- @test ITensors.rightlim(psi) == div(N, 2) + 1
-
- #@test_throws ErrorException linkind(MPS(N, fill(cuITensor(), N), 0, N + 1), 1)
-
- @testset "replacebond!" begin
- # make sure factorization preserves the bond index tags
- psi = randomCuMPS(sites)
- phi = psi[1] * psi[2]
- bondindtags = tags(linkind(psi, 1))
- replacebond!(psi, 1, phi)
- @test tags(linkind(psi, 1)) == bondindtags
-
-    # check that replacebond! updates llim and rlim properly
- orthogonalize!(psi, 5)
- phi = psi[5] * psi[6]
- replacebond!(psi, 5, phi; ortho="left")
- @test ITensors.leftlim(psi) == 5
- @test ITensors.rightlim(psi) == 7
-
- phi = psi[5] * psi[6]
- replacebond!(psi, 5, phi; ortho="right")
- @test ITensors.leftlim(psi) == 4
- @test ITensors.rightlim(psi) == 6
-
- psi.llim = 3
- psi.rlim = 7
- phi = psi[5] * psi[6]
- replacebond!(psi, 5, phi; ortho="left")
- @test ITensors.leftlim(psi) == 3
- @test ITensors.rightlim(psi) == 7
- end
-end
-
-# Helper function for making MPS
-function basicRandomCuMPS(N::Int; dim=4)
- sites = [Index(2, "Site") for n in 1:N]
- M = MPS(sites)
- links = [Index(dim, "n=$(n-1),Link") for n in 1:(N + 1)]
- M[1] = randomCuITensor(sites[1], links[2])
- for n in 2:(N - 1)
- M[n] = randomCuITensor(links[n], sites[n], links[n + 1])
- end
- M[N] = randomCuITensor(links[N], sites[N])
- M[1] /= sqrt(inner(M, M))
- return M
-end
-@testset "MPS gauging and truncation" begin
- N = 30
-
- @testset "orthogonalize! method" begin
- c = 12
- M = basicRandomCuMPS(N)
- orthogonalize!(M, c)
-
- @test ITensors.leftlim(M) == c - 1
- @test ITensors.rightlim(M) == c + 1
-
- # Test for left-orthogonality
- L = M[1] * prime(M[1], "Link")
- l = linkind(M, 1)
- @test cpu(L) ≈ delta(l, l') rtol = 1E-12
- for j in 2:(c - 1)
- L = L * M[j] * prime(M[j], "Link")
- l = linkind(M, j)
- @test cpu(L) ≈ delta(l, l') rtol = 1E-12
- end
-
- # Test for right-orthogonality
- R = M[N] * prime(M[N], "Link")
- r = linkind(M, N - 1)
- @test cpu(R) ≈ delta(r, r') rtol = 1E-12
- for j in reverse((c + 1):(N - 1))
- R = R * M[j] * prime(M[j], "Link")
- r = linkind(M, j - 1)
- @test cpu(R) ≈ delta(r, r') rtol = 1E-12
- end
-
- @test norm(M[c]) ≈ 1.0
- end
-
- @testset "truncate! method" begin
- M = basicRandomCuMPS(N; dim=10)
- M0 = copy(M)
- truncate!(M; maxdim=5)
- @test ITensors.rightlim(M) == 2
- # Test for right-orthogonality
- R = M[N] * prime(M[N], "Link")
- r = linkind(M, N - 1)
- @test cpu(R) ≈ delta(r, r') rtol = 1E-12
- for j in reverse(2:(N - 1))
- R = R * M[j] * prime(M[j], "Link")
- r = linkind(M, j - 1)
- @test cpu(R) ≈ delta(r, r') rtol = 1E-12
- end
- @test inner(M0, M) > 0.1
- end
-end
-
-#=@testset "Other MPS methods" begin
-
- @testset "sample! method" begin
- N = 10
- sites = [Index(3,"Site,n=$n") for n=1:N]
- psi = makeRandomCuMPS(sites,chi=3)
- nrm2 = inner(psi,psi)
- psi[1] *= (1.0/sqrt(nrm2))
-
- s = sample!(psi)
-
- @test length(s) == N
- for n=1:N
- @test 1 <= s[n] <= 3
- end
-
-    # Throws because not orthogonalized to site 1:
- orthogonalize!(psi,3)
- @test_throws ErrorException sample(psi)
-
-    # Throws because not normalized
- orthogonalize!(psi,1)
- psi[1] *= (5.0/norm(psi[1]))
- @test_throws ErrorException sample(psi)
-
- # Works when ortho & normalized:
- orthogonalize!(psi,1)
- psi[1] *= (1.0/norm(psi[1]))
- s = sample(psi)
- @test length(s) == N
- end
-
-end=#
diff --git a/ITensorGPU/test/test_cutruncate.jl b/ITensorGPU/test/test_cutruncate.jl
deleted file mode 100644
index 1afcfb92dd..0000000000
--- a/ITensorGPU/test/test_cutruncate.jl
+++ /dev/null
@@ -1,62 +0,0 @@
-using ITensors,
- ITensorGPU,
- LinearAlgebra, # For tr()
- CUDA,
- Test
-
-# gpu tests!
-@testset "cutruncate" begin
- @testset for T in (Float32, Float64)
- @test ITensorGPU.truncate!(CUDA.zeros(T, 10)) == (zero(T), zero(T), CUDA.zeros(T, 1))
- trunc = ITensorGPU.truncate!(
- CuArray(T[1.0, 0.5, 0.4, 0.1, 0.05]); absoluteCutoff=true, cutoff=T(0.2)
- )
- @test trunc[1] ≈ T(0.15)
- @test trunc[2] ≈ T(0.25)
- @test Array(trunc[3]) == T[1.0, 0.5, 0.4]
-
- trunc = ITensorGPU.truncate!(
- CuArray(T[1.0, 0.5, 0.4, 0.1, 0.05]); absoluteCutoff=true, maxdim=3
- )
- @test trunc[1] ≈ T(0.15)
- @test trunc[2] ≈ T(0.25)
- @test Array(trunc[3]) == T[1.0, 0.5, 0.4]
-
- trunc = ITensorGPU.truncate!(
- CuArray(T[1.0, 0.5, 0.4, 0.1, 0.05]); absoluteCutoff=true, maxdim=3, cutoff=T(0.07)
- )
- @test trunc[1] ≈ T(0.15)
- @test trunc[2] ≈ T(0.25)
- @test Array(trunc[3]) == T[1.0, 0.5, 0.4]
-
- trunc = ITensorGPU.truncate!(
- CuArray(T[0.4, 0.26, 0.19, 0.1, 0.05]); relativeCutoff=true, cutoff=T(0.2)
- )
- @test trunc[1] ≈ T(0.15)
- @test trunc[2] ≈ T(0.145)
- @test Array(trunc[3]) == T[0.4, 0.26, 0.19]
-
- trunc = ITensorGPU.truncate!(
- CuArray(T[0.4, 0.26, 0.19, 0.1, 0.05]); relativeCutoff=true, maxdim=3
- )
- @test trunc[1] ≈ T(0.15)
- @test trunc[2] ≈ T(0.145)
- @test Array(trunc[3]) == T[0.4, 0.26, 0.19]
-
- trunc = ITensorGPU.truncate!(
- CuArray(T[0.4, 0.26, 0.19, 0.1, 0.05]); relativeCutoff=true, maxdim=3, cutoff=T(0.07)
- )
- @test trunc[1] ≈ T(0.15)
- @test trunc[2] ≈ T(0.145)
- @test Array(trunc[3]) == T[0.4, 0.26, 0.19]
-
- trunc = ITensorGPU.truncate!(
- CuArray(convert(Vector{T}, [0.4, 0.26, 0.19, 0.1, 0.05] / 2));
- relativeCutoff=true,
- cutoff=T(0.2),
- )
- @test trunc[1] ≈ T(0.15)
- @test trunc[2] ≈ T(0.145 / 2)
- @test Array(trunc[3]) == convert(Vector{T}, [0.4, 0.26, 0.19] / 2)
- end
-end # End truncate test
diff --git a/ITensorGPU/test/test_dmrg.jl b/ITensorGPU/test/test_dmrg.jl
deleted file mode 100644
index 979b8fd230..0000000000
--- a/ITensorGPU/test/test_dmrg.jl
+++ /dev/null
@@ -1,73 +0,0 @@
-using ITensorGPU, ITensors, Test, Random
-
-function heisenberg(n)
- opsum = OpSum()
- for j in 1:(n - 1)
- opsum += 0.5, "S+", j, "S-", j + 1
- opsum += 0.5, "S-", j, "S+", j + 1
- opsum += "Sz", j, "Sz", j + 1
- end
- return opsum
-end
-
-@testset "Basic DMRG" begin
- @testset "Spin-one Heisenberg" begin
- N = 10
- sites = siteinds("S=1", N)
- H = cuMPO(MPO(heisenberg(N), sites))
-
- psi = randomCuMPS(sites)
-
- sweeps = Sweeps(3)
- @test length(sweeps) == 3
- maxdim!(sweeps, 10, 20, 40)
- mindim!(sweeps, 1, 10)
- cutoff!(sweeps, 1E-11)
- noise!(sweeps, 1E-10)
- str = split(sprint(show, sweeps), '\n')
- @test length(str) > 1
- energy, psi = dmrg(H, psi, sweeps; outputlevel=0)
- @test energy < -12.0
- end
- @testset "Transverse field Ising" begin
- N = 32
- sites = siteinds("S=1/2", N)
- Random.seed!(432)
- psi0 = randomCuMPS(sites)
-
- opsum = OpSum()
- for j in 1:N
- j < N && add!(opsum, -1.0, "Sz", j, "Sz", j + 1)
- add!(opsum, -0.5, "Sx", j)
- end
- H = cuMPO(MPO(opsum, sites))
-
- sweeps = Sweeps(5)
- maxdim!(sweeps, 10, 20)
- cutoff!(sweeps, 1E-12)
- noise!(sweeps, 1E-10)
- energy, psi = dmrg(H, psi0, sweeps; outputlevel=0)
-
- # Exact energy for transverse field Ising model
- # with open boundary conditions at criticality
- energy_exact = 0.25 - 0.25 / sin(π / (2 * (2 * N + 1)))
- @test abs((energy - energy_exact) / energy_exact) < 1e-2
- end
- @testset "DMRGObserver" begin
- device = cu
- n = 5
- s = siteinds("S=1/2", n)
-
- H = device(MPO(heisenberg(n), s))
- ψ0 = device(randomMPS(s))
-
- dmrg_params = (; nsweeps=4, maxdim=10, cutoff=1e-8, noise=1e-8, outputlevel=0)
- observer = DMRGObserver(["Z"], s; energy_tol=1e-4, minsweeps=10)
- E, ψ = dmrg(H, ψ0; observer=observer, dmrg_params...)
- @test expect(ψ, "Z") ≈ observer.measurements["Z"][end] rtol =
- 10 * sqrt(eps(real(ITensors.scalartype(ψ0))))
- @test correlation_matrix(ψ, "Z", "Z") ≈ correlation_matrix(cpu(ψ), "Z", "Z")
- end
-end
-
-nothing
diff --git a/ITensorGPU/test/test_pastaq.jl b/ITensorGPU/test/test_pastaq.jl
deleted file mode 100644
index 5c740853ef..0000000000
--- a/ITensorGPU/test/test_pastaq.jl
+++ /dev/null
@@ -1,73 +0,0 @@
-using Test
-using ITensors
-using ITensorGPU
-using PastaQ
-using Zygote
-using OptimKit
-
-function ising_model(n; J=1.0, h)
- H = OpSum()
- for j in 1:n
- if j < n
- H -= J, "Z", j, "Z", j + 1
- end
- H -= h, "X", j
- end
- return H
-end
-
-Ry(θ) = [("Ry", j, (θ=θ[j],)) for j in 1:length(θ)]
-CNOT(n) = [("CNOT", j, j + 1) for j in 1:(n - 1)]
-function U(θ)
- nlayers = length(θ)
- Uθ = Tuple[]
- for l in 1:(nlayers - 1)
- Uθ = [Uθ; [Ry(θ[l]); CNOT(length(θ[l]))]]
- end
- Uθ = [Uθ; Ry(θ[nlayers])]
- return Uθ
-end
-
-function f(θ, ψ; kwargs...)
- i = siteinds(ψ)
- ψθ = runcircuit(i, U(θ); kwargs...)
- return 1 - abs(inner(ψ, ψθ))
-end
-
-function f_∇f(f, θ, ψ; kwargs...)
- fθ, (∇fθ,) = withgradient(θ -> f(θ, ψ; kwargs...), θ)
- return fθ, ∇fθ
-end
-
-@testset "PastaQ runcircuit on $device with element type $eltype" for device in (cpu, cu),
- eltype in (Float32, Float64)
-
- n = 4
- h = 0.5
- cutoff = 1e-5
- gradtol = 1e-4
- maxdim = 10
- maxiter = 30
- nlayers = 4
- i = siteinds("Qubit", n)
- ψ = device(eltype, MPS(i, j -> isodd(j) ? "0" : "1"))
- H = device(eltype, MPO(ising_model(n; h), i))
- _, ψ = dmrg(H, ψ; nsweeps=10, cutoff, maxdim, outputlevel=0)
- θ = [zeros(eltype, n) for l in 1:nlayers]
- (θ,) = optimize(
- θ -> f_∇f(f, θ, ψ; cutoff, maxdim, device, eltype),
- θ,
- LBFGS(; verbosity=0, maxiter, gradtol),
- )
- ψθ = runcircuit(i, U(θ); cutoff, maxdim, device, eltype)
- energy_reference = inner(ψ', H, ψ)
- energy_opt = inner(ψθ', H, ψθ)
- is_device(x, device) = device == cu ? ITensorGPU.is_cu(x) : !ITensorGPU.is_cu(x)
- @test is_device(H, device)
- @test is_device(ψ, device)
- @test is_device(ψθ, device)
- @test ITensors.scalartype(H) <: eltype
- @test ITensors.scalartype(ψ) <: eltype
- @test ITensors.scalartype(ψθ) <: eltype
- @test inner(ψ', H, ψ) ≈ inner(ψθ', H, ψθ) atol = 1e-3
-end
diff --git a/ITensorGaussianMPS/.JuliaFormatter.toml b/ITensorGaussianMPS/.JuliaFormatter.toml
deleted file mode 100644
index 08f664cdb9..0000000000
--- a/ITensorGaussianMPS/.JuliaFormatter.toml
+++ /dev/null
@@ -1,2 +0,0 @@
-style = "blue"
-indent = 2
diff --git a/ITensorGaussianMPS/.github/workflows/CompatHelper.yml b/ITensorGaussianMPS/.github/workflows/CompatHelper.yml
deleted file mode 100644
index cba9134c67..0000000000
--- a/ITensorGaussianMPS/.github/workflows/CompatHelper.yml
+++ /dev/null
@@ -1,16 +0,0 @@
-name: CompatHelper
-on:
- schedule:
- - cron: 0 0 * * *
- workflow_dispatch:
-jobs:
- CompatHelper:
- runs-on: ubuntu-latest
- steps:
- - name: Pkg.add("CompatHelper")
- run: julia -e 'using Pkg; Pkg.add("CompatHelper")'
- - name: CompatHelper.main()
- env:
- GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
- COMPATHELPER_PRIV: ${{ secrets.DOCUMENTER_KEY }}
- run: julia -e 'using CompatHelper; CompatHelper.main()'
diff --git a/ITensorGaussianMPS/.github/workflows/TagBot.yml b/ITensorGaussianMPS/.github/workflows/TagBot.yml
deleted file mode 100644
index e72d645208..0000000000
--- a/ITensorGaussianMPS/.github/workflows/TagBot.yml
+++ /dev/null
@@ -1,13 +0,0 @@
-name: TagBot
-on:
- schedule:
- - cron: 0 0 * * *
- workflow_dispatch:
-jobs:
- TagBot:
- runs-on: ubuntu-latest
- steps:
- - uses: JuliaRegistries/TagBot@v1
- with:
- token: ${{ secrets.GITHUB_TOKEN }}
- ssh: ${{ secrets.DOCUMENTER_KEY }}
diff --git a/ITensorGaussianMPS/.github/workflows/test.yml b/ITensorGaussianMPS/.github/workflows/test.yml
deleted file mode 100644
index 71bef92420..0000000000
--- a/ITensorGaussianMPS/.github/workflows/test.yml
+++ /dev/null
@@ -1,31 +0,0 @@
-name: Tests
-on:
- push:
- branches:
- - main
- tags: '*'
- pull_request:
-jobs:
- test:
- name: Julia ${{ matrix.version }} - ${{ matrix.os }} - ${{ matrix.arch }}
- runs-on: ${{ matrix.os }}
- strategy:
- matrix:
- version:
- - '1.3'
- - '1'
- os:
- - ubuntu-latest
- arch:
- - x64
- steps:
- - uses: actions/checkout@v2
- - uses: julia-actions/setup-julia@latest
- with:
- version: ${{ matrix.version }}
- arch: ${{ matrix.arch }}
- - uses: julia-actions/julia-buildpkg@latest
- - uses: julia-actions/julia-runtest@latest
- - uses: julia-actions/julia-uploadcodecov@latest
- env:
- CODECOV_TOKEN: ${{ secrets.CODECOV_TOKEN }}
diff --git a/ITensorGaussianMPS/.gitignore b/ITensorGaussianMPS/.gitignore
deleted file mode 100644
index faa7bfd045..0000000000
--- a/ITensorGaussianMPS/.gitignore
+++ /dev/null
@@ -1,2 +0,0 @@
-Manifest.toml
-.*.swp
diff --git a/ITensorGaussianMPS/LICENSE b/ITensorGaussianMPS/LICENSE
deleted file mode 100644
index 0f0d66e4ad..0000000000
--- a/ITensorGaussianMPS/LICENSE
+++ /dev/null
@@ -1,21 +0,0 @@
-MIT License
-
-Copyright (c) 2020 Matthew Fishman and contributors
-
-Permission is hereby granted, free of charge, to any person obtaining a copy
-of this software and associated documentation files (the "Software"), to deal
-in the Software without restriction, including without limitation the rights
-to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
-copies of the Software, and to permit persons to whom the Software is
-furnished to do so, subject to the following conditions:
-
-The above copyright notice and this permission notice shall be included in all
-copies or substantial portions of the Software.
-
-THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
-IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
-FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
-AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
-LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
-OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
-SOFTWARE.
diff --git a/ITensorGaussianMPS/NEWS.md b/ITensorGaussianMPS/NEWS.md
deleted file mode 100644
index 9a08924a85..0000000000
--- a/ITensorGaussianMPS/NEWS.md
+++ /dev/null
@@ -1,44 +0,0 @@
-This file is a (mostly) comprehensive list of changes made in each release of ITensorGaussianMPS.jl. For a completely comprehensive but more verbose list, see the [commit history on GitHub](https://github.com/ITensor/ITensors.jl/commits/main/ITensorGaussianMPS).
-
-While we are in v0.x of the package, we follow the convention that updating from v0.x.y to v0.x.(y+1) (for example v0.1.15 to v0.1.16) should not break your code, unless you are using internal/undocumented features. Updating from `v0.x.y` to `v0.(x+1).y` might break your code, though we will try to add deprecation warnings when possible, such as in simple cases where a function is renamed.
-
-Note that as of Julia v1.5, in order to see deprecation warnings you will need to start Julia with `julia --depwarn=yes` (previously they were on by default). Please run your code like this before upgrading between minor versions of the code (for example from v0.1.41 to v0.2.0).
-
-After we release v1 of the package, we will start following [semantic versioning](https://semver.org).
-
-ITensors v0.0.4 Release Notes
-=============================
-
-Bugs:
-
-Enhancements:
-
-- Update for new OpSum representation and interface (#920)
-- Add check for proper fermionic operators when making hopping Hamiltonian (#920)
-
-ITensors v0.0.3 Release Notes
-=============================
-
-Bugs:
-
-Enhancements:
-
-- Add support for GMERA (#879)
-
-ITensors v0.0.2 Release Notes
-=============================
-
-Bugs:
-
-Enhancements:
-
-- Bump to ITensors 0.3 (#880)
-
-ITensors v0.0.1 Release Notes
-=============================
-
-Bugs:
-
-Enhancements:
-
-- Move ITensorGaussianMPS package into ITensors repository (#792)
diff --git a/ITensorGaussianMPS/Project.toml b/ITensorGaussianMPS/Project.toml
deleted file mode 100644
index 75b4b4abb8..0000000000
--- a/ITensorGaussianMPS/Project.toml
+++ /dev/null
@@ -1,15 +0,0 @@
-name = "ITensorGaussianMPS"
-uuid = "2be41995-7c9f-4653-b682-bfa4e7cebb93"
-authors = ["Matthew Fishman and contributors"]
-version = "0.1.6"
-
-[deps]
-Compat = "34da2185-b29b-5c13-b0c7-acf172513d20"
-ITensors = "9136182c-28ba-11e9-034c-db9fb085ebd5"
-LinearAlgebra = "37e2e46d-f89d-539d-b4ee-838fcccc9c8e"
-
-[compat]
-Compat = "3.40.0, 4"
-ITensors = "0.3.58, 0.4, 0.5"
-LinearAlgebra = "1.6"
-julia = "1.6"
diff --git a/ITensorGaussianMPS/README.md b/ITensorGaussianMPS/README.md
deleted file mode 100644
index 17588e246f..0000000000
--- a/ITensorGaussianMPS/README.md
+++ /dev/null
@@ -1,105 +0,0 @@
-# ITensorGaussianMPS
-
-|**Citation** |**Open-access preprint** |
-|:-------------------------------------------------------------------------------:|:-----------------------------------------------------:|
-| [![DOI](http://img.shields.io/badge/PRB-10.1103/PhysRevB.92.075132-B31B1B.svg)](https://doi.org/10.1103/PhysRevB.92.075132) | [![arXiv](https://img.shields.io/badge/arXiv-1504.07701-b31b1b.svg)](https://arxiv.org/abs/1504.07701) |
-
-A package for creating the matrix product state (MPS) of a free fermion (Gaussian) state.
-
-## Installation
-
-To install this package, first install Julia, start the Julia REPL by typing `julia` at your command line, and run the command:
-```julia
-julia>]
-
-pkg> add ITensorGaussianMPS
-```
-
-## Examples
-
-This can help create starting states for DMRG. For example:
-```julia
-using ITensors
-using ITensorGaussianMPS
-using LinearAlgebra
-
-# Half filling
-N = 20
-Nf = N÷2
-
-@show N, Nf
-
-# Hopping
-t = 1.0
-
-# Free fermion hopping Hamiltonian
-h = Hermitian(diagm(1 => fill(-t, N-1), -1 => fill(-t, N-1)))
-_, u = eigen(h)
-
-# Get the Slater determinant
-Φ = u[:, 1:Nf]
-
-# Create an MPS for the free fermion ground state
-s = siteinds("Fermion", N; conserve_qns = true)
-ψ0 = slater_determinant_to_mps(s, Φ; maxblocksize = 4)
-
-# Make an interacting Hamiltonian
-U = 1.0
-@show U
-
-os = OpSum()
-for b in 1:N-1
- os .+= -t,"Cdag",b,"C",b+1
- os .+= -t,"Cdag",b+1,"C",b
-end
-for b in 1:N
- os .+= U, "Cdag*C", b
-end
-H = MPO(os, s)
-
-println("\nFree fermion starting energy")
-@show inner(ψ0, H, ψ0)
-
-# Random starting state
-ψr = randomMPS(s, n -> n ≤ Nf ? "1" : "0")
-
-println("\nRandom state starting energy")
-@show inner(ψr, H, ψr)
-
-println("\nRun dmrg with random starting state")
-@time dmrg(H, ψr; nsweeps=10, maxdim=[10, 20, 40, 60], cutoff=1e-12)
-
-println("\nRun dmrg with free fermion starting state")
-@time dmrg(H, ψ0; nsweeps=4, maxdim=60, cutoff=1e-12)
-```
-This will output something like:
-```julia
-(N, Nf) = (20, 10)
-U = 1.0
-
-Free fermion starting energy
-inner(ψ0, H, ψ0) = -2.3812770621299357
-
-Random state starting energy
-inner(ψr, H, ψr) = 10.0
-
-Run dmrg with random starting state
-After sweep 1 energy=6.261701784151 maxlinkdim=2 time=0.041
-After sweep 2 energy=2.844954346204 maxlinkdim=5 time=0.056
-After sweep 3 energy=0.245282430911 maxlinkdim=14 time=0.071
-After sweep 4 energy=-1.439072132586 maxlinkdim=32 time=0.098
-After sweep 5 energy=-2.220202191945 maxlinkdim=59 time=0.148
-After sweep 6 energy=-2.376787647893 maxlinkdim=60 time=0.186
-After sweep 7 energy=-2.381484153892 maxlinkdim=60 time=0.167
-After sweep 8 energy=-2.381489999291 maxlinkdim=57 time=0.233
-After sweep 9 energy=-2.381489999595 maxlinkdim=49 time=0.175
-After sweep 10 energy=-2.381489999595 maxlinkdim=49 time=0.172
- 1.349192 seconds (8.94 M allocations: 1.027 GiB, 18.05% gc time)
-
-Run dmrg with free fermion starting state
-After sweep 1 energy=-2.381489929965 maxlinkdim=49 time=0.139
-After sweep 2 energy=-2.381489999588 maxlinkdim=49 time=0.165
-After sweep 3 energy=-2.381489999594 maxlinkdim=48 time=0.161
-After sweep 4 energy=-2.381489999594 maxlinkdim=48 time=0.169
- 0.637021 seconds (4.59 M allocations: 525.989 MiB, 17.09% gc time)
-```
diff --git a/ITensorGaussianMPS/examples/Project.toml b/ITensorGaussianMPS/examples/Project.toml
deleted file mode 100644
index 8787f2a31b..0000000000
--- a/ITensorGaussianMPS/examples/Project.toml
+++ /dev/null
@@ -1,3 +0,0 @@
-[deps]
-ITensorGaussianMPS = "2be41995-7c9f-4653-b682-bfa4e7cebb93"
-ITensors = "9136182c-28ba-11e9-034c-db9fb085ebd5"
diff --git a/ITensorGaussianMPS/examples/broken/hubbard_1d_no_spin_conservation.jl b/ITensorGaussianMPS/examples/broken/hubbard_1d_no_spin_conservation.jl
deleted file mode 100644
index c1bb8e3547..0000000000
--- a/ITensorGaussianMPS/examples/broken/hubbard_1d_no_spin_conservation.jl
+++ /dev/null
@@ -1,84 +0,0 @@
-using ITensors
-using ITensorGaussianMPS
-using LinearAlgebra
-
-# Electrons
-
-# Half filling
-N = 50
-Nf = N
-
-@show N, Nf
-
-# Maximum MPS link dimension
-_maxlinkdim = 200
-
-@show _maxlinkdim
-
-# DMRG cutoff
-_cutoff = 1e-8
-
-# Hopping
-t = 1.0
-
-# Electron-electron on-site interaction
-U = 1.0
-
-@show t, U
-
-# Make the free fermion Hamiltonian for the up spins
-os_up = OpSum()
-for n in 1:(N - 1)
- os_up .+= -t, "Cdagup", n, "Cup", n + 1
- os_up .+= -t, "Cdagup", n + 1, "Cup", n
-end
-
-# Make the free fermion Hamiltonian for the down spins
-os_dn = OpSum()
-for n in 1:(N - 1)
- os_dn .+= -t, "Cdagdn", n, "Cdn", n + 1
- os_dn .+= -t, "Cdagdn", n + 1, "Cdn", n
-end
-
-# Hopping Hamiltonian with 2*N spinless fermions,
-# alternating up and down spins
-h = hopping_hamiltonian(os_up, os_dn)
-
-# Get the Slater determinant
-Φ = slater_determinant_matrix(h, Nf)
-
-# Create an MPS from the Slater determinant.
-# In this example, we will turn off spin conservation (so this would
-# work with a Hamiltonian that mixes the up and down spin sectors)
-s = siteinds("Electron", N; conserve_qns=true, conserve_sz=false)
-println("Making free fermion starting MPS")
-@time ψ0 = slater_determinant_to_mps(
- s, Φ; eigval_cutoff=1e-4, cutoff=_cutoff, maxdim=_maxlinkdim
-)
-@show maxlinkdim(ψ0)
-
-@show U
-os = os_up + os_dn
-for n in 1:N
- os .+= U, "Nupdn", n
-end
-H = MPO(os, s)
-
-# Random starting state
-ψr = randomMPS(s, n -> n ≤ Nf ? (isodd(n) ? "↑" : "↓") : "0")
-
-println("Random starting state energy")
-@show flux(ψr)
-@show inner(ψr', H, ψr)
-println()
-println("Free fermion starting state energy")
-@show flux(ψ0)
-@show inner(ψ0', H, ψ0)
-
-println("\nStart from product state")
-@time dmrg(H, ψr; nsweeps=10, maxdim=[10, 20, _maxlinkdim], cutoff=_cutoff)
-
-println("\nStart from free fermion state")
-@time dmrg(H, ψ0; nsweeps=5, maxdim=_maxlinkdim, cutoff=_cutoff)
-
-nothing
diff --git a/ITensorGaussianMPS/examples/broken/hubbard_2d_no_spin_conservation.jl b/ITensorGaussianMPS/examples/broken/hubbard_2d_no_spin_conservation.jl
deleted file mode 100644
index f69a551c5e..0000000000
--- a/ITensorGaussianMPS/examples/broken/hubbard_2d_no_spin_conservation.jl
+++ /dev/null
@@ -1,96 +0,0 @@
-using ITensors
-using ITensorGaussianMPS
-using LinearAlgebra
-
-# Electrons
-
-# Half filling
-Nx, Ny = 6, 3
-N = Nx * Ny
-Nf = N
-
-@show Nx, Ny
-@show N, Nf
-
-# Maximum MPS link dimension
-_maxlinkdim = 1_000
-
-@show _maxlinkdim
-
-# DMRG cutoff
-_cutoff = 1e-5
-
-# Hopping
-t = 1.0
-
-# Electron-electron on-site interaction
-U = 4.0
-
-@show t, U
-
-lattice = square_lattice(Nx, Ny; yperiodic=true)
-
-# Make the free fermion Hamiltonian for the up spins
-os_up = OpSum()
-for b in lattice
- os_up .+= -t, "Cdagup", b.s1, "Cup", b.s2
- os_up .+= -t, "Cdagup", b.s2, "Cup", b.s1
-end
-
-# Make the free fermion Hamiltonian for the down spins
-os_dn = OpSum()
-for b in lattice
- os_dn .+= -t, "Cdagdn", b.s1, "Cdn", b.s2
- os_dn .+= -t, "Cdagdn", b.s2, "Cdn", b.s1
-end
-
-# Hopping Hamiltonian with 2*N spinless fermions,
-# alternating up and down spins
-h = hopping_hamiltonian(os_up, os_dn)
-
-# Get the Slater determinant
-Φ = slater_determinant_matrix(h, Nf)
-
-println()
-println("Exact free fermion energy: ", tr(Φ'h * Φ))
-println()
-
-# Create an MPS from the slater determinant.
-# In this example we are turning off Sz conservation.
-s = siteinds("Electron", N; conserve_qns=true, conserve_sz=false)
-println("Making free fermion starting MPS")
-@time ψ0 = slater_determinant_to_mps(
- s, Φ; eigval_cutoff=1e-4, cutoff=_cutoff, maxdim=_maxlinkdim
-)
-@show maxlinkdim(ψ0)
-
-os = os_up + os_dn
-for n in 1:N
- os .+= U, "Nupdn", n
-end
-H = MPO(os, s)
-
-# Random starting state
-ψr = randomMPS(s, n -> n ≤ Nf ? (isodd(n) ? "↑" : "↓") : "0")
-
-println("\nRandom starting state energy")
-@show flux(ψr)
-@show inner(ψr, H, ψr)
-
-println("\nFree fermion MPS starting state energy")
-@show flux(ψ0)
-@show inner(ψ0, H, ψ0)
-
-println("\nStart from random product state")
-dmrg_kwargs = (;
- nsweeps=10,
- maxdim=[10, 20, 100, 200, _maxlinkdim],
- cutoff=_cutoff,
- noise=[1e-7, 1e-8, 1e-10, 0.0],
-)
-@time dmrg(H, ψr; dmrg_kwargs...)
-
-println("\nStart from free fermion state")
-@time dmrg(H, ψ0; nsweeps=10, maxdim=_maxlinkdim, cutoff=_cutoff)
-
-nothing
diff --git a/ITensorGaussianMPS/examples/hubbard_1d_spin_conservation.jl b/ITensorGaussianMPS/examples/hubbard_1d_spin_conservation.jl
deleted file mode 100644
index 460bdd4df9..0000000000
--- a/ITensorGaussianMPS/examples/hubbard_1d_spin_conservation.jl
+++ /dev/null
@@ -1,109 +0,0 @@
-using ITensors
-using ITensorGaussianMPS
-using LinearAlgebra
-
-# Electrons
-
-# Half filling
-N = 100
-Nf_up = N ÷ 2
-Nf_dn = N ÷ 2
-Nf = Nf_up + Nf_dn
-
-@show N, Nf
-
-# Maximum MPS link dimension
-_maxlinkdim = 200
-
-@show _maxlinkdim
-
-# DMRG cutoff
-_cutoff = 1e-8
-
-# Hopping
-t = 1.0
-
-# Electron-electron on-site interaction
-U = 1.0
-
-@show t, U
-
-# Make the free fermion Hamiltonian for the up spins
-os_up = OpSum()
-for n in 1:(N - 1)
- os_up .+= -t, "Cdagup", n, "Cup", n + 1
- os_up .+= -t, "Cdagup", n + 1, "Cup", n
-end
-
-# Make the free fermion Hamiltonian for the down spins
-os_dn = OpSum()
-for n in 1:(N - 1)
- os_dn .+= -t, "Cdagdn", n, "Cdn", n + 1
- os_dn .+= -t, "Cdagdn", n + 1, "Cdn", n
-end
-
-# Hopping Hamiltonians for the up and down spins
-h_up = hopping_hamiltonian(os_up)
-h_dn = hopping_hamiltonian(os_dn)
-
-# Get the Slater determinant
-Φ_up = slater_determinant_matrix(h_up, Nf_up)
-Φ_dn = slater_determinant_matrix(h_dn, Nf_dn)
-
-# Create an MPS from the Slater determinants.
-s = siteinds("Electron", N; conserve_qns=true)
-println("Making free fermion starting MPS")
-@time ψ0 = slater_determinant_to_mps(
- s, Φ_up, Φ_dn; eigval_cutoff=1e-4, cutoff=_cutoff, maxdim=_maxlinkdim
-)
-@show maxlinkdim(ψ0)
-
-# The total non-interacting part of the Hamiltonian
-os_noninteracting = OpSum()
-for n in 1:(N - 1)
- os_noninteracting .+= -t, "Cdagup", n, "Cup", n + 1
- os_noninteracting .+= -t, "Cdagdn", n, "Cdn", n + 1
- os_noninteracting .+= -t, "Cdagup", n + 1, "Cup", n
- os_noninteracting .+= -t, "Cdagdn", n + 1, "Cdn", n
-end
-
-H_noninteracting = MPO(os_noninteracting, s)
-@show inner(ψ0', H_noninteracting, ψ0)
-@show sum(diag(Φ_up' * h_up * Φ_up)) + sum(diag(Φ_dn' * h_dn * Φ_dn))
-
-# The total interacting Hamiltonian
-os_interacting = OpSum()
-for n in 1:(N - 1)
- os_interacting .+= -t, "Cdagup", n, "Cup", n + 1
- os_interacting .+= -t, "Cdagdn", n, "Cdn", n + 1
- os_interacting .+= -t, "Cdagup", n + 1, "Cup", n
- os_interacting .+= -t, "Cdagdn", n + 1, "Cdn", n
-end
-for n in 1:N
- os_interacting .+= U, "Nupdn", n
-end
-H = MPO(os_interacting, s)
-#@show norm(prod(H) - prod(H_noninteracting))
-
-# Random starting state
-ψr = randomMPS(s, n -> n ≤ Nf ? (isodd(n) ? "↑" : "↓") : "0")
-
-println("Random starting state energy")
-@show flux(ψr)
-@show inner(ψr', H, ψr)
-println()
-println("Free fermion starting state energy")
-@show flux(ψ0)
-@show inner(ψ0', H, ψ0)
-
-println("\nStart from random product state")
-er, ψ̃r = @time dmrg(H, ψr; nsweeps=10, maxdim=[10, 20, _maxlinkdim], cutoff=_cutoff)
-@show er
-@show flux(ψ̃r)
-
-println("\nStart from free fermion state")
-e0, ψ̃0 = @time dmrg(H, ψ0; nsweeps=5, maxdim=_maxlinkdim, cutoff=_cutoff)
-@show e0
-@show flux(ψ̃0)
-
-nothing
diff --git a/ITensorGaussianMPS/examples/hubbard_2d_spin_conservation.jl b/ITensorGaussianMPS/examples/hubbard_2d_spin_conservation.jl
deleted file mode 100644
index 41274b5a43..0000000000
--- a/ITensorGaussianMPS/examples/hubbard_2d_spin_conservation.jl
+++ /dev/null
@@ -1,106 +0,0 @@
-using ITensors
-using ITensorGaussianMPS
-using LinearAlgebra
-
-# Electrons
-
-# Half filling
-Nx, Ny = 6, 3
-N = Nx * Ny
-Nf = N
-Nf_up = N ÷ 2
-Nf_dn = N - Nf_up
-
-@show Nx, Ny
-@show N, Nf
-
-# Maximum MPS link dimension
-_maxlinkdim = 1_000
-
-@show _maxlinkdim
-
-# DMRG cutoff
-_cutoff = 1e-5
-
-# Hopping
-t = 1.0
-
-# Electron-electron on-site interaction
-U = 4.0
-
-@show t, U
-
-lattice = square_lattice(Nx, Ny; yperiodic=true)
-
-# Make the free fermion Hamiltonian for the up spins
-os_up = OpSum()
-for b in lattice
- os_up .+= -t, "Cdagup", b.s1, "Cup", b.s2
- os_up .+= -t, "Cdagup", b.s2, "Cup", b.s1
-end
-
-# Make the free fermion Hamiltonian for the down spins
-os_dn = OpSum()
-for b in lattice
- os_dn .+= -t, "Cdagdn", b.s1, "Cdn", b.s2
- os_dn .+= -t, "Cdagdn", b.s2, "Cdn", b.s1
-end
-
-# Hopping Hamiltonian with 2*N spinless fermions,
-# alternating up and down spins
-h_up = hopping_hamiltonian(os_up)
-h_dn = hopping_hamiltonian(os_dn)
-
-# Get the Slater determinant
-Φ_up = slater_determinant_matrix(h_up, Nf_up)
-Φ_dn = slater_determinant_matrix(h_dn, Nf_dn)
-
-println()
-println("Exact free fermion energy: ", tr(Φ_up'h_up * Φ_up) + tr(Φ_dn'h_dn * Φ_dn))
-println()
-
-# Create an MPS from the Slater determinants.
-# For now it only works without Sz conservation; this will be supported soon.
-s = siteinds("Electron", N; conserve_qns=true)
-println("Making free fermion starting MPS")
-@time ψ0 = slater_determinant_to_mps(
- s, Φ_up, Φ_dn; eigval_cutoff=1e-4, cutoff=_cutoff, maxdim=_maxlinkdim
-)
-@show maxlinkdim(ψ0)
-
-os = OpSum()
-for b in lattice
- os .+= -t, "Cdagup", b.s1, "Cup", b.s2
- os .+= -t, "Cdagdn", b.s1, "Cdn", b.s2
- os .+= -t, "Cdagup", b.s2, "Cup", b.s1
- os .+= -t, "Cdagdn", b.s2, "Cdn", b.s1
-end
-for n in 1:N
- os .+= U, "Nupdn", n
-end
-H = MPO(os, s)
-
-# Random starting state
-ψr = randomMPS(s, n -> n ≤ Nf ? (isodd(n) ? "↑" : "↓") : "0")
-
-println("\nRandom starting state energy")
-@show flux(ψr)
-@show inner(ψr', H, ψr)
-
-println("\nFree fermion MPS starting state energy")
-@show flux(ψ0)
-@show inner(ψ0', H, ψ0)
-
-println("\nStart from random product state")
-dmrg_kwargs = (;
- nsweeps=10,
- maxdim=[10, 20, 100, 200, _maxlinkdim],
- cutoff=_cutoff,
- noise=[1e-7, 1e-8, 1e-10, 0.0],
-)
-@time dmrg(H, ψr; dmrg_kwargs...)
-
-println("\nStart from free fermion state")
-@time dmrg(H, ψ0; nsweeps=10, maxdim=_maxlinkdim, cutoff=_cutoff)
-
-nothing
diff --git a/ITensorGaussianMPS/examples/mps_to_determinants.jl b/ITensorGaussianMPS/examples/mps_to_determinants.jl
deleted file mode 100644
index c3590b4bc7..0000000000
--- a/ITensorGaussianMPS/examples/mps_to_determinants.jl
+++ /dev/null
@@ -1,81 +0,0 @@
-using ITensors
-using ITensorGaussianMPS
-using LinearAlgebra
-
-# Half filling
-N = 20
-Nf_up = N ÷ 2
-Nf_dn = N ÷ 2
-Nf = Nf_up + Nf_dn
-
-@show N, Nf
-
-# Maximum MPS link dimension
-_maxlinkdim = 50
-
-@show _maxlinkdim
-
-# DMRG cutoff
-_cutoff = 1e-8
-
-# Hopping
-t = 1.0
-
-# Electron-electron on-site interaction
-U = 1.0
-
-@show t, U
-
-# Make the free fermion Hamiltonian for the up spins
-os_up = OpSum()
-for n in 1:(N - 1)
- os_up .+= -t, "Cdagup", n, "Cup", n + 1
- os_up .+= -t, "Cdagup", n + 1, "Cup", n
-end
-
-# Make the free fermion Hamiltonian for the down spins
-os_dn = OpSum()
-for n in 1:(N - 1)
- os_dn .+= -t, "Cdagdn", n, "Cdn", n + 1
- os_dn .+= -t, "Cdagdn", n + 1, "Cdn", n
-end
-
-# Hopping Hamiltonians for the up and down spins
-h_up = hopping_hamiltonian(os_up)
-h_dn = hopping_hamiltonian(os_dn)
-
-# Get the Slater determinant
-Φ_up = slater_determinant_matrix(h_up, Nf_up)
-Φ_dn = slater_determinant_matrix(h_dn, Nf_dn)
-
-# Create an MPS from the Slater determinants.
-s = siteinds("Electron", N; conserve_qns=true)
-println("Making free fermion starting MPS")
-@time ψ0 = slater_determinant_to_mps(
- s, Φ_up, Φ_dn; eigval_cutoff=1e-4, cutoff=_cutoff, maxdim=_maxlinkdim
-)
-@show maxlinkdim(ψ0)
-
-# The total interacting Hamiltonian
-os = os_up + os_dn
-for n in 1:N
- os .+= U, "Nupdn", n
-end
-H = MPO(os, s)
-
-println("Free fermion starting state energy")
-@show flux(ψ0)
-@show inner(ψ0', H, ψ0)
-
-println("\nStart from free fermion state")
-e, ψ = @time dmrg(H, ψ0; nsweeps=5, maxdim=_maxlinkdim, cutoff=_cutoff)
-@show e
-@show flux(ψ)
-
-using ITensorGaussianMPS: correlation_matrix_to_gmps, correlation_matrix_to_mps, entropy
-
-Λ_up = correlation_matrix(ψ, "Cdagup", "Cup")
-Λ_dn = correlation_matrix(ψ, "Cdagdn", "Cdn")
-ψ̃0 = correlation_matrix_to_mps(s, Λ_up, Λ_dn; eigval_cutoff=1e-2, maxblocksize=4)
-@show inner(ψ̃0, ψ)
-@show inner(ψ̃0', H, ψ̃0)
diff --git a/ITensorGaussianMPS/examples/spinless_fermion.jl b/ITensorGaussianMPS/examples/spinless_fermion.jl
deleted file mode 100644
index 7d2f74b9af..0000000000
--- a/ITensorGaussianMPS/examples/spinless_fermion.jl
+++ /dev/null
@@ -1,71 +0,0 @@
-using ITensors
-using ITensorGaussianMPS
-using LinearAlgebra
-
-# Half filling
-N = 50
-Nf = N ÷ 2
-
-@show N, Nf
-
-# Maximum MPS link dimension
-_maxlinkdim = 100
-
-@show _maxlinkdim
-
-# DMRG cutoff
-_cutoff = 1e-12
-
-# Hopping
-t = 1.0
-
-# Electron-electron on-site interaction
-U = 1.0
-
-@show t, U
-
-# Free fermion Hamiltonian
-os = OpSum()
-for n in 1:(N - 1)
- os .+= -t, "Cdag", n, "C", n + 1
- os .+= -t, "Cdag", n + 1, "C", n
-end
-
-# Hopping Hamiltonian with N spinless fermions
-h = hopping_hamiltonian(os)
-
-# Get the Slater determinant
-Φ = slater_determinant_matrix(h, Nf)
-
-# Create an MPS for the free fermion ground state
-s = siteinds("Fermion", N; conserve_qns=true)
-println("Making free fermion starting MPS")
-@time ψ0 = slater_determinant_to_mps(
- s, Φ; eigval_cutoff=1e-4, cutoff=_cutoff, maxdim=_maxlinkdim
-)
-@show maxlinkdim(ψ0)
-
-# Make an interacting Hamiltonian
-for n in 1:(N - 1)
- os .+= U, "N", n, "N", n + 1
-end
-H = MPO(os, s)
-
-# Random starting state
-ψr = randomMPS(s, n -> n ≤ Nf ? "1" : "0")
-
-println("\nRandom state starting energy")
-@show flux(ψr)
-@show inner(ψr', H, ψr)
-
-println("\nFree fermion starting energy")
-@show flux(ψ0)
-@show inner(ψ0', H, ψ0)
-
-println("\nRun dmrg with random starting state")
-@time dmrg(H, ψr; nsweeps=20, maxdim=[10, 20, 40, _maxlinkdim], cutoff=_cutoff)
-
-println("\nRun dmrg with free fermion starting state")
-@time dmrg(H, ψ0; nsweeps=4, maxdim=_maxlinkdim, cutoff=_cutoff)
-
-nothing
diff --git a/ITensorGaussianMPS/examples/spinless_fermion_pairing.jl b/ITensorGaussianMPS/examples/spinless_fermion_pairing.jl
deleted file mode 100644
index 145b042e41..0000000000
--- a/ITensorGaussianMPS/examples/spinless_fermion_pairing.jl
+++ /dev/null
@@ -1,84 +0,0 @@
-# This script shows a minimal example of the GMPS-MPS conversion
-# of the ground state of a quadratic fermionic Hamiltonian with pairing terms.
-using LinearAlgebra
-using ITensors
-using ITensorGaussianMPS
-
-ITensors.disable_contraction_sequence_optimization()
-let
- N = 8
- sites = siteinds("Fermion", N; conserve_qns=false, conserve_nfparity=true)
- _maxlinkdim = 100
- # DMRG cutoff
- _cutoff = 1e-13
- # Hopping
- t = -1.0
- # Electron-electron on-site interaction
- U = 0.0
- # Pairing
- Delta = 1.00
- @show t, U, Delta
- # Free fermion Hamiltonian
- os_h = OpSum()
- for n in 1:(N - 1)
- os_h .+= -t, "Cdag", n, "C", n + 1
- os_h .+= -t, "Cdag", n + 1, "C", n
- end
- os_p = OpSum()
- for n in 1:(N - 1)
- os_p .+= Delta / 2.0, "Cdag", n, "Cdag", n + 1
- os_p .+= -Delta / 2.0, "Cdag", n + 1, "Cdag", n
- os_p .+= -Delta / 2.0, "C", n, "C", n + 1
- os_p .+= Delta / 2.0, "C", n + 1, "C", n
- end
- os = os_h + os_p
- h = quadratic_hamiltonian(os)
- hb = ITensorGaussianMPS.reverse_interleave(h)
- # Make MPO from free fermion Hamiltonian in blocked format
- os_new = OpSum()
- for i in 1:N
- for j in 1:N
- if abs(hb[i, j]) > 1e-8
- os_new .+= -t, "Cdag", i, "C", j
- os_new .+= t, "C", i, "Cdag", j
- os_new .+= Delta / 2.0 * sign(i - j), "C", i, "C", j
- os_new .+= -Delta / 2.0 * sign(i - j), "Cdag", i, "Cdag", j
- end
- end
- end
- H = ITensors.MPO(os_h + os_p, sites)
-
- #Get Ground state
- @assert ishermitian(h)
- e = eigvals(Hermitian(h))
- @show e
- E, V = eigen_gaussian(h)
- @show sum(E[1:N])
- Φ = V[:, 1:N]
- c = real.(conj(Φ) * transpose(Φ))
-
- #Get (G)MPS
- psi = ITensorGaussianMPS.correlation_matrix_to_mps(
- sites, c; eigval_cutoff=1e-10, maxblocksize=14, cutoff=1e-11
- )
- @show eltype(psi[1])
- cdagc = correlation_matrix(psi, "C", "Cdag")
- cc = correlation_matrix(psi, "C", "C")
-
- println("\nFree fermion starting energy")
- @show flux(psi)
- @show inner(psi', H, psi)
- println("\nRun dmrg with GMPS starting state")
- _, psidmrg = dmrg(H, psi; nsweeps=12, maxdim=[10, 20, 40, _maxlinkdim], cutoff=_cutoff)
- cdagc_dmrg = correlation_matrix(psidmrg, "C", "Cdag")
- cc_dmrg = correlation_matrix(psidmrg, "C", "C")
-
- @show norm(cdagc_dmrg - cdagc)
- @show norm(cc_dmrg - cc)
-
- @show inner(psidmrg', H, psidmrg)
- @show(abs(inner(psidmrg, psi)))
-
- #return
-end
-nothing
diff --git a/ITensorGaussianMPS/src/ITensorGaussianMPS.jl b/ITensorGaussianMPS/src/ITensorGaussianMPS.jl
deleted file mode 100644
index 3bc0b920fa..0000000000
--- a/ITensorGaussianMPS/src/ITensorGaussianMPS.jl
+++ /dev/null
@@ -1,25 +0,0 @@
-module ITensorGaussianMPS
-
-using Compat
-using ITensors
-using ITensors.NDTensors
-using LinearAlgebra
-import LinearAlgebra: Givens
-
-export slater_determinant_to_mps,
- correlation_matrix_to_mps,
- slater_determinant_to_gmps,
- correlation_matrix_to_gmps,
- hopping_hamiltonian,
- hopping_operator,
- quadratic_hamiltonian,
- quadratic_operator,
- slater_determinant_matrix,
- slater_determinant_to_gmera,
- eigen_gaussian
-
-include("gmps.jl")
-include("gmera.jl")
-include("linalg.jl")
-
-end
diff --git a/ITensorGaussianMPS/src/gmera.jl b/ITensorGaussianMPS/src/gmera.jl
deleted file mode 100644
index 2a84663321..0000000000
--- a/ITensorGaussianMPS/src/gmera.jl
+++ /dev/null
@@ -1,183 +0,0 @@
-# Brick-wall scanning for a single MERA layer, with special treatment of the tail
-function correlation_matrix_to_gmps_brickwall_tailed(
- Λ0::AbstractMatrix{ElT},
- inds::Vector{Int};
- eigval_cutoff::Float64=1e-8,
- maxblocksize::Int=size(Λ0, 1),
-) where {ElT<:Number}
- Λ = Hermitian(Λ0)
- N = size(Λ, 1)
- V = Circuit{ElT}([])
- #ns = Vector{real(ElT)}(undef, 2*N)
- err_tot = 0.0
- indsnext = Int[]
- relinds = Int[]
- for i in 1:N
- if i % 2 == 0
- append!(indsnext, inds[i])
- append!(relinds, i)
- continue
- end
- blocksize = 0
- n = 0.0
- err = 0.0
- p = Int[]
- uB = 0.0
- # find the block whose lowest eigenvalue is within tolerance
- for blocksize in 1:maxblocksize
- j = min(i + blocksize, N)
- ΛB = deepcopy(Λ[i:j, i:j]) #@view Λ[i:j, i:j] # \LambdaB is still part of Lambda
- nB, uB = eigen(Hermitian(ΛB))
- # sort by -(n * log(n) + (1 - n) * log(1 - n)) in ascending order
- p = sortperm(nB; by=entropy)
- n = nB[p[1]]
- err = min(n, 1 - n)
- err ≤ eigval_cutoff && break
- end
- # keep the node if the error cannot be reduced
- if i + maxblocksize >= N && err > eigval_cutoff
- append!(indsnext, inds[i])
- append!(relinds, i)
- continue
- end
- err_tot += err
- #ns[i] = n # eigenvalue
- v = deepcopy(uB[:, p[1]]) #@view uB[:, p[1]] # eigenvector of the correlation matrix
- g, _ = givens_rotations(v) # convert eigenvector into givens rotation
- shift!(g, i - 1) # shift rotation location
- # In-place version of:
- # V = g * V
- lmul!(g, V)
- #@show g
- Λ = Hermitian(g * Λ * g') #isolate current site i
- end
- return Λ, V, indsnext, relinds
-end
-
-# shift Givens rotation indices according to inds
-function shiftByInds!(G::Circuit, inds::Vector{Int})
- for (n, g) in enumerate(G.rotations)
- G.rotations[n] = Givens(inds[g.i1], inds[g.i2], g.c, g.s)
- end
- return G
-end
-
-"""
-    correlation_matrix_to_gmera(Λ0::AbstractMatrix{ElT}; eigval_cutoff::Float64 = 1e-8, maxblocksize::Int = size(Λ0, 1))
-
-Diagonalize a correlation matrix through MERA layers,
-outputting the gates and the eigenvalues of the correlation matrix.
-"""
-# Combine gates for each MERA layer
-function correlation_matrix_to_gmera(
- Λ0::AbstractMatrix{ElT}; eigval_cutoff::Float64=1e-8, maxblocksize::Int=size(Λ0, 1)
-) where {ElT<:Number}
- Λ = Hermitian(Λ0)
- N = size(Λ, 1)
- Nnew = N - 1
- inds = collect(1:N)
- V = Circuit{ElT}([])
- Λtemp = deepcopy(Λ)
- layer = 0 # layer label of MERA
- while N > Nnew # conditioned on the reduction of nodes
- N = Nnew
- # indsnext: next layer indexes with original matrix labels
- # relinds: next layer indexes with labels from the last layer
- Λr, C, indsnext, relinds = correlation_matrix_to_gmps_brickwall_tailed(
- Λtemp, inds; eigval_cutoff=eigval_cutoff, maxblocksize=maxblocksize
- )
- shiftByInds!(C, inds) # shift the index back to the original matrix
- inds = indsnext
- Λtemp = deepcopy(Λr[relinds, relinds]) # project onto the even sites for the next layer, keeping the indices in relinds
- Nnew = size(Λtemp, 1)
- lmul!(C, V) # add vector of givens rotation C into the larger vector V
- #V = C * V
- layer += 1
- #Λ = ITensors.Hermitian(C * Λ * C')
- end
- # gmps for the final layer
- Λr, C = correlation_matrix_to_gmps(
- Λtemp; eigval_cutoff=eigval_cutoff, maxblocksize=maxblocksize
- )
- shiftByInds!(C, inds)
- lmul!(C, V)
- Λ = V * Λ0 * V'
- ns = real.(diag(Λ))
- return ns, V
-end
-
-# output the MERA gates and eigenvalues of the correlation matrix from the wavefunction (Slater determinant)
-function slater_determinant_to_gmera(Φ::AbstractMatrix; kwargs...)
- return correlation_matrix_to_gmera(conj(Φ) * transpose(Φ); kwargs...)
-end
-
-# output the MPS based on the MERA gates
-function correlation_matrix_to_mera(
- s::Vector{<:Index},
- Λ::AbstractMatrix;
- eigval_cutoff::Float64=1e-8,
- maxblocksize::Int=size(Λ, 1),
- kwargs...,
-)
- @assert size(Λ, 1) == size(Λ, 2)
- ns, C = correlation_matrix_to_gmera(
- Λ; eigval_cutoff=eigval_cutoff, maxblocksize=maxblocksize
- )
- if all(hastags("Fermion"), s)
- U = [ITensor(s, g) for g in reverse(C.rotations)]
- ψ = MPS(s, n -> round(Int, ns[n]) + 1, U; kwargs...)
- elseif all(hastags("Electron"), s)
- isodd(length(s)) && error(
- "For Electron type, must have even number of sites of alternating up and down spins.",
- )
- N = length(s)
- if isspinful(s)
- error(
- "correlation_matrix_to_mps(Λ::AbstractMatrix) currently only supports spinless Fermions or Electrons that do not conserve Sz. Use correlation_matrix_to_mps(Λ_up::AbstractMatrix, Λ_dn::AbstractMatrix) to use spinful Fermions/Electrons.",
- )
- else
- sf = siteinds("Fermion", 2 * N; conserve_qns=true)
- end
- U = [ITensor(sf, g) for g in reverse(C.rotations)]
- ψf = MPS(sf, n -> round(Int, ns[n]) + 1, U; kwargs...)
- ψ = MPS(N)
- for n in 1:N
- i, j = 2 * n - 1, 2 * n
- C = combiner(sf[i], sf[j])
- c = combinedind(C)
- ψ[n] = ψf[i] * ψf[j] * C
- ψ[n] *= δ(dag(c), s[n])
- end
- else
- error("All sites must be Fermion or Electron type.")
- end
- return ψ
-end
-
-function slater_determinant_to_mera(s::Vector{<:Index}, Φ::AbstractMatrix; kwargs...)
- return correlation_matrix_to_mera(s, conj(Φ) * transpose(Φ); kwargs...)
-end
-
-# Build the N×N single-particle rotation matrix from the Givens rotations in circuit G; N is the total number of sites
-function UmatFromGates(G::Circuit, N::Int)
- U = Matrix{Float64}(I, N, N)
- n = size(G.rotations, 1)
- for k in 1:n
- rot = G.rotations[k]
- U = rot * U
- end
- return U
-end
-
-# compute the energy of the state based on the gates
-function EfromGates(H::Matrix{<:Number}, U::Matrix{<:Number})
- Htemp = U * H * U'
- Etot = 0
- N = size(U, 1)
- for i in 1:N
- if Htemp[i, i] < 0.0
- Etot += Htemp[i, i]
- end
- end
- return Etot
-end
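`EfromGates` rotates `H` into the gate basis and sums the negative diagonal entries, i.e. the energy of filling all negative-energy modes. A minimal NumPy analogue of that computation (illustrative sketch only, not part of the package):

```python
import numpy as np

H = np.array([[0.0, 1.0],
              [1.0, 0.0]])            # two-site hopping Hamiltonian
_, u = np.linalg.eigh(H)
Ht = u.T @ H @ u                      # rotate into the eigenbasis, as U * H * U' above
E = sum(Ht[i, i] for i in range(2) if Ht[i, i] < 0)
assert np.isclose(E, -1.0)            # only the bonding mode (energy -1) is filled
```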
diff --git a/ITensorGaussianMPS/src/gmps.jl b/ITensorGaussianMPS/src/gmps.jl
deleted file mode 100644
index 6d2e3a8e96..0000000000
--- a/ITensorGaussianMPS/src/gmps.jl
+++ /dev/null
@@ -1,1073 +0,0 @@
-import Base: sortperm, size, length, eltype, conj, transpose, copy, *
-using ITensors: alias
-using ITensors.ITensorMPS: ITensorMPS
-abstract type AbstractSymmetry end
-struct ConservesNfParity{T} <: AbstractSymmetry
- data::T
-end
-
-struct ConservesNf{T} <: AbstractSymmetry
- data::T
-end
-#
-# Single particle von Neumann entanglement entropy
-#
-function entropy(n::Number)
- (n ≤ 0 || n ≥ 1) && return 0
- return -(n * log(n) + (1 - n) * log(1 - n))
-end
-
-entropy(ns::Vector{Float64}) = sum(entropy, ns)
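The single-particle entropy above is later used to rank eigenvalues by how close they are to 0 or 1 (see `sortperm(x; by=entropy)` below). A NumPy sketch of the same formula (illustrative, not part of the package):

```python
import numpy as np

def entropy(n):
    # Von Neumann entropy of one orbital with occupation n;
    # occupations pinned at exactly 0 or 1 carry no entanglement.
    if n <= 0 or n >= 1:
        return 0.0
    return -(n * np.log(n) + (1 - n) * np.log(1 - n))

# A half-filled orbital is maximally entangled: S = log(2)
assert np.isclose(entropy(0.5), np.log(2))
assert entropy(0.0) == 0.0 and entropy(1.0) == 0.0
```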
-
-#
-# Linear Algebra tools
-#
-
-"""
- frobenius_distance(M1::AbstractMatrix, M2::AbstractMatrix)
-
-Computes the Frobenius distance `√tr((M1-M2)'*(M1-M2))`.
-"""
-function frobenius_distance(M1::AbstractMatrix, M2::AbstractMatrix)
- return sqrt(abs(tr(M1'M1) + tr(M2'M2) - tr(M1'M2) - tr(M2'M1)))
-end
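The four-trace form above is the expansion of the Frobenius norm `‖M1 - M2‖_F`; a quick NumPy check of that identity (illustrative only):

```python
import numpy as np

rng = np.random.default_rng(0)
M1 = rng.standard_normal((4, 4))
M2 = rng.standard_normal((4, 4))

# Expand tr((M1-M2)'(M1-M2)) into the four traces used above
d = np.sqrt(abs(
    np.trace(M1.conj().T @ M1) + np.trace(M2.conj().T @ M2)
    - np.trace(M1.conj().T @ M2) - np.trace(M2.conj().T @ M1)
))
assert np.isclose(d, np.linalg.norm(M1 - M2))  # Frobenius norm of the difference
```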
-
-#
-# Rotations
-#
-
-struct Circuit{T} <: LinearAlgebra.AbstractRotation{T}
- rotations::Vector{Givens{T}}
-end
-
-Base.adjoint(R::Circuit) = Adjoint(R)
-
-function Base.show(io::IO, ::MIME"text/plain", C::Circuit{T}) where {T}
- print(io, "Circuit{$T}:\n")
- return show(io, "text/plain", C.rotations)
-end
-
-function Base.copy(aR::Adjoint{<:Any,Circuit{T}}) where {T}
- return Circuit{T}(reverse!([r' for r in aR.parent.rotations]))
-end
-
-function LinearAlgebra.lmul!(G::Givens, R::Circuit)
- push!(R.rotations, G)
- return R
-end
-
-function LinearAlgebra.lmul!(R::Circuit, A::AbstractArray)
- @inbounds for i in 1:length(R.rotations)
- lmul!(R.rotations[i], A)
- end
- return A
-end
-
-function LinearAlgebra.rmul!(A::AbstractMatrix, adjR::Adjoint{<:Any,<:Circuit})
- R = adjR.parent
- @inbounds for i in 1:length(R.rotations)
- rmul!(A, adjoint(R.rotations[i]))
- end
- return A
-end
-
-Base.:*(g1::Circuit, g2::Circuit) = Circuit(vcat(g2.rotations, g1.rotations))
-LinearAlgebra.lmul!(g1::Circuit, g2::Circuit) = append!(g2.rotations, g1.rotations)
-
-Base.:*(A::Circuit, B::Union{<:Hermitian,<:Diagonal}) = A * convert(Matrix, B)
-Base.:*(A::Adjoint{<:Any,<:Circuit}, B::Hermitian) = copy(A) * convert(Matrix, B)
-Base.:*(A::Adjoint{<:Any,<:Circuit}, B::Diagonal) = copy(A) * convert(Matrix, B)
-function Base.:*(A::Adjoint{<:Any,<:AbstractVector}, B::Adjoint{<:Any,<:Circuit})
- return convert(Matrix, A) * B
-end
-
-function LinearAlgebra.rmul!(A::AbstractMatrix, R::Circuit)
- @inbounds for i in reverse(1:length(R.rotations))
- rmul!(A, R.rotations[i])
- end
- return A
-end
-
-function Base.:*(A::AbstractMatrix, B::Adjoint{<:Any,<:Circuit})
- AB = copy(A)
- rmul!(AB, B)
- return AB
-end
-
-function replace!(f, G::Circuit)
- for i in eachindex(G.rotations)
- G.rotations[i] = f(G.rotations[i])
- end
- return G
-end
-
-function replace_indices!(f, G::Circuit)
- return replace!(g -> Givens(f(g.i1), f(g.i2), g.c, g.s), G)
-end
-
-function shift!(G::Circuit, i::Int)
- return replace_indices!(j -> j + i, G)
-end
-
-function scale!(G::Circuit, i::Int)
- return replace_indices!(j -> j * i, G)
-end
-
-function conj!(G::Circuit)
- return replace!(g -> Givens(g.i1, g.i2, g.c, g.s'), G)
-end
-
-ngates(G::Circuit) = length(G.rotations)
-
-#
-# Free fermion tools
-#
-
-is_creation_operator(o::Op) = is_creation_operator(ITensors.name(o))
-is_creation_operator(o::String) = is_creation_operator(OpName(o))
-is_creation_operator(::OpName) = false
-is_creation_operator(::OpName"Cdag") = true
-is_creation_operator(::OpName"Cdagup") = true
-is_creation_operator(::OpName"Cdagdn") = true
-is_creation_operator(::OpName"c†") = true
-is_creation_operator(::OpName"c†↑") = true
-is_creation_operator(::OpName"c†↓") = true
-
-is_annihilation_operator(o::Op) = is_annihilation_operator(ITensors.name(o))
-is_annihilation_operator(o::String) = is_annihilation_operator(OpName(o))
-is_annihilation_operator(::OpName) = false
-is_annihilation_operator(::OpName"C") = true
-is_annihilation_operator(::OpName"Cup") = true
-is_annihilation_operator(::OpName"Cdn") = true
-is_annihilation_operator(::OpName"c") = true
-is_annihilation_operator(::OpName"c↑") = true
-is_annihilation_operator(::OpName"c↓") = true
-
-expand_to_ladder_operators(o::Op) = expand_to_ladder_operators(ITensors.name(o))
-expand_to_ladder_operators(o::String) = expand_to_ladder_operators(OpName(o))
-expand_to_ladder_operators(opname::OpName) = opname # By default does nothing
-expand_to_ladder_operators(::OpName"N") = ["Cdag", "C"]
-expand_to_ladder_operators(::OpName"Nup") = ["Cdagup", "Cup"]
-expand_to_ladder_operators(::OpName"Ndn") = ["Cdagdn", "Cdn"]
-expand_to_ladder_operators(opname::OpName"n↑") = expand_to_ladder_operators(alias(opname))
-expand_to_ladder_operators(opname::OpName"n↓") = expand_to_ladder_operators(alias(opname))
-
-#interlaced_hamiltonian(h::AbstractMatrix) = h
-#blocked_hamiltonian(h::AbstractMatrix) = Hermitian(reverse_interleave(Matrix(h)))
-
-function quadrant(term)
- if is_creation_operator(term[1]) && is_annihilation_operator(term[2])
- q = (2, 2)
- elseif is_annihilation_operator(term[1]) && is_creation_operator(term[2])
- q = (1, 1)
- elseif is_annihilation_operator(term[1]) && is_annihilation_operator(term[2])
- q = (1, 2)
- elseif is_creation_operator(term[1]) && is_creation_operator(term[2])
- q = (2, 1)
- else
- error("Unknown quadratic hopping term: $term")
- end
- return q
-end
-
-function single_to_quadratic(term)
- site = ITensors.site(term[1])
- new_ops = expand_to_ladder_operators(term[1])
- return coefficient(term) * Op(new_ops[1], site) * Op(new_ops[2], site)
-end
-
-function quadratic_operator(os::OpSum)
- os = deepcopy(os)
- #os = ITensorMPS.sorteachterm(os, sites)
- os = ITensorMPS.sortmergeterms(os)
-
- nterms = length(os)
- coefs = Vector{Number}(undef, nterms)
- sites = Vector{Tuple{Int,Int}}(undef, nterms)
- quads = Vector{Tuple{Int,Int}}(undef, nterms)
- nsites = 0
- # detect terms and size of lattice
- for n in 1:nterms
- term = os[n]
- #@show term
- #@show term.coef
- coef = isreal(coefficient(term)) ? real(coefficient(term)) : coefficient(term)
- coefs[n] = coef
- term = (length(term) == 1) ? single_to_quadratic(term) : term
- length(term) ≠ 2 && error("Must create hopping Hamiltonian from quadratic Hamiltonian")
- quads[n] = quadrant(term)
- sites[n] = ntuple(n -> ITensors.site(term[n]), Val(2))
- nsites = max(nsites, maximum(sites[n]))
- end
- # detect coefficient type
- coef_type = mapreduce(typeof, promote_type, coefs)
- ElT = isreal(coefs) ? real(coef_type) : coef_type
- # fill Hamiltonian matrix with elements
- h = zeros(ElT, 2 * nsites, 2 * nsites)
- other_quad = i -> i == 2 ? 1 : 2
- for n in 1:nterms
- quad = quads[n]
- offsets = nsites .* (quad .- 1)
- if quad[1] != quad[2]
- h[(sites[n] .+ offsets)...] += coefs[n]
- else
- h[(sites[n] .+ offsets)...] += 0.5 * coefs[n]
- other_offsets = nsites .* (other_quad.(quad) .- 1)
- h[(sites[n] .+ other_offsets)...] += -0.5 * conj(coefs[n])
- end
- end
- return interleave(h)
-end
-
-function quadratic_operator(os_up::OpSum, os_dn::OpSum)
- h_up = quadratic_operator(os_up)
- h_dn = quadratic_operator(os_dn)
- @assert size(h_up) == size(h_dn)
- N = size(h_up, 1)
- h = zeros(eltype(h_up), (2 * N, 2 * N))
- n = div(N, 2)
- # interlace the blocks of both quadratic hamiltonians
- h_up = reverse_interleave(Matrix(h_up))
- h_dn = reverse_interleave(Matrix(h_dn))
- # super-quadrant (1,1)
- h[1:2:N, 1:2:N] = h_up[1:n, 1:n]
- h[2:2:N, 2:2:N] = h_dn[1:n, 1:n]
- # super-quadrant (2,1)
- h[(N + 1):2:(2 * N), 1:2:N] = h_up[(n + 1):(2 * n), 1:n]
- h[(N + 2):2:(2 * N), 2:2:N] = h_dn[(n + 1):(2 * n), 1:n]
- # super-quadrant (2,2)
- h[(N + 1):2:(2 * N), (N + 1):2:(2 * N)] = h_up[(n + 1):N, (n + 1):N]
- h[(N + 2):2:(2 * N), (N + 2):2:(2 * N)] = h_dn[(n + 1):N, (n + 1):N]
- # super-quadrant (1,2)
- h[1:2:N, (N + 1):2:(2 * N)] = h_up[1:n, (n + 1):(2 * n)]
- h[2:2:N, (N + 2):2:(2 * N)] = h_dn[1:n, (n + 1):(2 * n)]
-  # Convert from blocked to interlaced format. Odd base-rows are spin-up, even are spin-down.
- return interleave(h)
-end
-
-quadratic_hamiltonian(os::OpSum) = Hermitian(quadratic_operator(os))
-function quadratic_hamiltonian(os_up::OpSum, os_dn::OpSum)
- return Hermitian(quadratic_operator(os_up, os_dn))
-end
-
-function hopping_operator(os::OpSum; drop_pairing_terms_tol=nothing)
- # convert to blocked format
- h = reverse_interleave(Matrix(quadratic_hamiltonian(os)))
- # check that offdiagonal blocks are 0
- N = div(size(h, 1), 2)
- if isnothing(drop_pairing_terms_tol)
- drop_pairing_terms_tol = eps(real(eltype(h)))
- end
- if !all(abs.(h[1:N, (N + 1):(2 * N)]) .< drop_pairing_terms_tol)
- error("Trying to convert hamiltonian with pairing terms to hopping hamiltonian!")
- end
- return 2 .* h[(N + 1):(2 * N), (N + 1):(2 * N)]
-end
-
-# Make a combined hopping Hamiltonian for spin up and down
-function hopping_operator(os_up::OpSum, os_dn::OpSum; drop_pairing_terms_tol=nothing)
- # convert to blocked format
- h = reverse_interleave(Matrix(quadratic_hamiltonian(os_up, os_dn)))
- # check that offdiagonal blocks are 0
- N = div(size(h, 1), 2)
- if isnothing(drop_pairing_terms_tol)
- drop_pairing_terms_tol = eps(real(eltype(h)))
- end
- if !all(abs.(h[1:N, (N + 1):(2 * N)]) .< drop_pairing_terms_tol)
- error("Trying to convert hamiltonian with pairing terms to hopping hamiltonian!")
- end
- return 2 .* h[(N + 1):(2 * N), (N + 1):(2 * N)]
-end
-
-function hopping_hamiltonian(os::OpSum; drop_pairing_terms_tol=nothing)
- return Hermitian(hopping_operator(os; drop_pairing_terms_tol))
-end
-function hopping_hamiltonian(os_up::OpSum, os_dn::OpSum; drop_pairing_terms_tol=nothing)
- return Hermitian(hopping_operator(os_up, os_dn; drop_pairing_terms_tol))
-end
-
-function slater_determinant_matrix(h::AbstractMatrix, Nf::Int)
- _, u = eigen(h)
- return u[:, 1:Nf]
-end
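`slater_determinant_matrix` collects the `Nf` lowest orbitals of `h`; the correlation matrix `Λ = conj(Φ) * transpose(Φ)` built from it by the `slater_determinant_to_*` helpers is then a projector with eigenvalues 0 and 1, which is what `correlation_matrix_to_gmps` exploits. A NumPy sketch of that fact for the real-valued case (illustrative only):

```python
import numpy as np

rng = np.random.default_rng(1)
h = rng.standard_normal((6, 6))
h = (h + h.T) / 2                    # Hermitian single-particle Hamiltonian
_, u = np.linalg.eigh(h)
phi = u[:, :3]                       # occupy the 3 lowest orbitals (Nf = 3)

lam = phi.conj() @ phi.T             # Λ = conj(Φ) Φᵀ
# Pure-state correlation matrix: idempotent, eigenvalues 0 or 1, trace = Nf
assert np.allclose(lam @ lam, lam, atol=1e-10)
assert np.allclose(np.sort(np.linalg.eigvalsh(lam)), [0, 0, 0, 1, 1, 1], atol=1e-10)
```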
-
-#
-# Correlation matrix diagonalization
-#
-
-struct Boguliobov
- u::Givens
-end
-
-set_data(::ConservesNf, x) = ConservesNf(x)
-set_data(::ConservesNfParity, x) = ConservesNfParity(x)
-site_stride(::ConservesNf) = 1
-site_stride(::ConservesNfParity) = 2
-copy(A::T) where {T<:AbstractSymmetry} = T(copy(A.data))
-size(A::T) where {T<:AbstractSymmetry} = size(A.data)
-size(A::T, dim::Int) where {T<:AbstractSymmetry} = size(A.data, dim)
-
-length(A::T) where {T<:AbstractSymmetry} = length(A.data)
-eltype(A::T) where {T<:AbstractSymmetry} = eltype(A.data)
-Hermitian(A::T) where {T<:AbstractSymmetry} = set_data(A, Hermitian(A.data))
-conj(A::T) where {T<:AbstractSymmetry} = set_data(A, conj(A.data))
-transpose(A::T) where {T<:AbstractSymmetry} = set_data(A, transpose(A.data))
-
-"""
- givens_rotations(v::AbstractVector)
-
-For a vector `v`, return the `length(v)-1`
-Givens rotations `g` and the norm `r` such that:
-
-```julia
-g * v ≈ r * [n == 1 ? 1 : 0 for n in 1:length(v)]
-```
-"""
-function givens_rotations(v::AbstractVector{ElT}) where {ElT}
- N = length(v)
- gs = Circuit{ElT}([])
- r = v[1]
- for n in reverse(1:(N - 1))
- g, r = givens(v, n, n + 1)
- v = g * v
- lmul!(g, gs)
- end
- return gs, r
-end
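The loop above sweeps Givens rotations from the bottom of the vector upward, so the accumulated circuit rotates all weight into the first entry. A dense NumPy sketch of the same sweep (the `givens_matrix` helper is hypothetical, for illustration only):

```python
import numpy as np

def givens_matrix(v, i, j):
    # n x n rotation zeroing v[j] against v[i], LAPACK convention
    r = np.hypot(v[i], v[j])
    c, s = v[i] / r, v[j] / r
    G = np.eye(len(v))
    G[i, i] = G[j, j] = c
    G[i, j], G[j, i] = s, -s
    return G, r

v = np.array([3.0, 4.0, 12.0])
g_total = np.eye(3)
for n in reversed(range(len(v) - 1)):    # same order as reverse(1:(N-1)) above
    G, r = givens_matrix(v, n, n + 1)
    v = G @ v
    g_total = G @ g_total
# g * v0 ≈ r * e1, with r the norm of the original vector
assert np.allclose(v, [13.0, 0.0, 0.0])
assert np.isclose(r, 13.0)
```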
-
-givens_rotations(v::ConservesNf) = givens_rotations(v.data)
-
-"""
-    givens_rotations(_v0::ConservesNfParity)
-
-For a vector `v = _v0.data` from a fermionic Gaussian state, return the
-`4*length(v)-1` real Givens/Bogoliubov rotations `g` and the norm `r` such that:
-
-```julia
-g * v ≈ r * [n == 2 ? 1 : 0 for n in 1:length(v)]
-```
-
-Here `g` is composed of diagonal rotations aligning pairs of complex numbers
-in the complex plane, and of Givens/Bogoliubov rotations with real arguments
-only, acting on the interlaced single-particle space of annihilation and
-creation operator coefficients.
-"""
-function givens_rotations(_v0::ConservesNfParity)
- v0 = _v0.data
- N = div(length(v0), 2)
- if N == 1
- error(
- "Givens rotation on 2-element vector not allowed for ConservesNfParity-type calculations. This should have been caught elsewhere.",
- )
- end
- ElT = eltype(v0)
- gs = Circuit{ElT}([])
- v = copy(v0)
- # detect if v is actually number-conserving because only defined in terms of annihilation operators
- if norm(v[2:2:end]) < 10 * eps(real(ElT))
- r = v[1]
- gsca, _ = givens_rotations(v[1:2:end])
- replace_indices!(i -> 2 * i - 1, gsca)
- gscc = Circuit(copy(gsca.rotations))
- replace_indices!(i -> i + 1, gsca)
- conj!(gscc)
- gsc = interleave(gscc, gsca)
- LinearAlgebra.lmul!(gsc, gs)
- return gs, r
- end
- r = v[2]
-  # Givens rotations from creation-operator coefficients
- gscc, _ = givens_rotations(v[2:2:end])
- replace_indices!(i -> 2 * i, gscc)
- gsca = Circuit(copy(gscc.rotations))
- replace_indices!(i -> i - 1, gsca)
- conj!(gsca)
- gsc = interleave(gscc, gsca)
- LinearAlgebra.lmul!(gsc, gs)
- # detect if v is actually number-conserving because only defined in terms of creation operators
- if norm(v[1:2:end]) < 10 * eps(real(ElT))
- return gs, r
- end
- v = gsc * v
-  # If we get here, v was actually number-non-conserving, so proceed with
-  # Givens rotations from annihilation-operator coefficients
- gsaa, _ = givens_rotations(v[3:2:end])
- replace_indices!(i -> 2 * i + 1, gsaa)
- gsac = Circuit(copy(gsaa.rotations))
- replace_indices!(i -> i + 1, gsac)
- conj!(gsac)
- gsa = interleave(gsac, gsaa)
- v = gsa * v
- LinearAlgebra.lmul!(gsa, gs)
-
-  # Bogoliubov rotation for the remaining Bell pair
- g1, r = givens(v, 2, 3)
- g2 = Givens(1, 4, g1.c, g1.s')
- v = g1 * v
- v = g2 * v #should have no effect
- LinearAlgebra.lmul!(g2, gs)
- LinearAlgebra.lmul!(g1, gs)
- return gs, r
-end
-
-function maybe_drop_pairing_correlations(Λ0::AbstractMatrix{ElT}) where {ElT<:Number}
- Λblocked = reverse_interleave(Λ0)
- N = div(size(Λblocked, 1), 2)
- if all(x -> abs(x) <= 10 * eps(real(eltype(Λ0))), @view Λblocked[1:N, (N + 1):end])
- return ConservesNf(Λblocked[(N + 1):end, (N + 1):end])
- #return ConservesNfParity(Λ0)
- else
- return ConservesNfParity(Λ0)
- end
-end
-
-maybe_drop_pairing_correlations(Λ0::ConservesNf) = Λ0
-function maybe_drop_pairing_correlations(Λ0::ConservesNfParity)
- return maybe_drop_pairing_correlations(Λ0.data)
-end
-
-sortperm(x::ConservesNf) = sortperm(x.data; by=entropy)
-sortperm(x::ConservesNfParity) = sortperm(x.data)
-
-function get_error(x::ConservesNf, perm)
- n = x.data[first(perm)]
- return min(abs(n), abs(1 - n))
-end
-function get_error(x::ConservesNfParity, perm)
- n1 = x.data[first(perm)]
- n2 = x.data[last(perm)]
- return min(abs(n1), abs(n2))
-end
-
-function isolate_subblock_eig(
- _Λ::AbstractSymmetry,
- startind::Int;
- eigval_cutoff::Float64=1e-8,
- minblocksize::Int=2,
- maxblocksize::Int=div(size(_Λ.data, 1), 1),
-)
- blocksize = 0
- err = 0.0
- p = Int[]
- ElT = eltype(_Λ.data)
- nB = eltype(_Λ.data)[]
- uB = 0.0
- ΛB = 0.0
- i = startind
- Λ = _Λ.data
- N = size(Λ, 1)
- for blocksize in minblocksize:maxblocksize
- j = min(site_stride(_Λ) * i + site_stride(_Λ) * blocksize, N)
- ΛB = @view Λ[
- (site_stride(_Λ) * i + 1 - site_stride(_Λ)):j,
- (site_stride(_Λ) * i + 1 - site_stride(_Λ)):j,
- ]
-
- if typeof(_Λ) <: ConservesNf
- nB, uB = eigen(Hermitian(ΛB))
- elseif typeof(_Λ) <: ConservesNfParity
- m = similar(ΛB)
- m .= ΛB
- _ΛB = maybe_drop_pairing_correlations(m)
- if typeof(_ΛB) <: ConservesNf
- nB, uB = eigen(Hermitian(_ΛB.data))
- #promote basis uB to non-conserving frame
- N2 = size(nB, 1) * 2
- nuB = zeros(eltype(uB), N2, N2)
- nuB[2:2:N2, 1:2:N2] .= uB
- nuB[1:2:N2, 2:2:N2] .= conj(uB)
- uB = nuB
- nB = interleave(1 .- nB, nB)
- elseif typeof(_ΛB) <: ConservesNfParity
- nB, uB = ITensorGaussianMPS.diag_corr_gaussian(Hermitian(ΛB))
- #try to rotate to real
- uB = ITensorGaussianMPS.make_real_if_possible(uB, nB .- 0.5)
- if ElT <: Real
- if norm(imag.(uB)) <= sqrt(eps(real(ElT)))
- uB = real(real.(uB))
- else
- error(
- "Not able to construct real fermionic basis for input correlation matrix. Exiting, retry with complex input type.",
- )
- end
- end
- end
- end
- nB = set_data(_Λ, abs.(nB))
- p = sortperm(nB)
- err = get_error(nB, p)
- err ≤ eigval_cutoff && break
- end
- v = set_data(_Λ, @view uB[:, p[1]])
- return v, nB, err
-end
-
-function set_occupations!(_ns::ConservesNf, _nB::ConservesNf, _v::ConservesNf, i::Int)
- p = Int[]
- ns = _ns.data
- nB = _nB.data
- p = sortperm(nB; by=entropy)
- ns[i] = nB[p[1]]
- return nothing
-end
-
-function set_occupations!(
- _ns::ConservesNfParity, _nB::ConservesNfParity, _v::ConservesNfParity, i::Int
-)
- p = Int[]
- ns = _ns.data
- nB = _nB.data
- v = _v.data
-
- p = sortperm(nB)
- n1 = nB[first(p)]
- n2 = nB[last(p)]
- ns[2 * i] = n1
- ns[2 * i - 1] = n2
- if length(v) == 2
- # For some reason the last occupations are reversed, so take care of this conditionally here.
- # ToDo: Fix this in givens_rotations instead.
- if abs(v[1]) >= abs(v[2])
- ns[2 * i] = n2
- ns[2 * i - 1] = n1
- end
- end
- return nothing
-end
-
-stop_gmps_sweep(v::ConservesNfParity) = length(v.data) == 2 ? true : false
-stop_gmps_sweep(v::ConservesNf) = false
-
-"""
-    correlation_matrix_to_gmps(Λ::AbstractMatrix{ElT}; eigval_cutoff::Float64 = 1e-8, maxblocksize::Int = size(Λ, 1))
-
-Diagonalize a correlation matrix, returning the eigenvalues and eigenvectors
-stored in a structure as a set of Givens rotations.
-
-The correlation matrix should be Hermitian, and will be treated as Hermitian
-in the algorithm.
-
-If the correlation matrix contains pairing correlations, it is assumed to be in
-interlaced format:
-Λ[2*i-1:2*i, 2*j-1:2*j] = [[c_i c_j^dagger, c_i c_j], [c_i^dagger c_j^dagger, c_i^dagger c_j]]
-Note that this may not be the standard choice in the literature, but it is internally
-consistent with the format of single-particle Hamiltonians and Slater determinants employed.
-
-Defaults to `ConservesNf` if no further arguments are given, for backward compatibility.
-"""
-function correlation_matrix_to_gmps(
- Λ0::AbstractMatrix;
- eigval_cutoff::Float64=1e-8,
- minblocksize::Int=1,
- maxblocksize::Int=size(Λ0, 1),
-)
- return correlation_matrix_to_gmps(
- ConservesNf(Λ0);
- eigval_cutoff=eigval_cutoff,
- minblocksize=minblocksize,
- maxblocksize=maxblocksize,
- )
-end
-
-function correlation_matrix_to_gmps(
- Λ0::AbstractMatrix,
- Nsites::Int;
- eigval_cutoff::Float64=1e-8,
- minblocksize::Int=1,
- maxblocksize::Int=size(Λ0, 1),
-)
- return correlation_matrix_to_gmps(
- symmetric_correlation_matrix(Λ0, Nsites);
- eigval_cutoff=eigval_cutoff,
- minblocksize=minblocksize,
- maxblocksize=maxblocksize,
- )
-end
-
-function correlation_matrix_to_gmps(
- Λ0::T;
- eigval_cutoff::Float64=1e-8,
- minblocksize::Int=1,
- maxblocksize::Int=size(Λ0.data, 1),
-) where {T<:AbstractSymmetry}
- ElT = eltype(Λ0.data)
- Λ = T(Hermitian(copy((Λ0.data))))
- V = Circuit{ElT}([])
- err_tot = 0.0 ### FIXME: keep track of error below
- N = size(Λ.data, 1)
- #ns = set_data(Λ, Vector{real(ElT)}(undef, N))
- for i in 1:div(N, site_stride(Λ))
- err = 0.0
- v, _, err = isolate_subblock_eig(
- Λ,
- i;
- eigval_cutoff=eigval_cutoff,
- minblocksize=minblocksize,
- maxblocksize=maxblocksize,
- )
- if stop_gmps_sweep(v)
- break
- end
- g, _ = givens_rotations(v)
- replace_indices!(j -> j + site_stride(Λ) * (i - 1), g)
-
- # In-place version of:
- # V = g * V
- LinearAlgebra.lmul!(g, V)
- Λ = set_data(Λ, Hermitian(g * Matrix(Λ.data) * g'))
- end
- ###return non-wrapped occupations for backwards compatibility
- ns = diag(Λ.data)
- @assert norm(imag.(ns)) <= sqrt(eps(real(ElT)))
-
- return real(real.(ns)), V
-end
-
-function (x::AbstractSymmetry * y::AbstractSymmetry)
- if !has_same_symmetry(x, y)
- error("Can't multiply two symmetric objects with different symmetries.")
- end
- return set_data(x, x.data * y.data)
-end
-
-has_same_symmetry(::AbstractSymmetry, ::AbstractSymmetry) = false
-has_same_symmetry(::ConservesNf, ::ConservesNf) = true
-has_same_symmetry(::ConservesNfParity, ::ConservesNfParity) = true
-
-function slater_determinant_to_gmps(Φ::AbstractMatrix, N::Int; kwargs...)
- return correlation_matrix_to_gmps(conj(Φ) * transpose(Φ), N; kwargs...)
-end
-
-function slater_determinant_to_gmps(Φ::AbstractMatrix; kwargs...)
- return correlation_matrix_to_gmps(ConservesNf(conj(Φ) * transpose(Φ)); kwargs...)
-end
-
-function slater_determinant_to_gmps(Φ::AbstractSymmetry; kwargs...)
- return correlation_matrix_to_gmps(conj(Φ) * transpose(Φ); kwargs...)
-end
-
-#
-# Turn circuit into MPS
-#
-
-function ITensors.ITensor(u::Givens, s1::Index, s2::Index)
- U = [
- 1 0 0 0
- 0 u.c u.s 0
- 0 -conj(u.s) u.c 0
- 0 0 0 1
- ]
- return itensor(U, s2', s1', dag(s2), dag(s1))
-end
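The 4×4 matrix above is the two-site gate in the occupation basis: the middle block mixes the two singly occupied states, while the empty and doubly occupied states are left alone. A NumPy check that this gate is unitary for any real `(c, s)` on the unit circle (illustrative only):

```python
import numpy as np

c, s = np.cos(0.3), np.sin(0.3)       # a sample real Givens rotation (c, s)
U = np.array([
    [1, 0,  0, 0],
    [0, c,  s, 0],
    [0, -s, c, 0],
    [0, 0,  0, 1],
])
assert np.allclose(U @ U.conj().T, np.eye(4))   # the two-site gate is unitary
```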
-
-function ITensors.ITensor(b::Boguliobov, s1::Index, s2::Index)
- U = [
- b.u.c 0 0 conj(b.u.s)
- 0 1 0 0
- 0 0 1 0
- -(b.u.s) 0 0 b.u.c
- ]
- return itensor(U, s2', s1', dag(s2), dag(s1))
-end
-
-function ITensors.ITensor(sites::Vector{<:Index}, u::ConservesNfParity{Givens{T}}) where {T}
- s1 = sites[div(u.data.i1 + 1, 2)]
- s2 = sites[div(u.data.i2 + 1, 2)]
- if abs(u.data.i2 - u.data.i1) % 2 == 1
- return ITensor(Boguliobov(u.data), s1, s2)
- else
- return ITensor(u.data, s1, s2)
- end
-end
-
-function ITensors.ITensor(sites::Vector{<:Index}, u::ConservesNf{Givens{T}}) where {T}
- return ITensor(sites, u.data)
-end
-
-function ITensors.ITensor(sites::Vector{<:Index}, u::Givens)
- s1 = sites[u.i1]
- s2 = sites[u.i2]
- return ITensor(u, s1, s2)
-end
-
-function itensors(s::Vector{<:Index}, C::ConservesNfParity)
- U = [ITensor(s, set_data(C, g)) for g in reverse(C.data.rotations[begin:2:end])]
- return U
-end
-
-function itensors(sites::Vector{<:Index}, C::ConservesNf)
- return itensors(sites, C.data)
-end
-
-function itensors(s::Vector{<:Index}, C::Circuit)
- U = [ITensor(s, g) for g in reverse(C.rotations)]
- return U
-end
-
-"""
- MPS(sites::Vector{<:Index}, state, U::Vector{<:ITensor}; kwargs...)
-
-Return an MPS with site indices `sites` by applying the circuit `U` to the starting state `state`.
-"""
-function ITensors.MPS(sites::Vector{<:Index}, state, U::Vector{<:ITensor}; kwargs...)
- return apply(U, productMPS(sites, state); kwargs...)
-end
-
-function isspinful(s::Index)
- !hasqns(s) && return false
- return all(qnblock -> ITensors.hasname(qn(qnblock), ITensors.QNVal("Sz", 0)), space(s))
-end
-
-function isspinful(s::Vector{<:Index})
- return all(isspinful, s)
-end
-
-# Checks whether correlation matrix is of a number conserving system and returns AbstractSymmetry wrapper around correlation matrix
-# ToDo: Behaviour assumes (spinless) "Fermion" sites, handle "Electron" sites separately for cases where correlation matrix does not factorize.
-function symmetric_correlation_matrix(Λ::AbstractMatrix, s::Vector{<:Index})
- if length(s) == size(Λ, 1)
- return ConservesNf(Λ)
- elseif 2 * length(s) == size(Λ, 1)
- return ConservesNfParity(Λ)
- else
-    return error("Correlation matrix dimension must equal the number of sites or twice the number of sites")
- end
-end
-
-function symmetric_correlation_matrix(Λ::AbstractMatrix, Nsites::Int)
- if Nsites == size(Λ, 1)
- return ConservesNf(Λ)
- elseif 2 * Nsites == size(Λ, 1)
- return ConservesNfParity(Λ)
- else
-    return error("Correlation matrix dimension must equal the number of sites or twice the number of sites")
- end
-end
-
-function correlation_matrix_to_mps(
- s::Vector{<:Index},
- Λ::AbstractMatrix;
- eigval_cutoff::Float64=1e-8,
- maxblocksize::Int=size(Λ, 1),
- minblocksize::Int=1,
- kwargs...,
-)
- return correlation_matrix_to_mps(
- s,
- symmetric_correlation_matrix(Λ, s);
- eigval_cutoff=eigval_cutoff,
- maxblocksize=maxblocksize,
- minblocksize=minblocksize,
- kwargs...,
- )
-end
-
-"""
- correlation_matrix_to_mps(s::Vector{<:Index}, Λ::AbstractMatrix{ElT};
- eigval_cutoff::Float64 = 1e-8,
- maxblocksize::Int = size(Λ, 1),
- kwargs...)
-
-Return an approximation to the state represented by the correlation matrix as
-a matrix product state (MPS).
-
-The correlation matrix should correspond to a pure state (have all eigenvalues
-of zero or one).
-"""
-function correlation_matrix_to_mps(
- s::Vector{<:Index},
- Λ0::AbstractSymmetry;
- eigval_cutoff::Float64=1e-8,
- maxblocksize::Int=size(Λ0.data, 1),
- minblocksize::Int=1,
- kwargs...,
-)
- MPS_Elt = eltype(Λ0.data)
- Λ = maybe_drop_pairing_correlations(Λ0)
- @assert size(Λ.data, 1) == size(Λ.data, 2)
- ns, C = correlation_matrix_to_gmps(
- Λ; eigval_cutoff=eigval_cutoff, minblocksize=minblocksize, maxblocksize=maxblocksize
- )
- if all(hastags("Fermion"), s)
- U = itensors(s, set_data(Λ, C))
- ψ = MPS(MPS_Elt, s, n -> round(Int, ns[site_stride(Λ) * n]) + 1)
- ψ = apply(U, ψ; kwargs...)
- elseif all(hastags("Electron"), s)
- # ToDo: This is not tested properly, Electron sitetype tests currently assume interface with two AbstractSymmetry (correlation matrix) arguments
- # FIXME: isodd is not correct here, there shouldn't be any restrictions on the number of electronic sites.
- isodd(length(s)) && error(
- "For Electron type, must have even number of sites of alternating up and down spins.",
- )
- N = length(s)
- if isspinful(s)
- # FIXME: Can we lift this restriction now, at least for ConservesNf?
- error(
- "correlation_matrix_to_mps(Λ::AbstractMatrix) currently only supports spinless Fermions or Electrons that do not conserve Sz. Use correlation_matrix_to_mps(Λ_up::AbstractMatrix, Λ_dn::AbstractMatrix) to use spinful Fermions/Electrons.",
- )
- elseif typeof(Λ) <: ConservesNf
- sf = siteinds("Fermion", 2 * N; conserve_qns=true)
- elseif typeof(Λ) <: ConservesNfParity
- # FIXME: Does this also break, even if it doesn't make use of identity blocks? To be safe, issue error.
- error(
- "ConservesNfParity and Electron site type currently not supported. Please use Fermion sites instead.",
- )
- sf = siteinds("Fermion", 2 * N; conserve_qns=false, conserve_nfparity=true)
- end
- U = itensors(sf, set_data(Λ, C))
-    ψf = MPS(MPS_Elt, sf, n -> round(Int, ns[site_stride(Λ) * n]) + 1)
-    ψf = apply(U, ψf; kwargs...)
- ψ = MPS(N)
- for n in 1:N
- i, j = 2 * n - 1, 2 * n
- C = combiner(sf[i], sf[j])
- c = combinedind(C)
- ψ[n] = ψf[i] * ψf[j] * C
- ψ[n] *= δ(dag(c), s[n]) ###This back conversion to Electron will likely not work reliably for ConservesNfParity
- end
- else
- error("All sites must be Fermion or Electron type.")
- end
- return ψ
-end
-
-"""
- slater_determinant_to_mps(s::Vector{<:Index}, Φ::AbstractMatrix; kwargs...)
-
-Given indices and matrix of orbitals representing a Slater determinant,
-compute a matrix product state (MPS) approximately having the same correlation
-matrices as this Slater determinant.
-
-Optional keyword arguments:
-* `eigval_cutoff::Float64=1E-8` - cutoff used to adaptively determine the block size (eigenvalues must be closer to 1 or 0 by an amount smaller than this cutoff for their eigenvectors be labeled as "inactive" orbitals)
-* `maxblocksize::Int` - maximum block size used to compute inactive orbitals. Setting this to a smaller value can lead to faster running times and a smaller MPS bond dimension, though the accuracy may be lower.
-"""
-function slater_determinant_to_mps(s::Vector{<:Index}, Φ::AbstractMatrix; kwargs...)
- return correlation_matrix_to_mps(s, conj(Φ) * transpose(Φ); kwargs...)
-end
-
-function slater_determinant_to_mps(s::Vector{<:Index}, Φ::AbstractSymmetry; kwargs...)
- return correlation_matrix_to_mps(s, conj(Φ) * transpose(Φ); kwargs...)
-end
-
-function slater_determinant_to_mps(
- s::Vector{<:Index}, Φ_up::AbstractMatrix, Φ_dn::AbstractMatrix; kwargs...
-)
- return correlation_matrix_to_mps(
- s, conj(Φ_up) * transpose(Φ_up), conj(Φ_dn) * transpose(Φ_dn); kwargs...
- )
-end
-
-function mapindex(f::Function, C::Circuit)
- return Circuit(mapindex.(f, C.rotations))
-end
-
-function mapindex(f::Function, g::Givens)
- return Givens(f(g.i1), f(g.i2), g.c, g.s)
-end
-
-function identity_blocks!(T::Tensor)
- # FIXME: This is not generic logic. Only works reliably for QN subspace sizes = 1.
- for b in nzblocks(T)
- T[b] = Matrix{Float64}(I, dims(T[b]))
- end
- return T
-end
-
-# Creates an ITensor with the specified flux where each nonzero block
-# is identity
-# TODO: make a special constructor for this.
-# TODO: Introduce a modified combiner which keeps track of state-ordering/spaces.
-function identity_blocks_itensor(flux::QN, i1::Index, i2::Index)
- A = ITensor(flux, i1, i2)
- identity_blocks!(tensor(A))
- return A
-end
-
-function identity_blocks_itensor(i1::ITensors.QNIndex, i2::ITensors.QNIndex)
- return identity_blocks_itensor(QN(), i1, i2)
-end
-
-function identity_blocks_itensor(i1::Index, i2::Index)
- M = Matrix{Float64}(I, dim(i1), dim(i2))
- return itensor(M, i1, i2)
-end
-
-convert_union_nothing(v::Vector{T}) where {T} = convert(Vector{Union{T,Nothing}}, v)
-
-function interleave(xs...)
- nexts = convert_union_nothing(collect(Base.iterate.(xs)))
- res = Union{eltype.(xs)...}[]
- while any(!isnothing, nexts)
- for ii in eachindex(nexts)
- if !isnothing(nexts[ii])
- (item, state) = nexts[ii]
- push!(res, item)
- nexts[ii] = iterate(xs[ii], state)
- end
- end
- end
- return res
-end
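`interleave` round-robins its arguments, and `reverse_interleave` below undoes the resulting permutation via `sortperm`. A NumPy sketch of this index bookkeeping for the two-block matrix case (illustrative only):

```python
import numpy as np

def interleave(a, b):
    # Round-robin merge of two equal-length sequences: [a1, b1, a2, b2, ...]
    return [x for pair in zip(a, b) for x in pair]

n = 2
idx = interleave(range(n), range(n, 2 * n))      # blocked -> interlaced index map
M = np.arange(16).reshape(4, 4)
M_i = M[np.ix_(idx, idx)]                        # like interleave(M)

inv = np.argsort(idx)                            # sortperm inverts the permutation
assert np.array_equal(M_i[np.ix_(inv, inv)], M)  # reverse_interleave(interleave(M)) == M
```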
-
-function interleave(a::ConservesNf{T}, b::ConservesNf{T}) where {T}
- return set_data(a, interleave(a.data, b.data))
-end
-function interleave(a::ConservesNfParity{T}, b::ConservesNfParity{T}) where {T}
- return set_data(
- a,
- interleave(
- interleave(a.data[1:2:end], b.data[1:2:end]),
- interleave(a.data[2:2:end], b.data[2:2:end]),
- ),
- )
-end
-
-function interleave(M::AbstractMatrix)
- @assert size(M, 1) == size(M, 2)
- n = div(size(M, 1), 2)
- first_half = Vector(1:n)
- second_half = Vector((n + 1):(2 * n))
- interleaved_inds = interleave(first_half, second_half)
- return M[interleaved_inds, interleaved_inds]
-end
-
-function interleave(g1::Circuit, g2::Circuit)
- return Circuit(interleave(g1.rotations, g2.rotations))
-end
-
-function reverse_interleave(M::AbstractMatrix)
- @assert size(M, 1) == size(M, 2)
- n = div(size(M, 1), 2)
- first_half = Vector(1:n)
- second_half = Vector((n + 1):(2 * n))
- interleaved_inds = interleave(first_half, second_half)
- ordered_inds = sortperm(interleaved_inds)
- return M[ordered_inds, ordered_inds]
-end
-
-function correlation_matrix_to_mps(
- s::Vector{<:Index},
- Λ_up0::AbstractSymmetry,
- Λ_dn0::AbstractSymmetry;
- eigval_cutoff::Float64=1e-8,
- maxblocksize::Int=min(size(Λ_up0, 1), size(Λ_dn0, 1)),
- minblocksize::Int=1,
- kwargs...,
-)
- MPS_Elt = promote_type(eltype(Λ_up0.data), eltype(Λ_dn0.data))
- Λ_up = maybe_drop_pairing_correlations(Λ_up0)
- Λ_dn = maybe_drop_pairing_correlations(Λ_dn0)
- @assert size(Λ_up.data, 1) == size(Λ_up.data, 2)
- @assert size(Λ_dn.data, 1) == size(Λ_dn.data, 2)
-
- if !(
- (typeof(Λ_up) <: ConservesNfParity && typeof(Λ_dn) <: ConservesNfParity) ||
- (typeof(Λ_up) <: ConservesNf && typeof(Λ_dn) <: ConservesNf)
- )
- error("Λ_up and Λ_dn have incompatible subtypes of AbstractSymmetry")
- end
-
- N_up = div(size(Λ_up.data, 1), site_stride(Λ_up))
- N_dn = div(size(Λ_dn.data, 1), site_stride(Λ_up))
- N = N_up + N_dn
- ns_up, C_up = correlation_matrix_to_gmps(
- Λ_up; eigval_cutoff=eigval_cutoff, maxblocksize=maxblocksize
- )
- ns_dn, C_dn = correlation_matrix_to_gmps(
- Λ_dn; eigval_cutoff=eigval_cutoff, maxblocksize=maxblocksize
- )
- C_up = mapindex(n -> 2n - 1, C_up)
- C_dn = mapindex(n -> 2n, C_dn)
- C_up_rot = set_data(Λ_up, C_up.rotations)
- C_dn_rot = set_data(Λ_dn, C_dn.rotations)
- ns_up = set_data(Λ_up, ns_up)
- ns_dn = set_data(Λ_dn, ns_dn)
- C = Circuit(interleave(C_up_rot, C_dn_rot).data)
- ns = interleave(ns_up, ns_dn).data
- if all(hastags("Fermion"), s)
- U = itensors(s, set_data(Λ_up, C))
- ψ = MPS(MPS_Elt, s, n -> round(Int, ns[site_stride(Λ_up) * n]) + 1)
- ψ = apply(U, ψ; kwargs...)
- elseif all(hastags("Electron"), s)
- @assert length(s) == N_up
- @assert length(s) == N_dn
- if isspinful(s)
- if typeof(Λ_up) <: ConservesNf
- space_up = [QN(("Nf", 0, -1), ("Sz", 0)) => 1, QN(("Nf", 1, -1), ("Sz", 1)) => 1]
- space_dn = [QN(("Nf", 0, -1), ("Sz", 0)) => 1, QN(("Nf", 1, -1), ("Sz", -1)) => 1]
- elseif typeof(Λ_up) <: ConservesNfParity
- error(
- "ConservesNfParity and Electron site type currently not supported. Please use Fermion sites instead.",
- )
- # FIXME: issue with combiner-logic for subspace-size > 1 in identity_blocks_itensor, see below
- space_up = [QN(("NfParity", 0, -2),) => 1, QN(("NfParity", 1, -2),) => 1]
- space_dn = [QN(("NfParity", 0, -2),) => 1, QN(("NfParity", 1, -2),) => 1]
- end
- sf_up = [Index(space_up, "Fermion,Site,n=$(2n-1)") for n in 1:N_up]
- sf_dn = [Index(space_dn, "Fermion,Site,n=$(2n)") for n in 1:N_dn]
- sf = collect(Iterators.flatten(zip(sf_up, sf_dn)))
- else
- if typeof(Λ_up) <: ConservesNf
- sf = siteinds("Fermion", N; conserve_qns=true, conserve_sz=false)
- elseif typeof(Λ_up) <: ConservesNfParity
- error(
- "ConservesNfParity and Electron site type currently not supported. Please use Fermion sites instead.",
- )
- sf = siteinds(
- "Fermion", N; conserve_qns=false, conserve_sz=false, conserve_nfparity=true
- )
- end
- end
- U = itensors(sf, set_data(Λ_up, C))
- ψf = MPS(MPS_Elt, sf, n -> round(Int, ns[site_stride(Λ_up) * n]) + 1)
- ψf = apply(U, ψf; kwargs...)
- ψ = MPS(N_up)
- for n in 1:N_up
- i, j = 2 * n - 1, 2 * n
- C = combiner(sf[i], sf[j])
- c = combinedind(C)
- ψ[n] = ψf[i] * ψf[j] * C
- # FIXME: combiner loses track of state ordering for QN subspaces > 1 in identity_blocks_itensor
- ψ[n] *= identity_blocks_itensor(dag(c), s[n])
- end
- else
- error("All sites must be Fermion or Electron type.")
- end
-
- return ψ
-end
-
-function correlation_matrix_to_mps(
- s::Vector{<:Index},
- Λ_up::AbstractMatrix,
- Λ_dn::AbstractMatrix;
- eigval_cutoff::Float64=1e-8,
- maxblocksize::Int=min(size(Λ_up, 1), size(Λ_dn, 1)),
- minblocksize::Int=1,
- kwargs...,
-)
- if all(hastags("Electron"), s)
- return correlation_matrix_to_mps(
- s,
- symmetric_correlation_matrix(Λ_up, s),
- symmetric_correlation_matrix(Λ_dn, s);
- eigval_cutoff=eigval_cutoff,
- maxblocksize=maxblocksize,
- minblocksize=minblocksize,
- kwargs...,
- )
- elseif all(hastags("Fermion"), s)
- # equivalent number of electrons
- n_electrons = div(length(s), 2)
- return correlation_matrix_to_mps(
- s,
- symmetric_correlation_matrix(Λ_up, n_electrons),
- symmetric_correlation_matrix(Λ_dn, n_electrons);
- eigval_cutoff=eigval_cutoff,
- maxblocksize=maxblocksize,
- minblocksize=minblocksize,
- kwargs...,
- )
- end
-end
diff --git a/ITensorGaussianMPS/src/linalg.jl b/ITensorGaussianMPS/src/linalg.jl
deleted file mode 100644
index ec9a054987..0000000000
--- a/ITensorGaussianMPS/src/linalg.jl
+++ /dev/null
@@ -1,201 +0,0 @@
-"""
-Some of the functionality in this script is closely related to routines in the following package
-https://github.com/Jacupo/F_utilities
-and the associated publication
-10.21468/SciPostPhysLectNotes.54
-"""
-
-"""Takes a single-particle Hamiltonian in blocked Dirac format and finds the fermionic transformation U that diagonalizes it"""
-function _eigen_gaussian_blocked(H; noise_scale=nothing)
- #make sure H is Hermitian
- @assert ishermitian(H)
- H = Hermitian(H)
- ElT = eltype(H)
- #convert from Dirac to Majorana picture
- N = size(H, 1)
- Ω = build_Ω(ElT, N)
- h = real(-im .* (Ω * H * Ω'))
- h = (h - h') ./ 2
- if !isnothing(noise_scale)
- noise = rand(size(h)...) * noise_scale
- noise = (noise - noise') ./ 2
- h = h + noise
- end
- # Schur diagonalize including reordering
- _, O, vals = order_schur(schur(h))
- # convert back to Dirac Frame
- Fxpxx = build_Fxpxx(N)
- U = Ω' * O * (Fxpxx') * Ω
- d = vcat(-vals, vals)
- if ElT <: Real
- U = make_real_if_possible(U, d)
- # make another pass with rotation in the complex plane per eigenvector
- U .*= exp.(-im * angle.(U[1:1, :]))
- @assert norm(imag.(U)) < sqrt(eps(real(ElT)))
- U = real(real.(U))
- end
- return d, U
-end
-
-"""Takes a single-particle Hamiltonian in interlaced Dirac format and finds the complex fermionic transformation U that diagonalizes it."""
-function eigen_gaussian(H; noise_scale=nothing)
- d, U = _eigen_gaussian_blocked(
- ITensorGaussianMPS.reverse_interleave(complex(H)); noise_scale=noise_scale
- )
- nU = similar(U)
- n = div(size(H, 1), 2)
- nU[1:2:end, :] = U[1:n, :]
- nU[2:2:end, :] = U[(n + 1):end, :]
- return d, nU
-end
-
-"""Takes a single-particle Hamiltonian in interlaced Dirac format and outputs the ground-state correlation matrix (with the input Hamiltonian's element type)."""
-function get_gaussian_GS_corr(H::AbstractMatrix; noise_scale=nothing)
- ElT = eltype(H)
- d, U = eigen_gaussian(H; noise_scale=noise_scale)
- n = div(size(H, 1), 2)
- c = conj(U[:, 1:n]) * transpose(U[:, 1:n])
- if ElT <: Real && norm(imag.(c)) <= sqrt(eps(real(ElT)))
- c = real(real.(c))
- end
- return c
-end
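In the number-conserving limit the construction in `get_gaussian_GS_corr` reduces to filling the lowest `Nf` orbitals `Φ` and forming `C = conj(Φ) * transpose(Φ)`. A hedged NumPy sketch (illustrative values, not the package API) checking the two defining properties of such a ground-state correlation matrix:

```python
import numpy as np

N, Nf, t = 6, 3, 1.0
h = -t * (np.eye(N, k=1) + np.eye(N, k=-1))  # open-chain hopping Hamiltonian
e, u = np.linalg.eigh(h)
phi = u[:, :Nf]                              # lowest-Nf orbitals (Slater determinant)
C = phi.conj() @ phi.T                       # ground-state correlation matrix
assert np.allclose(C @ C, C)                 # projector onto the occupied modes
assert np.isclose(np.trace(C).real, Nf)      # trace gives the particle number
```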
-
-"""Takes a single-particle correlation matrix in interlaced Dirac format and finds the fermionic transformation U that diagonalizes it"""
-function diag_corr_gaussian(Λ::Hermitian; noise_scale=nothing)
- #shift correlation matrix by half so spectrum is symmetric around 0
- populations, U = eigen_gaussian(Λ - 0.5 * I; noise_scale=noise_scale)
- n = diag(U' * Λ * U)
- if !all(abs.(populations - (n - 0.5 * ones(size(n)))) .< sqrt(eps(real(eltype(Λ)))))
- @show n
- @show populations .+ 0.5
- @error(
- "The natural orbital populations are not consistent, see above. Try adding symmetric noise to the input matrix."
- )
- end
- return populations .+ 0.5, U
-end
-
-"""Takes a single-particle correlation matrix in interlaced Dirac format and finds the fermionic transformation U that diagonalizes it"""
-function diag_corr_gaussian(Γ::AbstractMatrix; noise_scale=nothing)
- # enforce hermiticity
- Γ = (Γ + Γ') / 2.0
- return diag_corr_gaussian(Hermitian(Γ); noise_scale=noise_scale)
-end
-
-"""Schur decomposition of skew-hermitian matrix"""
-function order_schur(F::LinearAlgebra.Schur)
- T = F.Schur
- O = F.vectors #column vectors are Schur vectors
-
- N = size(T, 1)
- n = div(N, 2)
- shuffled_inds = Vector{Int}[]
- ElT = eltype(T)
- vals = ElT[]
- # build a permutation matrix that takes care of the ordering
- for i in 1:n
- ind = 2 * i - 1
- val = T[ind, ind + 1]
- if real(val) >= 0
- push!(shuffled_inds, [ind, ind + 1])
- else
- push!(shuffled_inds, [ind + 1, ind])
- end
- push!(vals, abs(val))
- end
- # build block local rotation first
- perm = sortperm(real.(vals); rev=true) # sort so the upper-left block holds the largest-magnitude eigenvalue pair
- vals = vals[perm]
- shuffled_inds = reduce(vcat, shuffled_inds[perm])
- # then permute blocks for overall ordering
- T = T[shuffled_inds, shuffled_inds]
- O = O[:, shuffled_inds]
- return T, O, vals #vals are only positive, and of length n and not N
-end
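`order_schur` relies on the real Schur form of a skew-symmetric matrix consisting of 2×2 blocks `[[0, b], [-b, 0]]`, so the spectrum is purely imaginary and comes in ±ib pairs — which is what the reordering loop above exploits. A quick NumPy check of that spectral structure (sketch only):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 6))
h = (A - A.T) / 2                            # real skew-symmetric, like the Majorana-frame h
w = np.linalg.eigvals(h)
assert np.allclose(w.real, 0.0, atol=1e-10)  # spectrum is purely imaginary
b = np.sort(np.abs(w.imag))
assert np.allclose(b[::2], b[1::2])          # eigenvalues come in +/- ib pairs
```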
-
-"""Checks if we can make degenerate subspaces of a U0 real by multiplying columns or rows with a phase"""
-function make_real_if_possible(U0::AbstractMatrix, spectrum::Vector; sigdigits=12)
- # only apply to first half of spectrum due to symmetry around 0
- # assumes spectrum symmetric around zero and ordered as vcat(-E,E) where E is ordered in descending magnitude
- U = copy(U0)
- n = div(length(spectrum), 2)
- # Round spectrum for comparison within finite floating point precision.
- # Not the cleanest way to compare floating point numbers for approximate equality but should be sufficient here.
- rounded_halfspectrum = round.(spectrum[1:n], sigdigits=sigdigits)
- approx_unique_eigvals = unique(rounded_halfspectrum)
- # loop over degenerate subspaces
- for e in approx_unique_eigvals
- mask = rounded_halfspectrum .== e
- if abs(e) < eps(real(eltype(U0)))
- # handle values close to zero separately
- # rotate subspace for both positive and negative eigenvalue if they are close enough to zero
- mask = vcat(mask, mask)
- subspace = U[:, mask]
- subspace = make_subspace_real_if_possible(subspace)
- U[:, mask] = subspace
-
- else
- mask = rounded_halfspectrum .== e
- # rotate subspace for the negative eigenvalue
- subspace = U[:, 1:n][:, mask]
- subspace = make_subspace_real_if_possible(subspace)
- v = @views U[:, 1:n][:, mask]
- v .= subspace
- # rotate subspace for the positive eigenvalue
- subspace = U[:, (n + 1):end][:, mask]
- subspace = make_subspace_real_if_possible(subspace)
- v = @views U[:, (n + 1):end][:, mask]
- v .= subspace
- end
- end
- return U
-end
-
-"""Checks if we can make a degenerate subspace of the eigenbasis of an operator real by multiplying columns or rows with a phase"""
-function make_subspace_real_if_possible(U::AbstractMatrix; atol=sqrt(eps(real(eltype(U)))))
- if eltype(U) <: Real
- return U
- end
- if size(U, 2) == 1
- nU = U .* exp(-im * angle(U[1, 1]))
- if norm(imag.(nU)) <= atol
- return nU
- else
- return U
- end
- else
- n = size(U, 2)
- gram = U * U'
- if norm(imag.(gram)) <= atol
- D, V = eigen(Hermitian(real.(gram)))
- proj = V[:, (size(U, 1) - n + 1):end] * (V[:, (size(U, 1) - n + 1):end])'
- @assert norm(proj * U - U) < atol
- return complex(V[:, (size(U, 1) - n + 1):end])
- else
- return U
- end
- end
-end
-
-# transformation matrices (in principle sparse) between and within the Majorana and Dirac pictures
-
-function build_Ω(T, N::Int)
- n = div(N, 2)
- nElT = T <: Real ? Complex{T} : T
- Ω = zeros(nElT, N, N)
- Ω[1:n, 1:n] .= diagm(ones(nElT, n) ./ sqrt(2))
- Ω[1:n, (n + 1):N] .= diagm(ones(nElT, n) ./ sqrt(2))
- Ω[(n + 1):N, 1:n] .= diagm(ones(nElT, n) * (im / sqrt(2)))
- Ω[(n + 1):N, (n + 1):N] .= diagm(ones(nElT, n) * (-im / sqrt(2)))
- return Ω
-end
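`build_Ω` assembles the standard Dirac-to-Majorana basis change as four diagonal blocks. A NumPy sketch of the assumed block layout, verifying that the resulting matrix is unitary (helper name is illustrative):

```python
import numpy as np

def build_omega(N):
    # Dirac-to-Majorana basis change; block layout assumed from build_Ω above
    n = N // 2
    I = np.eye(n)
    return np.block([[I, I], [1j * I, -1j * I]]) / np.sqrt(2)

Om = build_omega(8)
assert np.allclose(Om @ Om.conj().T, np.eye(8))  # Ω is unitary
```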
-
-function build_Fxpxx(N::Int)
- Fxpxx = zeros(Int8, N, N)
- n = div(N, 2)
- Fxpxx[1:n, 1:2:N] .= diagm(ones(Int8, n))
- Fxpxx[(n + 1):N, 2:2:N] .= diagm(ones(Int8, n))
- return Fxpxx
-end
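`build_Fxpxx` is a pure permutation between the blocked `(x…x, p…p)` and interleaved `(x, p, x, p, …)` Majorana orderings. Sketched in NumPy (layout assumed from the code above):

```python
import numpy as np

def build_fxpxx(N):
    # blocked-to-interleaved basis permutation, mirroring build_Fxpxx above
    n = N // 2
    F = np.zeros((N, N), dtype=np.int8)
    F[:n, 0:N:2] = np.eye(n)   # first half of rows hits the odd columns
    F[n:, 1:N:2] = np.eye(n)   # second half of rows hits the even columns
    return F

F = build_fxpxx(8)
assert np.array_equal(F @ F.T, np.eye(8, dtype=np.int8))  # F is a permutation matrix
```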
diff --git a/ITensorGaussianMPS/test/Project.toml b/ITensorGaussianMPS/test/Project.toml
deleted file mode 100644
index 622e01b82b..0000000000
--- a/ITensorGaussianMPS/test/Project.toml
+++ /dev/null
@@ -1,4 +0,0 @@
-[deps]
-ITensors = "9136182c-28ba-11e9-034c-db9fb085ebd5"
-LinearAlgebra = "37e2e46d-f89d-539d-b4ee-838fcccc9c8e"
-Test = "8dfed614-e22c-5e08-85e1-65c5234f0b40"
diff --git a/ITensorGaussianMPS/test/electron.jl b/ITensorGaussianMPS/test/electron.jl
deleted file mode 100644
index 4d260d1408..0000000000
--- a/ITensorGaussianMPS/test/electron.jl
+++ /dev/null
@@ -1,279 +0,0 @@
-using ITensorGaussianMPS
-using ITensors
-using LinearAlgebra
-using Test
-
-function expect_compat(psi::MPS, ops::AbstractString...; kwargs...)
- if ITensors.version() >= v"0.2"
- return expect(psi, ops...; kwargs...)
- end
- psi = copy(psi)
- N = length(psi)
- ElT = real(promote_itensor_eltype(psi))
- Nops = length(ops)
- s = siteinds(psi)
- site_range::UnitRange{Int} = get(kwargs, :site_range, 1:N)
- Ns = length(site_range)
- start_site = first(site_range)
- offset = start_site - 1
- orthogonalize!(psi, start_site)
- psi[start_site] ./= norm(psi[start_site])
- ex = ntuple(n -> zeros(ElT, Ns), Nops)
- for j in site_range
- orthogonalize!(psi, j)
- for n in 1:Nops
- ex[n][j - offset] = real(scalar(psi[j] * op(ops[n], s[j]) * dag(prime(psi[j], s[j]))))
- end
- end
- return Nops == 1 ? ex[1] : ex
-end
-
-@testset "Electron" begin
- # Half filling
- N = 40
- Nf_up = N ÷ 2
- Nf_dn = N ÷ 2
- Nf = Nf_up + Nf_dn
-
- # Maximum MPS link dimension
- _maxlinkdim = 200
-
- # DMRG cutoff
- _cutoff = 1e-8
-
- # Hopping
- t = 1.0
-
- # Electron-electron on-site interaction
- U = 1.0
-
- # Make the free fermion Hamiltonian for the up spins
- os_up = OpSum()
- for n in 1:(N - 1)
- os_up .+= -t, "Cdagup", n, "Cup", n + 1
- os_up .+= -t, "Cdagup", n + 1, "Cup", n
- end
-
- # Make the free fermion Hamiltonian for the down spins
- os_dn = OpSum()
- for n in 1:(N - 1)
- os_dn .+= -t, "Cdagdn", n, "Cdn", n + 1
- os_dn .+= -t, "Cdagdn", n + 1, "Cdn", n
- end
-
- # Hopping Hamiltonians for the up and down spins
- h_up = hopping_hamiltonian(os_up)
- h_dn = hopping_hamiltonian(os_dn)
- h_combined = hopping_hamiltonian(os_up, os_dn)
-
- # Get the Slater determinant
- Φ_up = slater_determinant_matrix(h_up, Nf_up)
- Φ_dn = slater_determinant_matrix(h_dn, Nf_dn)
-
- # Create an MPS from the slater determinants.
- s = siteinds("Electron", N; conserve_qns=true)
- ψ0 = slater_determinant_to_mps(
- s, Φ_up, Φ_dn; eigval_cutoff=1e-4, cutoff=_cutoff, maxdim=_maxlinkdim
- )
-
- @test maxlinkdim(ψ0) ≤ _maxlinkdim
-
- # The total non-interacting part of the Hamiltonian
- os_noninteracting = OpSum()
- for n in 1:(N - 1)
- os_noninteracting .+= -t, "Cdagup", n, "Cup", n + 1
- os_noninteracting .+= -t, "Cdagdn", n, "Cdn", n + 1
- os_noninteracting .+= -t, "Cdagup", n + 1, "Cup", n
- os_noninteracting .+= -t, "Cdagdn", n + 1, "Cdn", n
- end
-
- H_noninteracting = MPO(os_noninteracting, s)
- @test tr(Φ_up' * h_up * Φ_up) + tr(Φ_dn' * h_dn * Φ_dn) ≈ inner(ψ0', H_noninteracting, ψ0) rtol =
- 1e-3
-
- # The total interacting Hamiltonian
- os_interacting = OpSum()
- for n in 1:(N - 1)
- os_interacting .+= -t, "Cdagup", n, "Cup", n + 1
- os_interacting .+= -t, "Cdagdn", n, "Cdn", n + 1
- os_interacting .+= -t, "Cdagup", n + 1, "Cup", n
- os_interacting .+= -t, "Cdagdn", n + 1, "Cdn", n
- end
- for n in 1:N
- os_interacting .+= U, "Nupdn", n
- end
- H = MPO(os_interacting, s)
-
- # Random starting state
- ψr = randomMPS(s, n -> n ≤ Nf ? (isodd(n) ? "↑" : "↓") : "0")
-
- @test flux(ψr) == QN(("Nf", Nf, -1), ("Sz", 0))
- @test flux(ψ0) == QN(("Nf", Nf, -1), ("Sz", 0))
-
- @test inner(ψ0', H, ψ0) < inner(ψr', H, ψr)
-
- sweeps = Sweeps(3)
- setmaxdim!(sweeps, 10, 20, _maxlinkdim)
- setcutoff!(sweeps, _cutoff)
- setnoise!(sweeps, 1e-5, 1e-6, 1e-7, 0.0)
- er, _ = dmrg(H, ψr, sweeps; outputlevel=0)
-
- sweeps = Sweeps(3)
- setmaxdim!(sweeps, _maxlinkdim)
- setcutoff!(sweeps, _cutoff)
- setnoise!(sweeps, 1e-5, 1e-6, 1e-7, 0.0)
- e0, _ = dmrg(H, ψ0, sweeps; outputlevel=0)
-
- @test e0 > inner(ψ0', H_noninteracting, ψ0)
- @test e0 < er
-end
-
-@testset "Regression test for bug away from half filling" begin
- N = 3
- t = 1.0
- os_up = OpSum()
- for n in 1:(N - 1)
- os_up .+= -t, "Cdagup", n, "Cup", n + 1
- os_up .+= -t, "Cdagup", n + 1, "Cup", n
- end
- os_dn = OpSum()
- for n in 1:(N - 1)
- os_dn .+= -t, "Cdagdn", n, "Cdn", n + 1
- os_dn .+= -t, "Cdagdn", n + 1, "Cdn", n
- end
- h_up = hopping_hamiltonian(os_up)
- h_dn = hopping_hamiltonian(os_dn)
- s = siteinds("Electron", N; conserve_qns=true)
- H = MPO(os_up + os_dn, s)
- Nf_up, Nf_dn = 1, 0
- Φ_up = slater_determinant_matrix(h_up, Nf_up)
- Φ_dn = slater_determinant_matrix(h_dn, Nf_dn)
- ψ = slater_determinant_to_mps(s, Φ_up, Φ_dn; eigval_cutoff=0.0, cutoff=0.0)
- @test inner(ψ', H, ψ) ≈ tr(Φ_up' * h_up * Φ_up) + tr(Φ_dn' * h_dn * Φ_dn)
- @test maxlinkdim(ψ) == 2
- @test flux(ψ) == QN(("Nf", 1, -1), ("Sz", 1))
- ns_up = expect_compat(ψ, "Nup")
- ns_dn = expect_compat(ψ, "Ndn")
- @test ns_up ≈ diag(Φ_up * Φ_up')
- @test ns_dn ≈ diag(Φ_dn * Φ_dn')
- @test sum(ns_up) ≈ Nf_up
- @test sum(ns_dn) ≈ Nf_dn
-end
-
-@testset "Electron - Pairing (currently inactive)" begin
- # Keep this testset for when the Electron-sites + pairing bug is fixed
- # But skip the tests for now.
- is_implemented = false
- if !is_implemented
- nothing
- else
- # Half filling
- N = 40
- Nf_up = N ÷ 2
- Nf_dn = N ÷ 2
- Nf = Nf_up + Nf_dn
-
- # Maximum MPS link dimension
- _maxlinkdim = 200
-
- # DMRG cutoff
- _cutoff = 1e-8
-
- # Hopping
- t = 1.0
- pairing = 1.2
- # Electron-electron on-site interaction
- U = 1.0
-
- # Make the free fermion Hamiltonian for the up spins
- os_up = OpSum()
- for n in 1:(N - 1)
- os_up .+= -t, "Cdagup", n, "Cup", n + 1
- os_up .+= -t, "Cdagup", n + 1, "Cup", n
- os_up .+= -pairing, "Cdagup", n + 1, "Cdagup", n
- os_up .+= -pairing, "Cup", n, "Cup", n + 1
-
- #os_up .+= -pairing, "Cdagup", n+1,"Cdagup", n
- end
-
- # Make the free fermion Hamiltonian for the down spins
- os_dn = OpSum()
- for n in 1:(N - 1)
- os_dn .+= -t, "Cdagdn", n, "Cdn", n + 1
- os_dn .+= -t, "Cdagdn", n + 1, "Cdn", n
- os_dn .+= -pairing, "Cdn", n, "Cdn", n + 1
- os_dn .+= -pairing, "Cdagdn", n + 1, "Cdagdn", n
- end
-
- # Hopping Hamiltonians for the up and down spins
- h_up = quadratic_hamiltonian(os_up)
- h_dn = quadratic_hamiltonian(os_dn)
-
- # Get the Slater determinant, N*2 because of pairing (should pass chemical potential as arg later)
- Φ_up = slater_determinant_matrix(h_up, Nf_up * 2)
- Φ_dn = slater_determinant_matrix(h_dn, Nf_dn * 2)
-
- # Create an MPS from the slater determinants.
- s = siteinds(
- "Electron", N; conserve_qns=false, conserve_nfparity=true, conserve_nf=false
- )
- H_ni_up = MPO(os_up, s)
-
- ψ0 = slater_determinant_to_mps(
- s, Φ_up, Φ_dn; eigval_cutoff=1e-4, cutoff=_cutoff, maxdim=_maxlinkdim
- )
- @show norm(ψ0)
- @test maxlinkdim(ψ0) ≤ _maxlinkdim
-
- # The total non-interacting part of the Hamiltonian
- os_noninteracting = OpSum()
- for n in 1:(N - 1)
- os_noninteracting .+= -t, "Cdagdn", n, "Cdn", n + 1
- os_noninteracting .+= -t, "Cdagdn", n + 1, "Cdn", n
- os_noninteracting .+= -pairing, "Cdn", n, "Cdn", n + 1
- os_noninteracting .+= -pairing, "Cdagdn", n + 1, "Cdagdn", n
- os_noninteracting .+= -t, "Cdagup", n, "Cup", n + 1
- os_noninteracting .+= -t, "Cdagup", n + 1, "Cup", n
- os_noninteracting .+= -pairing, "Cdagup", n + 1, "Cdagup", n
- os_noninteracting .+= -pairing, "Cup", n, "Cup", n + 1
- end
-
- H_noninteracting = MPO(os_noninteracting, s)
- @show tr(Φ_up' * h_up * Φ_up),
- tr(Φ_dn' * h_dn * Φ_dn), inner(ψ0', H_noninteracting, ψ0),
- inner(ψ0', H_ni_up, ψ0)
- @test tr(Φ_up' * h_up * Φ_up) + tr(Φ_dn' * h_dn * Φ_dn) ≈
- inner(ψ0', H_noninteracting, ψ0) rtol = 1e-3
-
- # The total interacting Hamiltonian
- os_interacting = copy(os_noninteracting)
- #os_interacting .+= os_noninteracting
- for n in 1:N
- os_interacting .+= U, "Nupdn", n
- end
- H = MPO(os_interacting, s)
-
- # Random starting state
- ψr = randomMPS(s, n -> n ≤ Nf ? (isodd(n) ? "↑" : "↓") : "0")
- @show flux(ψr), flux(ψ0)
- #@test flux(ψr) == QN(("Nf", Nf, -1), ("Sz", 0))
- #@test flux(ψ0) == QN(("Nf", Nf, -1), ("Sz", 0))
-
- @test inner(ψ0', H, ψ0) < inner(ψr', H, ψr)
-
- sweeps = Sweeps(3)
- setmaxdim!(sweeps, 10, 20, _maxlinkdim)
- setcutoff!(sweeps, _cutoff)
- setnoise!(sweeps, 1e-5, 1e-6, 1e-7, 0.0)
- er, _ = dmrg(H, ψr, sweeps; outputlevel=0)
-
- sweeps = Sweeps(3)
- setmaxdim!(sweeps, _maxlinkdim)
- setcutoff!(sweeps, _cutoff)
- setnoise!(sweeps, 1e-5, 1e-6, 1e-7, 0.0)
- e0, _ = dmrg(H, ψ0, sweeps; outputlevel=0)
-
- @test e0 > inner(ψ0', H_noninteracting, ψ0)
- @test e0 < er
- end
-end
diff --git a/ITensorGaussianMPS/test/gmera.jl b/ITensorGaussianMPS/test/gmera.jl
deleted file mode 100644
index 9b5dd5d1dc..0000000000
--- a/ITensorGaussianMPS/test/gmera.jl
+++ /dev/null
@@ -1,176 +0,0 @@
-using ITensorGaussianMPS
-using ITensors
-using LinearAlgebra
-using Test
-
-@testset "Basic" begin
- # Test Givens rotations
- v = randn(6)
- g, r = ITensorGaussianMPS.givens_rotations(v)
- @test g * v ≈ r * [n == 1 ? 1 : 0 for n in 1:length(v)]
-end
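The Givens-rotation test above checks that the computed rotations map `v` onto `r·e₁`. The elementary 2×2 building block behind that property can be sketched as follows (a hypothetical helper, not the package's `givens_rotations`):

```python
import numpy as np

def givens(a, b):
    # 2x2 rotation that zeroes the second component of (a, b)
    r = np.hypot(a, b)
    c, s = a / r, b / r
    return np.array([[c, s], [-s, c]]), r

v = np.array([3.0, 4.0])
G, r = givens(v[0], v[1])
out = G @ v
assert np.isclose(r, 5.0)
assert np.allclose(out, [5.0, 0.0])  # all weight rotated onto the first component
```

Chaining such rotations component by component is what reduces a full vector to `r·e₁` in the test.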
-
-@testset "Fermion" begin
- N = 10
- Nf = N ÷ 2
-
- # Hopping
- t = 1.0
-
- # Hopping Hamiltonian
- h = Hermitian(diagm(1 => fill(-t, N - 1), -1 => fill(-t, N - 1)))
- e, u = eigen(h)
-
- @test h * u ≈ u * Diagonal(e)
-
- E = sum(e[1:Nf])
-
- # Get the Slater determinant
- Φ = u[:, 1:Nf]
- @test h * Φ ≈ Φ * Diagonal(e[1:Nf])
-
- # Diagonalize the correlation matrix as a
- # Gaussian MPS (GMPS) gates
- n, gmps = slater_determinant_to_gmera(Φ; maxblocksize=10)
-
- ns = round.(Int, n)
- @test sum(ns) == Nf
-
- Λ = conj(Φ) * transpose(Φ)
- @test gmps * Λ * gmps' ≈ Diagonal(ns) rtol = 1e-2
- @test gmps' * Diagonal(ns) * gmps ≈ Λ rtol = 1e-2
-
- # Form the MPS
- s = siteinds("Fermion", N; conserve_qns=true)
- ψ = ITensorGaussianMPS.slater_determinant_to_mera(s, Φ; maxblocksize=4)
-
- os = OpSum()
- for i in 1:N, j in 1:N
- if h[i, j] ≠ 0
- os .+= h[i, j], "Cdag", i, "C", j
- end
- end
- H = MPO(os, s)
-
- @test inner(ψ', H, ψ) ≈ E rtol = 1e-5
-
- # Compare to DMRG
- sweeps = Sweeps(10)
- setmaxdim!(sweeps, 10, 20, 40, 60)
- setcutoff!(sweeps, 1E-12)
- energy, ψ̃ = dmrg(H, productMPS(s, n -> n ≤ Nf ? "1" : "0"), sweeps; outputlevel=0)
-
- # Create an mps
- @test abs(inner(ψ, ψ̃)) ≈ 1 rtol = 1e-5
- @test inner(ψ̃', H, ψ̃) ≈ inner(ψ', H, ψ) rtol = 1e-5
- @test E ≈ energy
-end
-
-@testset "Fermion (complex)" begin
- N = 10
- Nf = N ÷ 2
-
- # Hopping
- θ = π / 8
- t = exp(im * θ)
-
- # Hopping Hamiltonian
- h = Hermitian(diagm(1 => fill(-t, N - 1), -1 => fill(-conj(t), N - 1)))
- e, u = eigen(h)
-
- @test h * u ≈ u * Diagonal(e)
-
- E = sum(e[1:Nf])
-
- # Get the Slater determinant
- Φ = u[:, 1:Nf]
- @test h * Φ ≈ Φ * Diagonal(e[1:Nf])
-
- # Diagonalize the correlation matrix as a
- # Gaussian MPS (GMPS)
- n, gmps = slater_determinant_to_gmera(Φ; maxblocksize=4)
-
- ns = round.(Int, n)
- @test sum(ns) == Nf
-
- Λ = conj(Φ) * transpose(Φ)
- @test gmps * Λ * gmps' ≈ Diagonal(ns) rtol = 1e-2
- @test gmps' * Diagonal(ns) * gmps ≈ Λ rtol = 1e-2
-
- # Form the MPS
- s = siteinds("Fermion", N; conserve_qns=true)
- ψ = ITensorGaussianMPS.slater_determinant_to_mera(s, Φ; maxblocksize=4)
-
- os = OpSum()
- for i in 1:N, j in 1:N
- if h[i, j] ≠ 0
- os .+= h[i, j], "Cdag", i, "C", j
- end
- end
- H = MPO(os, s)
-
- @test inner(ψ', H, ψ) ≈ E rtol = 1e-5
- @test inner(ψ', H, ψ) / norm(ψ) ≈ E rtol = 1e-5
-
- # Compare to DMRG
- sweeps = Sweeps(10)
- setmaxdim!(sweeps, 10, 20, 40, 60)
- setcutoff!(sweeps, 1E-12)
- energy, ψ̃ = dmrg(H, productMPS(s, n -> n ≤ Nf ? "1" : "0"), sweeps; outputlevel=0)
-
- # Create an mps
- @test abs(inner(ψ, ψ̃)) ≈ 1 rtol = 1e-5
- @test inner(ψ̃', H, ψ̃) ≈ inner(ψ', H, ψ) rtol = 1e-5
- @test E ≈ energy
-end
-
-# Build 1-d SSH model
-function SSH1dModel(N::Int, t::Float64, vardelta::Float64)
- # N should be even
- s = siteinds("Fermion", N; conserve_qns=true)
- limit = div(N - 1, 2)
- t1 = -t * (1 + vardelta / 2)
- t2 = -t * (1 - vardelta / 2)
- os = OpSum()
- for n in 1:limit
- os .+= t1, "Cdag", 2 * n - 1, "C", 2 * n
- os .+= t1, "Cdag", 2 * n, "C", 2 * n - 1
- os .+= t2, "Cdag", 2 * n, "C", 2 * n + 1
- os .+= t2, "Cdag", 2 * n + 1, "C", 2 * n
- end
- if N % 2 == 0
- os .+= t1, "Cdag", N - 1, "C", N
- os .+= t1, "Cdag", N, "C", N - 1
- end
- h = hopping_hamiltonian(os)
- H = MPO(os, s)
- #display(t1)
- return (h, H, s)
-end
-
-@testset "Energy" begin
- N = 2^4
- Nf = div(N, 2)
- t = 1.0
- gapsize = 0
- vardelta = gapsize / 2
- h, H, s = SSH1dModel(N, t, vardelta)
-
- Φ = slater_determinant_matrix(h, Nf)
- E, V = eigen(h)
- E = sort(E)
- Eana = sum(E[1:Nf])
-
- Λ0 = Φ * Φ'
- @test Eana ≈ tr(h * Λ0) rtol = 1e-5
- # Diagonalize the correlation matrix as a
- # Gaussian MPS (GMPS) and GMERA
- ngmps, V1 = ITensorGaussianMPS.correlation_matrix_to_gmps(Λ0; eigval_cutoff=1e-8)
- nmera, V1 = ITensorGaussianMPS.correlation_matrix_to_gmera(Λ0; eigval_cutoff=1e-8)#,maxblocksize=6)
- @test sum(round.(Int, nmera)) == sum(round.(Int, ngmps))
-
- U = ITensorGaussianMPS.UmatFromGates(V1, N)
- Etest = ITensorGaussianMPS.EfromGates(h, U)
-
- @test Eana ≈ Etest rtol = 1e-5
-end
diff --git a/ITensorGaussianMPS/test/gmps.jl b/ITensorGaussianMPS/test/gmps.jl
deleted file mode 100644
index 64adb7041f..0000000000
--- a/ITensorGaussianMPS/test/gmps.jl
+++ /dev/null
@@ -1,239 +0,0 @@
-using ITensorGaussianMPS
-using ITensors
-using LinearAlgebra
-using Test
-
-@testset "Basic" begin
- # Test Givens rotations
- v = randn(6)
- g, r = ITensorGaussianMPS.givens_rotations(v)
- @test g * v ≈ r * [n == 1 ? 1 : 0 for n in 1:length(v)]
-end
-
-@testset "Hamiltonians" begin
- N = 8
- t = -0.8 ###nearest neighbor hopping
- mu = 0.0 ###on-site chemical potential
- pairing = 1.2
- os = OpSum()
- for i in 1:N
- if 1 < i < N
- js = [i - 1, i + 1]
- elseif i == 1
- js = [i + 1]
- else
- js = [i - 1]
- end
- for j in js
- os .+= t, "Cdag", i, "C", j
- end
- end
- h_hop = ITensorGaussianMPS.hopping_hamiltonian(os)
- for i in 1:N
- if 1 < i < N
- js = [i - 1, i + 1]
- elseif i == 1
- js = [i + 1]
- else
- js = [i - 1]
- end
- for j in js
- os .+= pairing / 2.0, "Cdag", i, "Cdag", j
- os .+= -conj(pairing / 2.0), "C", i, "C", j
- end
- end
-
- h_hopandpair = ITensorGaussianMPS.quadratic_hamiltonian(os)
- h_hopandpair_spinful = ITensorGaussianMPS.quadratic_hamiltonian(os, os)
-
- @test all(
- abs.(
- (
- 2 .* ITensorGaussianMPS.reverse_interleave(Matrix(h_hopandpair))[
- (N + 1):end, (N + 1):end
- ]
- ) - h_hop
- ) .< eps(Float32),
- )
-end
-
-@testset "Fermion (real and complex)" begin
- N = 10
- Nf = N ÷ 2
-
- # Hopping
- θs = [0.0, π / 8]
- for θ in θs
- t = exp(im * θ)
-
- # Hopping Hamiltonian
- h = Hermitian(diagm(1 => fill(-t, N - 1), -1 => fill(-conj(t), N - 1)))
- if θ == 0.0
- h = real(h)
- end
- e, u = eigen(h)
-
- @test h * u ≈ u * Diagonal(e)
-
- E = sum(e[1:Nf])
-
- # Get the Slater determinant
- Φ = u[:, 1:Nf]
- @test h * Φ ≈ Φ * Diagonal(e[1:Nf])
-
- # Diagonalize the correlation matrix as a
- # Gaussian MPS (GMPS)
- n, gmps = slater_determinant_to_gmps(Φ, N; maxblocksize=4)
- ns = round.(Int, n)
- @test sum(ns) == Nf
-
- Λ = conj(Φ) * transpose(Φ)
- @test gmps * Λ * gmps' ≈ Diagonal(ns) rtol = 1e-2
- @test gmps' * Diagonal(ns) * gmps ≈ Λ rtol = 1e-2
-
- # Form the MPS
- s = siteinds("Fermion", N; conserve_qns=true)
- ψ = slater_determinant_to_mps(s, Φ; maxblocksize=4)
- os = OpSum()
- for i in 1:N, j in 1:N
- if h[i, j] ≠ 0
- os .+= h[i, j], "Cdag", i, "C", j
- end
- end
- H = MPO(os, s)
-
- @test inner(ψ', H, ψ) ≈ E rtol = 1e-5
- @test inner(ψ', H, ψ) / norm(ψ) ≈ E rtol = 1e-5
-
- # Compare to DMRG
- sweeps = Sweeps(10)
- setmaxdim!(sweeps, 10, 20, 40, 60)
- setcutoff!(sweeps, 1E-12)
- energy, ψ̃ = dmrg(H, productMPS(s, n -> n ≤ Nf ? "1" : "0"), sweeps; outputlevel=0)
-
- # Create an mps
- @test abs(inner(ψ, ψ̃)) ≈ 1 rtol = 1e-5
- @test inner(ψ̃', H, ψ̃) ≈ inner(ψ', H, ψ) rtol = 1e-5
- @test E ≈ energy
- end
-end
-
-@testset "Fermion BCS (real,real - no pairing, complex)" begin
- N = 12
- Nf = N ÷ 2
- ts = [1.0, exp(im * pi / 3.0), 1.0]
- Deltas = [1.0, 1.0, 0.0]
- for (Delta, t) in zip(Deltas, ts)
- t = isreal(t) ? real(t) : t
- os_h = OpSum()
- for n in 1:(N - 1)
- os_h .+= -t, "Cdag", n, "C", n + 1
- os_h .+= -t', "Cdag", n + 1, "C", n
- end
- os_p = OpSum()
- for n in 1:(N - 1)
- os_p .+= Delta / 2.0, "Cdag", n, "Cdag", n + 1
- os_p .+= -Delta / 2.0, "Cdag", n + 1, "Cdag", n
- os_p .+= -Delta / 2.0, "C", n, "C", n + 1
- os_p .+= Delta / 2.0, "C", n + 1, "C", n
- end
-
- h = ITensorGaussianMPS.quadratic_hamiltonian(os_h + os_p)
- @assert ishermitian(h)
- ElT = eltype(h)
- e, u = ITensorGaussianMPS.eigen_gaussian(h)
- E = sum(e[1:(N)])
- Φ = (u[:, 1:N])
- @test h * Φ ≈ Φ * Diagonal(e[1:N])
- c = conj(Φ) * transpose(Φ)
- c2 = ITensorGaussianMPS.get_gaussian_GS_corr(h)
- @test norm(c - c2) <= sqrt(eps(real(eltype(h))))
- if ElT <: Real
- @assert norm(imag.(c)) <= sqrt(eps())
- c = real.(c)
- end
- n, gmps = correlation_matrix_to_gmps(ElT.(c), N; eigval_cutoff=1e-10, maxblocksize=14)
- ns = round.(Int, n)
- @test sum(ns) == N
-
- Λ = ITensorGaussianMPS.ConservesNfParity(c)
- @test gmps * Λ.data * gmps' ≈ Diagonal(ns) rtol = 1e-2
- @test gmps' * Diagonal(ns) * gmps ≈ Λ.data rtol = 1e-2
-
- # Form the MPS
- s = siteinds("Fermion", N; conserve_qns=false)
- h_mpo = MPO(os_h + os_p, s)
- psi = correlation_matrix_to_mps(
- s, ElT.(c); eigval_cutoff=1e-10, maxblocksize=14, cutoff=1e-11
- )
- @test eltype(psi[1]) <: ElT
- sweeps = Sweeps(5)
- _maxlinkdim = 60
- _cutoff = 1e-10
- setmaxdim!(sweeps, 10, 20, 40, _maxlinkdim)
- setcutoff!(sweeps, _cutoff)
- E_dmrg, psidmrg = dmrg(h_mpo, psi, sweeps; outputlevel=0)
- E_ni_mpo = inner(psi', h_mpo, psi)
- @test E_dmrg ≈ E_ni_mpo rtol = 1e-4
- @test inner(psidmrg, psi) ≈ 1 rtol = 1e-4
-
- # compare entries of the correlation matrix
- cdagc = correlation_matrix(psi, "Cdag", "C")
- cdagcdag = correlation_matrix(psi, "Cdag", "Cdag")
- ccdag = correlation_matrix(psi, "C", "Cdag")
- cc = correlation_matrix(psi, "C", "C")
- cblocked = ITensorGaussianMPS.reverse_interleave(c)
- tol = 1e-5
- @test all(abs.(cblocked[(N + 1):end, (N + 1):end] - cdagc[:, :]) .< tol)
- @test all(abs.(cblocked[1:N, 1:N] - ccdag[:, :]) .< tol)
- @test all(abs.(cblocked[1:N, (N + 1):end] - cc[:, :]) .< tol)
- @test all(abs.(cblocked[(N + 1):end, 1:N] - cdagcdag[:, :]) .< tol)
- println("Completed test for: Delta = ", Delta, ", t = ", t)
- end
-end
-
-@testset "Bad Terms" begin
- @testset "Bad single" begin
- os = OpSum()
- os += -1.0, "Nupdn", 1
- @test_throws Any h_hop = ITensorGaussianMPS.hopping_hamiltonian(os)
- end
- @testset "Bad quadratic" begin
- os = OpSum()
- os += -1.0, "Ntot", 1, "Ntot", 2
- @test_throws Any h_hop = ITensorGaussianMPS.hopping_hamiltonian(os)
- end
-end
-
-@testset "Rewrite Hamiltonians" begin
- @testset "Spinless" begin
- os = OpSum()
- os += -1.0, "Cdag", 1, "C", 2
- os += -1.0, "Cdag", 2, "C", 1
- os += 2, "N", 1
- os += 3, "N", 2
- h_hop = ITensorGaussianMPS.hopping_hamiltonian(os)
- @test h_hop[1, 1] == 2
- @test h_hop[2, 2] == 3
- end
- @testset "Spin $o" for o in ("up", "dn")
- os = OpSum()
- os += -1.0, "Cdag$o", 1, "C$o", 2
- os += -1.0, "Cdag$o", 2, "C$o", 1
- os += 2, "N$o", 1
- os += 3, "N$o", 2
- h_hop = ITensorGaussianMPS.hopping_hamiltonian(os)
- @test h_hop[1, 1] == 2
- @test h_hop[2, 2] == 3
- end
- @testset "Spin $o" for o in ("↑", "↓")
- os = OpSum()
- os += -1.0, "c†$o", 1, "c$o", 2
- os += -1.0, "c†$o", 2, "c$o", 1
- os += 2, "n$o", 1
- os += 3, "n$o", 2
- h_hop = ITensorGaussianMPS.hopping_hamiltonian(os)
- @test h_hop[1, 1] == 2
- @test h_hop[2, 2] == 3
- end
-end
diff --git a/ITensorGaussianMPS/test/linalg.jl b/ITensorGaussianMPS/test/linalg.jl
deleted file mode 100644
index 5ea9df39cd..0000000000
--- a/ITensorGaussianMPS/test/linalg.jl
+++ /dev/null
@@ -1,36 +0,0 @@
-using ITensorGaussianMPS
-using ITensors
-using LinearAlgebra
-using Test
-const GMPS = ITensorGaussianMPS
-
-@testset "Fermionic Hamiltonian diagonalization in parity-conserving frame" begin
- N = 10
- # generate random Hamiltonian in non-number-conserving space
- H = zeros(ComplexF64, 2 * N, 2 * N)
- hoffd = rand(N, N) .- 0.5 + im * (rand(N, N) .- 0.5)
- hoffd = (hoffd - transpose(hoffd)) ./ 2
- H[1:N, (N + 1):end] = hoffd
- H[(N + 1):end, 1:N] = -conj.(hoffd)
- hd = rand(N, N) .- 0.5 + im * (rand(N, N) .- 0.5)
- hd = (hd + hd') ./ 2
- H[1:N, 1:N] = -1 .* conj.(hd)
- H[(N + 1):end, (N + 1):end] = hd
- H = (H + H') ./ 2
- # compare spectrum, which can also accurately be computed via standard eigendecomposition
- d, U = GMPS._eigen_gaussian_blocked(Hermitian(H))
- d2, _ = eigen(Hermitian(H))
- d3, _ = GMPS.eigen_gaussian(Hermitian(GMPS.interleave(H)))
- @test sort(d) ≈ sort(d2)
- @test sort(d) ≈ sort(d3)
-end
-
-@testset "Undoing arbitrary complex rotation within degenerate subspaces" begin
- A = (x -> Matrix(qr(x).Q))(randn(5, 3))
- U = (x -> Matrix(qr(x).Q))(randn(ComplexF64, 3, 3))
- AU = A * U
- B = GMPS.make_subspace_real_if_possible(AU)
- # verify that same subspace is spanned by real eigenvectors B as original eigenvectors A or AU
- @test norm(((B * B' * A) .- A)) <= eps(Float64) * 10
- @test norm(((B * B' * AU) .- AU)) <= eps(Float64) * 10
-end
diff --git a/ITensorGaussianMPS/test/runtests.jl b/ITensorGaussianMPS/test/runtests.jl
deleted file mode 100644
index 8b8982fd86..0000000000
--- a/ITensorGaussianMPS/test/runtests.jl
+++ /dev/null
@@ -1,9 +0,0 @@
-using ITensorGaussianMPS
-using LinearAlgebra
-using Test
-
-@testset "ITensorGaussianMPS.jl" begin
- include("gmps.jl")
- include("electron.jl")
- include("linalg.jl")
-end
diff --git a/ITensorMakie/.JuliaFormatter.toml b/ITensorMakie/.JuliaFormatter.toml
deleted file mode 100644
index 08f664cdb9..0000000000
--- a/ITensorMakie/.JuliaFormatter.toml
+++ /dev/null
@@ -1,2 +0,0 @@
-style = "blue"
-indent = 2
diff --git a/ITensorMakie/LICENSE b/ITensorMakie/LICENSE
deleted file mode 100644
index 555297e50a..0000000000
--- a/ITensorMakie/LICENSE
+++ /dev/null
@@ -1,201 +0,0 @@
- Apache License
- Version 2.0, January 2004
- http://www.apache.org/licenses/
-
- TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
-
- 1. Definitions.
-
- "License" shall mean the terms and conditions for use, reproduction,
- and distribution as defined by Sections 1 through 9 of this document.
-
- "Licensor" shall mean the copyright owner or entity authorized by
- the copyright owner that is granting the License.
-
- "Legal Entity" shall mean the union of the acting entity and all
- other entities that control, are controlled by, or are under common
- control with that entity. For the purposes of this definition,
- "control" means (i) the power, direct or indirect, to cause the
- direction or management of such entity, whether by contract or
- otherwise, or (ii) ownership of fifty percent (50%) or more of the
- outstanding shares, or (iii) beneficial ownership of such entity.
-
- "You" (or "Your") shall mean an individual or Legal Entity
- exercising permissions granted by this License.
-
- "Source" form shall mean the preferred form for making modifications,
- including but not limited to software source code, documentation
- source, and configuration files.
-
- "Object" form shall mean any form resulting from mechanical
- transformation or translation of a Source form, including but
- not limited to compiled object code, generated documentation,
- and conversions to other media types.
-
- "Work" shall mean the work of authorship, whether in Source or
- Object form, made available under the License, as indicated by a
- copyright notice that is included in or attached to the work
- (an example is provided in the Appendix below).
-
- "Derivative Works" shall mean any work, whether in Source or Object
- form, that is based on (or derived from) the Work and for which the
- editorial revisions, annotations, elaborations, or other modifications
- represent, as a whole, an original work of authorship. For the purposes
- of this License, Derivative Works shall not include works that remain
- separable from, or merely link (or bind by name) to the interfaces of,
- the Work and Derivative Works thereof.
-
- "Contribution" shall mean any work of authorship, including
- the original version of the Work and any modifications or additions
- to that Work or Derivative Works thereof, that is intentionally
- submitted to Licensor for inclusion in the Work by the copyright owner
- or by an individual or Legal Entity authorized to submit on behalf of
- the copyright owner. For the purposes of this definition, "submitted"
- means any form of electronic, verbal, or written communication sent
- to the Licensor or its representatives, including but not limited to
- communication on electronic mailing lists, source code control systems,
- and issue tracking systems that are managed by, or on behalf of, the
- Licensor for the purpose of discussing and improving the Work, but
- excluding communication that is conspicuously marked or otherwise
- designated in writing by the copyright owner as "Not a Contribution."
-
- "Contributor" shall mean Licensor and any individual or Legal Entity
- on behalf of whom a Contribution has been received by Licensor and
- subsequently incorporated within the Work.
-
- 2. Grant of Copyright License. Subject to the terms and conditions of
- this License, each Contributor hereby grants to You a perpetual,
- worldwide, non-exclusive, no-charge, royalty-free, irrevocable
- copyright license to reproduce, prepare Derivative Works of,
- publicly display, publicly perform, sublicense, and distribute the
- Work and such Derivative Works in Source or Object form.
-
- 3. Grant of Patent License. Subject to the terms and conditions of
- this License, each Contributor hereby grants to You a perpetual,
- worldwide, non-exclusive, no-charge, royalty-free, irrevocable
- (except as stated in this section) patent license to make, have made,
- use, offer to sell, sell, import, and otherwise transfer the Work,
- where such license applies only to those patent claims licensable
- by such Contributor that are necessarily infringed by their
- Contribution(s) alone or by combination of their Contribution(s)
- with the Work to which such Contribution(s) was submitted. If You
- institute patent litigation against any entity (including a
- cross-claim or counterclaim in a lawsuit) alleging that the Work
- or a Contribution incorporated within the Work constitutes direct
- or contributory patent infringement, then any patent licenses
- granted to You under this License for that Work shall terminate
- as of the date such litigation is filed.
-
- 4. Redistribution. You may reproduce and distribute copies of the
- Work or Derivative Works thereof in any medium, with or without
- modifications, and in Source or Object form, provided that You
- meet the following conditions:
-
- (a) You must give any other recipients of the Work or
- Derivative Works a copy of this License; and
-
- (b) You must cause any modified files to carry prominent notices
- stating that You changed the files; and
-
- (c) You must retain, in the Source form of any Derivative Works
- that You distribute, all copyright, patent, trademark, and
- attribution notices from the Source form of the Work,
- excluding those notices that do not pertain to any part of
- the Derivative Works; and
-
- (d) If the Work includes a "NOTICE" text file as part of its
- distribution, then any Derivative Works that You distribute must
- include a readable copy of the attribution notices contained
- within such NOTICE file, excluding those notices that do not
- pertain to any part of the Derivative Works, in at least one
- of the following places: within a NOTICE text file distributed
- as part of the Derivative Works; within the Source form or
- documentation, if provided along with the Derivative Works; or,
- within a display generated by the Derivative Works, if and
- wherever such third-party notices normally appear. The contents
- of the NOTICE file are for informational purposes only and
- do not modify the License. You may add Your own attribution
- notices within Derivative Works that You distribute, alongside
- or as an addendum to the NOTICE text from the Work, provided
- that such additional attribution notices cannot be construed
- as modifying the License.
-
- You may add Your own copyright statement to Your modifications and
- may provide additional or different license terms and conditions
- for use, reproduction, or distribution of Your modifications, or
- for any such Derivative Works as a whole, provided Your use,
- reproduction, and distribution of the Work otherwise complies with
- the conditions stated in this License.
-
- 5. Submission of Contributions. Unless You explicitly state otherwise,
- any Contribution intentionally submitted for inclusion in the Work
- by You to the Licensor shall be under the terms and conditions of
- this License, without any additional terms or conditions.
- Notwithstanding the above, nothing herein shall supersede or modify
- the terms of any separate license agreement you may have executed
- with Licensor regarding such Contributions.
-
- 6. Trademarks. This License does not grant permission to use the trade
- names, trademarks, service marks, or product names of the Licensor,
- except as required for reasonable and customary use in describing the
- origin of the Work and reproducing the content of the NOTICE file.
-
- 7. Disclaimer of Warranty. Unless required by applicable law or
- agreed to in writing, Licensor provides the Work (and each
- Contributor provides its Contributions) on an "AS IS" BASIS,
- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
- implied, including, without limitation, any warranties or conditions
- of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
- PARTICULAR PURPOSE. You are solely responsible for determining the
- appropriateness of using or redistributing the Work and assume any
- risks associated with Your exercise of permissions under this License.
-
- 8. Limitation of Liability. In no event and under no legal theory,
- whether in tort (including negligence), contract, or otherwise,
- unless required by applicable law (such as deliberate and grossly
- negligent acts) or agreed to in writing, shall any Contributor be
- liable to You for damages, including any direct, indirect, special,
- incidental, or consequential damages of any character arising as a
- result of this License or out of the use or inability to use the
- Work (including but not limited to damages for loss of goodwill,
- work stoppage, computer failure or malfunction, or any and all
- other commercial damages or losses), even if such Contributor
- has been advised of the possibility of such damages.
-
- 9. Accepting Warranty or Additional Liability. While redistributing
- the Work or Derivative Works thereof, You may choose to offer,
- and charge a fee for, acceptance of support, warranty, indemnity,
- or other liability obligations and/or rights consistent with this
- License. However, in accepting such obligations, You may act only
- on Your own behalf and on Your sole responsibility, not on behalf
- of any other Contributor, and only if You agree to indemnify,
- defend, and hold each Contributor harmless for any liability
- incurred by, or claims asserted against, such Contributor by reason
- of your accepting any such warranty or additional liability.
-
- END OF TERMS AND CONDITIONS
-
- APPENDIX: How to apply the Apache License to your work.
-
- To apply the Apache License to your work, attach the following
- boilerplate notice, with the fields enclosed by brackets "[]"
- replaced with your own identifying information. (Don't include
- the brackets!) The text should be enclosed in the appropriate
- comment syntax for the file format. We also recommend that a
- file or class name and description of purpose be included on the
- same "printed page" as the copyright notice for easier
- identification within third-party archives.
-
- Copyright 2019 The Simons Foundation, Inc. - All Rights Reserved.
-
- Licensed under the Apache License, Version 2.0 (the "License");
- you may not use this file except in compliance with the License.
- You may obtain a copy of the License at
-
- http://www.apache.org/licenses/LICENSE-2.0
-
- Unless required by applicable law or agreed to in writing, software
- distributed under the License is distributed on an "AS IS" BASIS,
- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- See the License for the specific language governing permissions and
- limitations under the License.
diff --git a/ITensorMakie/NEWS.md b/ITensorMakie/NEWS.md
deleted file mode 100644
index 98867f07a2..0000000000
--- a/ITensorMakie/NEWS.md
+++ /dev/null
@@ -1,44 +0,0 @@
-This file is a (mostly) comprehensive list of changes made in each release of ITensorMakie.jl. For a completely comprehensive but more verbose list, see the [commit history on Github](https://github.com/ITensor/ITensors.jl/commits/main/ITensorMakie).
-
-While we are in v0.x of the package, we will follow the convention that updating from v0.x.y to v0.x.(y+1) (for example v0.1.15 to v0.1.16) should not break your code, unless you are using internal/undocumented features of the code, while updating from `v0.x.y` to `v0.(x+1).y` might break your code, though we will try to add deprecation warnings when possible, such as for simple cases where the name of a function changes.
-
-Note that as of Julia v1.5, in order to see deprecation warnings you will need to start Julia with `julia --depwarn=yes` (previously they were on by default). Please run your code like this before upgrading between minor versions of the code (for example from v0.1.41 to v0.2.0).
-
-After we release v1 of the package, we will start following [semantic versioning](https://semver.org).
-
-
-ITensorMakie v0.1.3 Release Notes
-=================================
-
-Bugs:
-
-Enhancements:
-
-- Bump GraphMakie compat to 0.5 (#1032)
-
-ITensorMakie v0.1.2 Release Notes
-=================================
-
-Bugs:
-
-Enhancements:
-
-- Update compats (#1031)
-
-ITensorMakie v0.1.1 Release Notes
-=================================
-
-Bugs:
-
-Enhancements:
-
-- Disable rotating edge labels with edge (#832)
-
-ITensorMakie v0.1.0 Release Notes
-=================================
-
-Bugs:
-
-Enhancements:
-
-- Register ITensorMakie package, code in ITensors.jl repository
diff --git a/ITensorMakie/Project.toml b/ITensorMakie/Project.toml
deleted file mode 100644
index 17018b71fe..0000000000
--- a/ITensorMakie/Project.toml
+++ /dev/null
@@ -1,21 +0,0 @@
-name = "ITensorMakie"
-uuid = "72ca75eb-df6f-4d6b-80c5-d5eab17be3f9"
-authors = ["Matthew Fishman "]
-version = "0.1.3"
-
-[deps]
-Colors = "5ae59095-9a9b-59fe-a467-6f913c188581"
-GraphMakie = "1ecd5474-83a3-4783-bb4f-06765db800d2"
-Graphs = "86223c79-3864-5bf0-83f7-82e725a168b6"
-ITensorVisualizationBase = "cd2553d2-8bef-4d93-8a38-c62f17d5ad23"
-NetworkLayout = "46757867-2c16-5918-afeb-47bfcb05e46a"
-Reexport = "189a3867-3050-52da-a836-e630ba90ab69"
-
-[compat]
-Colors = "0.12.8"
-GraphMakie = "0.3.0, 0.4, 0.5"
-Graphs = "1.4.1"
-ITensorVisualizationBase = "0.1.5"
-NetworkLayout = "0.4.3"
-Reexport = "1.2.2"
-julia = "1.6"
diff --git a/ITensorMakie/src/ITensorMakie.jl b/ITensorMakie/src/ITensorMakie.jl
deleted file mode 100644
index 31cf6d5148..0000000000
--- a/ITensorMakie/src/ITensorMakie.jl
+++ /dev/null
@@ -1,174 +0,0 @@
-module ITensorMakie
-
-using Colors
-using Graphs
-using NetworkLayout
-using Reexport
-using GraphMakie
-@reexport using ITensorVisualizationBase
-
-using GraphMakie.Makie:
- Makie,
- Figure,
- contents,
- hidedecorations!,
- hidespines!,
- deregister_interaction!,
- register_interaction!
-
-using ITensorVisualizationBase:
- @Backend_str,
- default_vertex_labels,
- default_vertex_labels_prefix,
- default_vertex_size,
- default_vertex_textsize,
- default_edge_textsize,
- default_edge_widths,
- default_edge_labels,
- default_arrow_show,
- default_arrow_size,
- default_siteinds_direction,
- is_self_loop,
- _ndims
-
-import ITensorVisualizationBase: visualize, visualize!, _graphplot
-
-function __init__()
- return ITensorVisualizationBase.set_backend!(Backend"Makie"())
-end
-
-fill_number(a::AbstractVector, n::Integer) = a
-fill_number(x::Number, n::Integer) = fill(x, n)
-
-function visualize(b::Backend"Makie", g::AbstractGraph; kwargs...)
- f = Figure()
- visualize!(b, f[1, 1], g; kwargs...)
- return f
-end
-
-function visualize!(b::Backend"Makie", f::Figure, g::AbstractGraph; kwargs...)
- visualize!(b, f[1, 1], g; kwargs...)
- return f
-end
-
-function visualize!(
- b::Backend"Makie",
- f::Makie.GridPosition,
- g::AbstractGraph;
- interactive=true,
- ndims=2,
- layout=Spring(; dim=ndims),
-
- # vertex
- vertex_labels_prefix=default_vertex_labels_prefix(b, g),
- vertex_labels=default_vertex_labels(b, g, vertex_labels_prefix),
- vertex_size=default_vertex_size(b, g),
- vertex_textsize=default_vertex_textsize(b, g),
-
- # edge
- edge_textsize=default_edge_textsize(b),
- edge_widths=default_edge_widths(b, g),
- edge_labels=default_edge_labels(b, g),
-
- # arrow
- arrow_show=default_arrow_show(b, g),
- arrow_size=default_arrow_size(b, g),
- siteinds_direction=default_siteinds_direction(b, g),
-)
- if ismissing(Makie.current_backend())
- error(
- """
- You have not loaded a backend. Please load one (`using GLMakie` or `using CairoMakie`)
- before trying to visualize a graph.
-"""
- )
- end
-
- edge_labels = ITensorVisualizationBase.edge_labels(b, edge_labels, g)
-
- if length(vertex_labels) ≠ nv(g)
- throw(
- DimensionMismatch(
- "$(length(vertex_labels)) vertex labels $(vertex_labels) were specified
- but there are $(nv(g)) tensors in the diagram, please specify the
- correct number of labels."
- ),
- )
- end
-
- graphplot_kwargs = (;
- layout=layout,
-
- # vertex
- node_size=fill_number(vertex_size, nv(g)),
- node_color=colorant"lightblue1", # TODO: store in vertex, make a default
- node_marker='●', # TODO: allow other options, like '◼'
- node_attr=(; strokecolor=:black, strokewidth=3),
-
- # vertex labels
- nlabels=vertex_labels,
- nlabels_textsize=vertex_textsize,
- nlabels_color=colorant"black",
- nlabels_align=(:center, :center),
-
- # edge
- edge_width=edge_widths,
- edge_color=colorant"black",
-
- # edge labels
- elabels=edge_labels,
- elabels_textsize=edge_textsize,
- elabels_color=colorant"red",
-
- # self-edge
- selfedge_width=1e-5, # Small enough so you can't see the loop, big enough for site label to show up
- selfedge_direction=siteinds_direction,
- selfedge_size=3,
-
- # arrow
- arrow_show=arrow_show,
- arrow_size=arrow_size,
- arrow_shift=0.49,
- )
-
- overwrite_axis = false
- if isempty(contents(f))
- axis_plot = graphplot(f, g; graphplot_kwargs...)
- else
-    @warn "Visualizing a graph in the same axis as an existing graph. This
-    feature is experimental and some features like interactivity might not work"
- overwrite_axis = true
- graphplot!(f, g; graphplot_kwargs...)
- end
-
- # Set rotation of edge labels to 0.
- axis_plot.plot.elabels_rotation[] = zeros(ne(g))
-
- if !overwrite_axis && (_ndims(layout) == 2)
- hidedecorations!(axis_plot.axis)
- # This would hide the box around the plot
- # TODO: make this optional
- #hidespines!(axis_plot.axis)
- if interactive
- deregister_interaction!(axis_plot.axis, :rectanglezoom)
- register_interaction!(axis_plot.axis, :nhover, NodeHoverHighlight(axis_plot.plot))
- register_interaction!(axis_plot.axis, :ehover, EdgeHoverHighlight(axis_plot.plot))
- register_interaction!(axis_plot.axis, :ndrag, NodeDrag(axis_plot.plot))
- register_interaction!(axis_plot.axis, :edrag, EdgeDrag(axis_plot.plot))
- end
- end
- return f
-end
-
-# For use in sequence visualization.
-# TODO: Make this more generalizable to other backends.
-function _graphplot(::Backend"Makie", graph; all_labels)
- fig, ax, plt = graphplot(
- reverse(graph); arrow_show=false, nlabels=all_labels, layout=Buchheim()
- )
- hidedecorations!(ax)
- #hidespines!(ax)
- return fig
-end
-
-end
diff --git a/ITensorMakie/test/Project.toml b/ITensorMakie/test/Project.toml
deleted file mode 100644
index 8c33cc0e18..0000000000
--- a/ITensorMakie/test/Project.toml
+++ /dev/null
@@ -1,10 +0,0 @@
-[deps]
-GLMakie = "e9467ef8-e4e7-5192-8a1a-b1aee30e663a"
-GeometryBasics = "5c1252a2-5f33-56bf-86c9-59e7332b4326"
-Graphs = "86223c79-3864-5bf0-83f7-82e725a168b6"
-ITensors = "9136182c-28ba-11e9-034c-db9fb085ebd5"
-LayeredLayouts = "f4a74d36-062a-4d48-97cd-1356bad1de4e"
-NetworkLayout = "46757867-2c16-5918-afeb-47bfcb05e46a"
-Random = "9a3f8284-a2c9-5f02-9a11-845980a1fd5c"
-ReferenceTests = "324d217c-45ce-50fc-942e-d289b448e8cf"
-Test = "8dfed614-e22c-5e08-85e1-65c5234f0b40"
diff --git a/ITensorMakie/test/references/R.png b/ITensorMakie/test/references/R.png
deleted file mode 100644
index 0b5f829f2c..0000000000
Binary files a/ITensorMakie/test/references/R.png and /dev/null differ
diff --git a/ITensorMakie/test/references/R.txt b/ITensorMakie/test/references/R.txt
deleted file mode 100644
index bf88cd6f00..0000000000
--- a/ITensorMakie/test/references/R.txt
+++ /dev/null
@@ -1,22 +0,0 @@
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀ERn2⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢀⠔⠁⡇⠑⠢⣀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡠⠊⠀20⠀⠀⠀⠑⠢⣀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀20⠊⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀10⠢⢄⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡠⠊⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠉⠢⢄⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡠⠊⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⣀⣉hn2⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢀⠜⠀⣀⣀⣀⣀⣀⡠⠤⠤⠤⠤2⠒⠒⠒⠒⠒⠉⠉⠉⠉⠉⠀⢀⠔⢹⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀ψn1n2⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢀⠔⠁⠀2⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡠⠊⠁⠀⠀⠀⠀⠉⠢⢄⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀10⠔⠁⠀⠀⠀⢸⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢀⡠⠊⠀⠀⠀⠀⠀⠀⠀⠀⠀2⊗2⊗2⠀⠀⠀⠀⠀⠀⠀⠀⠀⡠⠒⠁⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢀20⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠈⠒⢄⡀⠀⠀⠀⠀⡠⠊⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀⡠⠔⠁⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢈⣑hn1⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⡠⠊⠀⠀⢀⣀⣀⣀⣀⡠⠤⠤10⠤⠒⠒⠒⠒⠒⠉⠉⠉⠉⠉⠁⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀ELn0⠉⠉⠉⠁⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀2⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀20⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠃⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
\ No newline at end of file
diff --git a/ITensorMakie/test/references/R1.png b/ITensorMakie/test/references/R1.png
deleted file mode 100644
index 42e8defb8a..0000000000
Binary files a/ITensorMakie/test/references/R1.png and /dev/null differ
diff --git a/ITensorMakie/test/references/R1.txt b/ITensorMakie/test/references/R1.txt
deleted file mode 100644
index 4ac7548305..0000000000
--- a/ITensorMakie/test/references/R1.txt
+++ /dev/null
@@ -1,22 +0,0 @@
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀ELn0⣀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⣷⠀⠀⠉⠉⠑⠒⠒⠤⠤⢄⣀⣀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⡇⢇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠉⠉⠑⠒⠒⠤⠤10⣀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀20⠘⡄⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠉⠉⠉⠒⠒⠢⠤⠤⣀⣀⡀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⡇⠀⢱⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠈⠉⠉⠒⠒⠢⠤hn1⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⡇⠀⠀⢇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⣀⠤⠒⠉⢸⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠁⠀⠀⠘⡄⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢀⡠⠒⠉⠀⠀⠀⠀⢸⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀20⡀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢀⡠⠔⠊⠁⠀⠀⠀⠀⠀2⊗10⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢣⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢀⡠⠔⠊⠁⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢸⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠈⡆⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢀⡠2⊗2⊗2⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢸⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠸⡀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢀⡠⠔⠊⠁⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠈⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢣⠀⠀⠀⠀⠀⢀⡠⠔⠊⠁⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠈⡆⢀⡠⠔⠊⠁⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀ψn1n2⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢸⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀2⊗20⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢸⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
\ No newline at end of file
diff --git a/ITensorMakie/test/references/R2.png b/ITensorMakie/test/references/R2.png
deleted file mode 100644
index 69f5e8bcf5..0000000000
Binary files a/ITensorMakie/test/references/R2.png and /dev/null differ
diff --git a/ITensorMakie/test/references/R2.txt b/ITensorMakie/test/references/R2.txt
deleted file mode 100644
index 30dabf0db8..0000000000
--- a/ITensorMakie/test/references/R2.txt
+++ /dev/null
@@ -1,22 +0,0 @@
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀T1⣀⣀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⣷⠀⠀⠉⠉⠑⠒⠒⠤⠤⢄⣀⣀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⡇⢇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠉⠉⠑⠒⠒⠤⠤20⣀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀20⊗2⡄⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠉⠉⠉⠒⠒⠢⠤⠤⣀⣀⡀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⡇⠀⢱⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠈⠉⠉⠒⠒⠢⠤T3⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⡇⠀⠀⢇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⣀⠤⠒⠉⢸⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠁⠀⠀⠘⡄⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢀⡠⠒⠉⠀⠀⠀⠀⢸⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀2⊗10⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢀⡠⠔⠊⠁⠀⠀⠀⠀⠀⠀20⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢣⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢀⡠⠔⠊⠁⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢸⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠈⡆⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢀⡠⠔10⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢸⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠸⡀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢀⡠⠔⠊⠁⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠈⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢣⠀⠀⠀⠀⠀⢀⡠⠔⠊⠁⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠈⡆⢀⡠⠔⠊⠁⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀T2⠁⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢸⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀2⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢸⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
\ No newline at end of file
diff --git a/ITensorMakie/test/references/grid.png b/ITensorMakie/test/references/grid.png
deleted file mode 100644
index 2e52dbfd91..0000000000
Binary files a/ITensorMakie/test/references/grid.png and /dev/null differ
diff --git a/ITensorMakie/test/references/tn.png b/ITensorMakie/test/references/tn.png
deleted file mode 100644
index 07ece15c93..0000000000
Binary files a/ITensorMakie/test/references/tn.png and /dev/null differ
diff --git a/ITensorMakie/test/references/tn.txt b/ITensorMakie/test/references/tn.txt
deleted file mode 100644
index 863ad1c718..0000000000
--- a/ITensorMakie/test/references/tn.txt
+++ /dev/null
@@ -1,22 +0,0 @@
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀tn₅⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢀⠔⠁⡇⠑⠢⣀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡠⠊⠀20⠀⠀⠀⠑⠢⣀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀20⠊⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀10⠢⢄⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡠⠊⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠉⠢⢄⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡠⠊⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⣀⣉tn₄⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢀⠜⠀⣀⣀⣀⣀⣀⡠⠤⠤⠤⠤2⠒⠒⠒⠒⠒⠉⠉⠉⠉⠉⠀⢀⠔⢹⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢀tn₂⣉⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢀⠔⠁⠀2⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡠⠊⠁⠀⠀⠀⠀⠉⠢⢄⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀10⠔⠁⠀⠀⠀⢸⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢀⡠⠊⠀⠀⠀⠀⠀⠀⠀⠀⠀2⊗2⊗2⠀⠀⠀⠀⠀⠀⠀⠀⠀⡠⠒⠁⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢀20⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠈⠒⢄⡀⠀⠀⠀⠀⡠⠊⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀⡠⠔⠁⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢈⣑tn₃⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⡠⠊⠀⠀⢀⣀⣀⣀⣀⡠⠤⠤10⠤⠒⠒⠒⠒⠒⠉⠉⠉⠉⠉⠁⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀tn₁⠉⠉⠉⠁⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀2⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀20⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠃⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
\ No newline at end of file
diff --git a/ITensorMakie/test/runtests.jl b/ITensorMakie/test/runtests.jl
deleted file mode 100644
index 251cc6cfa8..0000000000
--- a/ITensorMakie/test/runtests.jl
+++ /dev/null
@@ -1,14 +0,0 @@
-using ITensors
-using ITensorMakie
-using Test
-
-starts_and_ends_with(file, st, en) = startswith(file, st) && endswith(file, en)
-starts_and_ends_with(st, en) = file -> starts_and_ends_with(file, st, en)
-
-test_path = joinpath(@__DIR__)
-test_files = filter(starts_and_ends_with("test_", ".jl"), readdir(test_path))
-@testset "ITensorMakie.jl" for file in test_files
- file_path = joinpath(test_path, file)
- println("Running test $(file_path)")
- include(file_path)
-end
diff --git a/ITensorMakie/test/test_basics.jl b/ITensorMakie/test/test_basics.jl
deleted file mode 100644
index 7fa11c6f3b..0000000000
--- a/ITensorMakie/test/test_basics.jl
+++ /dev/null
@@ -1,54 +0,0 @@
-using ITensors
-using ITensorMakie
-using GLMakie
-using ReferenceTests
-using Test
-
-@testset "Basic test for ITensorMakie" begin
- extension = "png"
-
- N = 10
- s(n) = Index([QN("Sz", 0) => 1, QN("Sz", 1) => 1]; tags="S=1/2,Site,n=$n")
- l(n) = Index([QN("Sz", 0) => 10, QN("Sz", 1) => 10]; tags="Link,l=$n")
- h(n) = Index([QN("Sz", 0) => 5, QN("Sz", 1) => 5]; tags="ham,Link,l=$n")
- s⃗ = [s(n) for n in 1:N]
- l⃗ = [l(n) for n in 1:(N - 1)]
- h⃗ = [h(n) for n in 1:(N - 1)]
-
- # Add some more indices between two of the tensors
- x = Index([QN("Sz", 0) => 2]; tags="X")
- y = Index([QN("Sz", 0) => 2]; tags="Y")
-
- n = 2
- ψn1n2 = randomITensor(l⃗[n - 1], s⃗[n], s⃗[n + 1], l⃗[n + 1], dag(x), dag(y))
- hn1 = randomITensor(dag(h⃗[n - 1]), s⃗[n]', dag(s⃗[n]), h⃗[n], x, y)
- hn2 = randomITensor(dag(h⃗[n]), s⃗[n + 1]', dag(s⃗[n + 1]), h⃗[n + 1])
- ELn0 = randomITensor(l⃗[n - 1]', h⃗[n - 1], dag(l⃗[n - 1]))
- ERn2 = randomITensor(l⃗[n + 1]', dag(h⃗[n + 1]), dag(l⃗[n + 1]))
-
- tn = [ELn0, ψn1n2, hn1, hn2, ERn2]
-
- R = @visualize figR ELn0 * ψn1n2 * hn1 * hn2 * ERn2
- R1 = @visualize figR1 ELn0 * ψn1n2 * hn1
- R2 = @visualize figR2 R1 * hn2 * ERn2 vertex_labels = ["T1", "T2", "T3"]
-
- fig_tn = @visualize_noeval tn
-
- by = extension == "png" ? psnr_equality(0.5) : isequal
-
- @test_reference "references/R.$extension" figR by = by
- @test_reference "references/R1.$extension" figR1 by = by
- @test_reference "references/R2.$extension" figR2 by = by
- @test_reference "references/tn.$extension" fig_tn by = by
-
- R = @visualize fig_grid ELn0 * ψn1n2 * hn1 * hn2 * ERn2
- R1 = @visualize! fig_grid[1, 2] ELn0 * ψn1n2 * hn1
- R2 = @visualize! fig_grid[2, 1] R1 * hn2 * ERn2 vertex_labels = ["T1", "T2", "T3"]
- @visualize_noeval! fig_grid[2, 2] tn
-
- # XXX: Broken, passes locally but fails on CI with:
- # Warning: test fails because PSNR -0.6602330207824707 < 1
- #@test_reference "references/grid.$extension" fig_grid by=by
-
- @test_throws DimensionMismatch @visualize fig R1 * hn2 * ERn2 vertex_labels = ["T1", "T2"]
-end
diff --git a/ITensorUnicodePlots/.JuliaFormatter.toml b/ITensorUnicodePlots/.JuliaFormatter.toml
deleted file mode 100644
index 08f664cdb9..0000000000
--- a/ITensorUnicodePlots/.JuliaFormatter.toml
+++ /dev/null
@@ -1,2 +0,0 @@
-style = "blue"
-indent = 2
diff --git a/ITensorUnicodePlots/LICENSE b/ITensorUnicodePlots/LICENSE
deleted file mode 100644
index 555297e50a..0000000000
--- a/ITensorUnicodePlots/LICENSE
+++ /dev/null
@@ -1,201 +0,0 @@
- Apache License
- Version 2.0, January 2004
- http://www.apache.org/licenses/
-
- TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
-
- 1. Definitions.
-
- "License" shall mean the terms and conditions for use, reproduction,
- and distribution as defined by Sections 1 through 9 of this document.
-
- "Licensor" shall mean the copyright owner or entity authorized by
- the copyright owner that is granting the License.
-
- "Legal Entity" shall mean the union of the acting entity and all
- other entities that control, are controlled by, or are under common
- control with that entity. For the purposes of this definition,
- "control" means (i) the power, direct or indirect, to cause the
- direction or management of such entity, whether by contract or
- otherwise, or (ii) ownership of fifty percent (50%) or more of the
- outstanding shares, or (iii) beneficial ownership of such entity.
-
- "You" (or "Your") shall mean an individual or Legal Entity
- exercising permissions granted by this License.
-
- "Source" form shall mean the preferred form for making modifications,
- including but not limited to software source code, documentation
- source, and configuration files.
-
- "Object" form shall mean any form resulting from mechanical
- transformation or translation of a Source form, including but
- not limited to compiled object code, generated documentation,
- and conversions to other media types.
-
- "Work" shall mean the work of authorship, whether in Source or
- Object form, made available under the License, as indicated by a
- copyright notice that is included in or attached to the work
- (an example is provided in the Appendix below).
-
- "Derivative Works" shall mean any work, whether in Source or Object
- form, that is based on (or derived from) the Work and for which the
- editorial revisions, annotations, elaborations, or other modifications
- represent, as a whole, an original work of authorship. For the purposes
- of this License, Derivative Works shall not include works that remain
- separable from, or merely link (or bind by name) to the interfaces of,
- the Work and Derivative Works thereof.
-
- "Contribution" shall mean any work of authorship, including
- the original version of the Work and any modifications or additions
- to that Work or Derivative Works thereof, that is intentionally
- submitted to Licensor for inclusion in the Work by the copyright owner
- or by an individual or Legal Entity authorized to submit on behalf of
- the copyright owner. For the purposes of this definition, "submitted"
- means any form of electronic, verbal, or written communication sent
- to the Licensor or its representatives, including but not limited to
- communication on electronic mailing lists, source code control systems,
- and issue tracking systems that are managed by, or on behalf of, the
- Licensor for the purpose of discussing and improving the Work, but
- excluding communication that is conspicuously marked or otherwise
- designated in writing by the copyright owner as "Not a Contribution."
-
- "Contributor" shall mean Licensor and any individual or Legal Entity
- on behalf of whom a Contribution has been received by Licensor and
- subsequently incorporated within the Work.
-
- 2. Grant of Copyright License. Subject to the terms and conditions of
- this License, each Contributor hereby grants to You a perpetual,
- worldwide, non-exclusive, no-charge, royalty-free, irrevocable
- copyright license to reproduce, prepare Derivative Works of,
- publicly display, publicly perform, sublicense, and distribute the
- Work and such Derivative Works in Source or Object form.
-
- 3. Grant of Patent License. Subject to the terms and conditions of
- this License, each Contributor hereby grants to You a perpetual,
- worldwide, non-exclusive, no-charge, royalty-free, irrevocable
- (except as stated in this section) patent license to make, have made,
- use, offer to sell, sell, import, and otherwise transfer the Work,
- where such license applies only to those patent claims licensable
- by such Contributor that are necessarily infringed by their
- Contribution(s) alone or by combination of their Contribution(s)
- with the Work to which such Contribution(s) was submitted. If You
- institute patent litigation against any entity (including a
- cross-claim or counterclaim in a lawsuit) alleging that the Work
- or a Contribution incorporated within the Work constitutes direct
- or contributory patent infringement, then any patent licenses
- granted to You under this License for that Work shall terminate
- as of the date such litigation is filed.
-
- 4. Redistribution. You may reproduce and distribute copies of the
- Work or Derivative Works thereof in any medium, with or without
- modifications, and in Source or Object form, provided that You
- meet the following conditions:
-
- (a) You must give any other recipients of the Work or
- Derivative Works a copy of this License; and
-
- (b) You must cause any modified files to carry prominent notices
- stating that You changed the files; and
-
- (c) You must retain, in the Source form of any Derivative Works
- that You distribute, all copyright, patent, trademark, and
- attribution notices from the Source form of the Work,
- excluding those notices that do not pertain to any part of
- the Derivative Works; and
-
- (d) If the Work includes a "NOTICE" text file as part of its
- distribution, then any Derivative Works that You distribute must
- include a readable copy of the attribution notices contained
- within such NOTICE file, excluding those notices that do not
- pertain to any part of the Derivative Works, in at least one
- of the following places: within a NOTICE text file distributed
- as part of the Derivative Works; within the Source form or
- documentation, if provided along with the Derivative Works; or,
- within a display generated by the Derivative Works, if and
- wherever such third-party notices normally appear. The contents
- of the NOTICE file are for informational purposes only and
- do not modify the License. You may add Your own attribution
- notices within Derivative Works that You distribute, alongside
- or as an addendum to the NOTICE text from the Work, provided
- that such additional attribution notices cannot be construed
- as modifying the License.
-
- You may add Your own copyright statement to Your modifications and
- may provide additional or different license terms and conditions
- for use, reproduction, or distribution of Your modifications, or
- for any such Derivative Works as a whole, provided Your use,
- reproduction, and distribution of the Work otherwise complies with
- the conditions stated in this License.
-
- 5. Submission of Contributions. Unless You explicitly state otherwise,
- any Contribution intentionally submitted for inclusion in the Work
- by You to the Licensor shall be under the terms and conditions of
- this License, without any additional terms or conditions.
- Notwithstanding the above, nothing herein shall supersede or modify
- the terms of any separate license agreement you may have executed
- with Licensor regarding such Contributions.
-
- 6. Trademarks. This License does not grant permission to use the trade
- names, trademarks, service marks, or product names of the Licensor,
- except as required for reasonable and customary use in describing the
- origin of the Work and reproducing the content of the NOTICE file.
-
- 7. Disclaimer of Warranty. Unless required by applicable law or
- agreed to in writing, Licensor provides the Work (and each
- Contributor provides its Contributions) on an "AS IS" BASIS,
- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
- implied, including, without limitation, any warranties or conditions
- of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
- PARTICULAR PURPOSE. You are solely responsible for determining the
- appropriateness of using or redistributing the Work and assume any
- risks associated with Your exercise of permissions under this License.
-
- 8. Limitation of Liability. In no event and under no legal theory,
- whether in tort (including negligence), contract, or otherwise,
- unless required by applicable law (such as deliberate and grossly
- negligent acts) or agreed to in writing, shall any Contributor be
- liable to You for damages, including any direct, indirect, special,
- incidental, or consequential damages of any character arising as a
- result of this License or out of the use or inability to use the
- Work (including but not limited to damages for loss of goodwill,
- work stoppage, computer failure or malfunction, or any and all
- other commercial damages or losses), even if such Contributor
- has been advised of the possibility of such damages.
-
- 9. Accepting Warranty or Additional Liability. While redistributing
- the Work or Derivative Works thereof, You may choose to offer,
- and charge a fee for, acceptance of support, warranty, indemnity,
- or other liability obligations and/or rights consistent with this
- License. However, in accepting such obligations, You may act only
- on Your own behalf and on Your sole responsibility, not on behalf
- of any other Contributor, and only if You agree to indemnify,
- defend, and hold each Contributor harmless for any liability
- incurred by, or claims asserted against, such Contributor by reason
- of your accepting any such warranty or additional liability.
-
- END OF TERMS AND CONDITIONS
-
- APPENDIX: How to apply the Apache License to your work.
-
- To apply the Apache License to your work, attach the following
- boilerplate notice, with the fields enclosed by brackets "[]"
- replaced with your own identifying information. (Don't include
- the brackets!) The text should be enclosed in the appropriate
- comment syntax for the file format. We also recommend that a
- file or class name and description of purpose be included on the
- same "printed page" as the copyright notice for easier
- identification within third-party archives.
-
- Copyright 2019 The Simons Foundation, Inc. - All Rights Reserved.
-
- Licensed under the Apache License, Version 2.0 (the "License");
- you may not use this file except in compliance with the License.
- You may obtain a copy of the License at
-
- http://www.apache.org/licenses/LICENSE-2.0
-
- Unless required by applicable law or agreed to in writing, software
- distributed under the License is distributed on an "AS IS" BASIS,
- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- See the License for the specific language governing permissions and
- limitations under the License.
diff --git a/ITensorUnicodePlots/NEWS.md b/ITensorUnicodePlots/NEWS.md
deleted file mode 100644
index d741131e01..0000000000
--- a/ITensorUnicodePlots/NEWS.md
+++ /dev/null
@@ -1,35 +0,0 @@
-This file is a (mostly) comprehensive list of changes made in each release of ITensorUnicodePlots.jl. For a completely comprehensive but more verbose list, see the [commit history on Github](https://github.com/ITensor/ITensors.jl/commits/main/ITensorUnicodePlots).
-
-While we are in v0.x of the package, we will follow the convention that updating from v0.x.y to v0.x.(y+1) (for example v0.1.15 to v0.1.16) should not break your code, unless you are using internal/undocumented features of the code, while updating from `v0.x.y` to `v0.(x+1).y` might break your code, though we will try to add deprecation warnings when possible, such as for simple cases where the name of a function changes.
-
-Note that as of Julia v1.5, in order to see deprecation warnings you will need to start Julia with `julia --depwarn=yes` (previously they were on by default). Please run your code like this before upgrading between minor versions of the code (for example from v0.1.41 to v0.2.0).
-
-After we release v1 of the package, we will start following [semantic versioning](https://semver.org).
-
-ITensorUnicodePlots v0.1.3 Release Notes
-========================================
-
-Bugs
-
-Enhancements
-
-- Update compats (#1031)
-
-ITensorUnicodePlots v0.1.2 Release Notes
-========================================
-
-Bugs
-
-Enhancements
-
-- Drop explicit dependency on ITensors
-
-ITensorUnicodePlots v0.1.1 Release Notes
-========================================
-
-- Remove newlines from unicode visualization (#819)
-
-ITensorUnicodePlots v0.1.0 Release Notes
-========================================
-
-- Register ITensorUnicodePlots package, code in ITensors.jl repository
diff --git a/ITensorUnicodePlots/Project.toml b/ITensorUnicodePlots/Project.toml
deleted file mode 100644
index 14b2d5ea48..0000000000
--- a/ITensorUnicodePlots/Project.toml
+++ /dev/null
@@ -1,28 +0,0 @@
-name = "ITensorUnicodePlots"
-uuid = "73163f41-4a9e-479f-8353-73bf94dbd758"
-authors = ["Matthew Fishman "]
-version = "0.1.3"
-
-[deps]
-Graphs = "86223c79-3864-5bf0-83f7-82e725a168b6"
-ITensorVisualizationBase = "cd2553d2-8bef-4d93-8a38-c62f17d5ad23"
-LinearAlgebra = "37e2e46d-f89d-539d-b4ee-838fcccc9c8e"
-NetworkLayout = "46757867-2c16-5918-afeb-47bfcb05e46a"
-Reexport = "189a3867-3050-52da-a836-e630ba90ab69"
-Statistics = "10745b16-79ce-11e8-11f9-7d13ad32a3b2"
-UnicodePlots = "b8865327-cd53-5732-bb35-84acbb429228"
-
-[compat]
-Graphs = "1.4.1"
-ITensorVisualizationBase = "0.1.5"
-NetworkLayout = "0.4.3"
-Reexport = "1.2.2"
-Statistics = "1"
-UnicodePlots = "2.5.0, 3"
-julia = "1.6"
-
-[extras]
-Test = "8dfed614-e22c-5e08-85e1-65c5234f0b40"
-
-[targets]
-test = ["Test"]
diff --git a/ITensorUnicodePlots/examples/ex_2d_tensor_network_layered.jl b/ITensorUnicodePlots/examples/ex_2d_tensor_network_layered.jl
deleted file mode 100644
index 25f39b6ace..0000000000
--- a/ITensorUnicodePlots/examples/ex_2d_tensor_network_layered.jl
+++ /dev/null
@@ -1,10 +0,0 @@
-using ITensors
-using ITensorUnicodePlots
-using Graphs
-using LayeredLayouts
-
-tn = itensornetwork(grid((4, 4)); linkspaces=3)
-layout(g) = layered_layout(solve_positions(Zarate(), g))
-@visualize fig tn arrow_show = true layout = layout
-
-fig
diff --git a/ITensorUnicodePlots/examples/ex_dmrg.jl b/ITensorUnicodePlots/examples/ex_dmrg.jl
deleted file mode 100644
index 5ca72b129f..0000000000
--- a/ITensorUnicodePlots/examples/ex_dmrg.jl
+++ /dev/null
@@ -1,34 +0,0 @@
-using ITensors
-using ITensorUnicodePlots
-
-N = 10
-sites(n) = Index([QN("Sz", 0) => 1, QN("Sz", 1) => 1]; tags="S=1/2,Site,n=$n")
-l(n) = Index([QN("Sz", 0) => 10, QN("Sz", 1) => 10]; tags="Link,l=$n")
-h(n) = Index([QN("Sz", 0) => 5, QN("Sz", 1) => 5]; tags="ham,Link,l=$n")
-s⃗ = [sites(n) for n in 1:N]
-l⃗ = [l(n) for n in 1:(N - 1)]
-h⃗ = [h(n) for n in 1:(N - 1)]
-
-# Add some more indices between two of the tensors
-x = Index([QN("Sz", 0) => 2]; tags="X")
-y = Index([QN("Sz", 0) => 2]; tags="Y")
-
-n = 2
-ψn1n2 = randomITensor(l⃗[n - 1], s⃗[n], s⃗[n + 1], l⃗[n + 1], dag(x), dag(y))
-hn1 = randomITensor(dag(h⃗[n - 1]), s⃗[n]', dag(s⃗[n]), h⃗[n], x, y)
-hn2 = randomITensor(dag(h⃗[n]), s⃗[n + 1]', dag(s⃗[n + 1]), h⃗[n + 1])
-ELn0 = randomITensor(l⃗[n - 1]', h⃗[n - 1], dag(l⃗[n - 1]))
-ERn2 = randomITensor(l⃗[n + 1]', dag(h⃗[n + 1]), dag(l⃗[n + 1]))
-
-edge_labels = (; plevs=true)
-
-R = @visualize fig1 ELn0 * ψn1n2 * hn1 * hn2 * ERn2 edge_labels = edge_labels vertex_size =
- 50
-@show R ≈ ELn0 * ψn1n2 * hn1 * hn2 * ERn2
-
-# Split it up into multiple contractions
-R1 = @visualize fig2 ELn0 * ψn1n2 * hn1 edge_labels = edge_labels vertex_size = 50
-R2 = @visualize fig3 R1 * hn2 * ERn2 edge_labels = edge_labels vertex_size = 50
-@show R2 ≈ ELn0 * ψn1n2 * hn1 * hn2 * ERn2
-
-fig1, fig2, fig3
diff --git a/ITensorUnicodePlots/examples/ex_grid_layout.jl b/ITensorUnicodePlots/examples/ex_grid_layout.jl
deleted file mode 100644
index 79fe9447d5..0000000000
--- a/ITensorUnicodePlots/examples/ex_grid_layout.jl
+++ /dev/null
@@ -1,13 +0,0 @@
-using ITensors
-using ITensorUnicodePlots
-using Graphs
-using GeometryBasics
-using NetworkLayout
-
-N = 10
-g = grid((N,))
-tn = itensornetwork(g; linkspaces=10, sitespaces=2)
-@visualize fig tn siteinds_direction = Point(1, -0.5) layout = SquareGrid(; cols=1) width =
- 20 height = 50
-
-fig
diff --git a/ITensorUnicodePlots/examples/ex_itensor_graph_unicode.jl b/ITensorUnicodePlots/examples/ex_itensor_graph_unicode.jl
deleted file mode 100644
index 7eb0dceee8..0000000000
--- a/ITensorUnicodePlots/examples/ex_itensor_graph_unicode.jl
+++ /dev/null
@@ -1,9 +0,0 @@
-using ITensors
-using ITensorUnicodePlots
-using Graphs
-
-g = grid((5,))
-tn = itensornetwork(g; linkspaces=10, sitespaces=2)
-@visualize fig tn
-
-fig
diff --git a/ITensorUnicodePlots/examples/ex_qn_mps.jl b/ITensorUnicodePlots/examples/ex_qn_mps.jl
deleted file mode 100644
index 11a3b59b92..0000000000
--- a/ITensorUnicodePlots/examples/ex_qn_mps.jl
+++ /dev/null
@@ -1,13 +0,0 @@
-using ITensors
-using ITensorUnicodePlots
-
-s = siteinds("S=1/2", 5; conserve_qns=true)
-ψ = randomMPS(s, n -> isodd(n) ? "↑" : "↓"; linkdims=2)
-ψ = orthogonalize(ψ, 2)
-ψdag = prime(linkinds, dag(ψ))
-tn = [ψ..., ψdag...]
-
-edge_labels = (; plevs=true)
-@visualize fig tn edge_labels = edge_labels edge_textsize = 20 width = 100
-
-fig
diff --git a/ITensorUnicodePlots/examples/ex_quantum_circuit.jl b/ITensorUnicodePlots/examples/ex_quantum_circuit.jl
deleted file mode 100644
index 4182f7c914..0000000000
--- a/ITensorUnicodePlots/examples/ex_quantum_circuit.jl
+++ /dev/null
@@ -1,33 +0,0 @@
-using ITensors
-using ITensorUnicodePlots
-using Graphs
-using LayeredLayouts
-
-N = 6
-layers = 6
-ndelete = 0
-
-s = siteinds("Qubit", N)
-layer(N, start) = [("CX", i, i + 1) for i in start:2:(N - 1)]
-layer(N) = append!(layer(N, 1), layer(N, 2))
-layer_N = layer(N)
-gates = []
-for _ in 1:layers
- append!(gates, layer_N)
-end
-
-for _ in 1:ndelete
- deleteat!(gates, rand(eachindex(gates)))
-end
-
-U, s̃ = circuit_network(gates, s)
-ψ = prod(MPS(s))
-ψ̃ = prod(MPS(s̃))
-tn = [ψ, U..., ψ̃]
-
-edge_labels = (; plevs=true)
-layout(g) = layered_layout(solve_positions(Zarate(), g))
-@visualize fig tn arrow_show = true edge_labels = edge_labels layout = layout width = 90 height =
- 40
-
-fig
diff --git a/ITensorUnicodePlots/examples/notest_ex_qft_circuit.jl b/ITensorUnicodePlots/examples/notest_ex_qft_circuit.jl
deleted file mode 100644
index 865b7e6fe6..0000000000
--- a/ITensorUnicodePlots/examples/notest_ex_qft_circuit.jl
+++ /dev/null
@@ -1,22 +0,0 @@
-using ITensors
-using ITensorUnicodePlots
-using Graphs
-using LayeredLayouts
-using PastaQ: qft
-
-N = 4
-gates = qft(N)
-
-s = siteinds("Qubit", N)
-
-U, s̃ = circuit_network(gates, s)
-ψ = MPS(s)
-ψ̃ = MPS(s̃)
-tn = [ψ..., U..., ψ̃...]
-
-edge_labels = (; tags=true, plevs=true)
-layout(g) = layered_layout(solve_positions(Zarate(), g))
-@visualize fig tn arrow_show = true edge_labels = edge_labels edge_textsize = 20 layout =
- layout width = 100 height = 50
-
-fig
diff --git a/ITensorUnicodePlots/src/ITensorUnicodePlots.jl b/ITensorUnicodePlots/src/ITensorUnicodePlots.jl
deleted file mode 100644
index f6cbc4f5db..0000000000
--- a/ITensorUnicodePlots/src/ITensorUnicodePlots.jl
+++ /dev/null
@@ -1,148 +0,0 @@
-module ITensorUnicodePlots
-
-using Graphs
-using NetworkLayout
-using Reexport
-using Statistics
-@reexport using ITensorVisualizationBase
-
-using ITensorVisualizationBase:
- @Backend_str,
- default_vertex_labels,
- default_vertex_labels_prefix,
- default_vertex_size,
- default_vertex_textsize,
- default_edge_textsize,
- default_edge_widths,
- default_edge_labels,
- default_arrow_show,
- default_arrow_size,
- default_siteinds_direction,
- is_self_loop
-
-using UnicodePlots: UnicodePlots
-
-import ITensorVisualizationBase: visualize, default_newlines
-
-function __init__()
- return ITensorVisualizationBase.set_backend!(Backend"UnicodePlots"())
-end
-
-function plot(::Backend"UnicodePlots"; xlim, ylim, width, height)
- plot = UnicodePlots.lineplot(
- [0.0],
- [0.0];
- border=:none,
- labels=false,
- grid=false,
- xlim=xlim,
- ylim=ylim,
- width=width,
- height=height,
- )
- return plot
-end
-
-function draw_edge!(b::Backend"UnicodePlots", plot, v1, v2; color)
- UnicodePlots.lineplot!(plot, [v1[1], v2[1]], [v1[2], v2[2]]; color=color)
- return plot
-end
-
-function annotate!(::Backend"UnicodePlots", plot, x, y, str)
- UnicodePlots.annotate!(plot, x, y, str)
- return plot
-end
-
-# Don't use new lines by default, it messes up the formatting
-default_newlines(::Backend"UnicodePlots") = false
-
-function visualize(
- b::Backend"UnicodePlots",
- g::AbstractGraph;
- interactive=false, # TODO: change to `default_interactive(b)`
- ndims=2, # TODO: change to `default_ndims(b)`
- layout=Spring(; dim=ndims), # TODO: change to `default_layout(b, ndims)`
-
- # vertex
- vertex_labels_prefix=default_vertex_labels_prefix(b, g),
- vertex_labels=default_vertex_labels(b, g, vertex_labels_prefix),
- vertex_size=default_vertex_size(b, g),
- vertex_textsize=default_vertex_textsize(b, g),
-
- # edge
- edge_textsize=default_edge_textsize(b),
- edge_widths=default_edge_widths(b, g),
- edge_labels=default_edge_labels(b, g),
-
- # arrow
- arrow_show=default_arrow_show(b, g),
- arrow_size=default_arrow_size(b, g),
-
- # siteinds direction
- siteinds_direction=default_siteinds_direction(b, g),
- width=50,
- height=20,
-)
- edge_color = :blue # TODO: make into keyword argument
-
- edge_labels = ITensorVisualizationBase.edge_labels(b, edge_labels, g)
-
- node_pos = layout(g)
- edge_pos = [node_pos[src(edge)] => node_pos[dst(edge)] for edge in edges(g)]
- xmin = minimum(first.(node_pos))
- xmax = maximum(first.(node_pos))
- ymin = minimum(last.(node_pos))
- ymax = maximum(last.(node_pos))
-
- #vertex_size = vertex_size * (xmax - xmin)
-
- xscale = 0.1 * (xmax - xmin)
- yscale = max(0.3 * (ymax - ymin), 0.1 * xscale)
- xlim = [xmin - xscale, xmax + xscale]
- ylim = [ymin - yscale, ymax + yscale]
-
- site_vertex_shift = siteinds_direction
-
- #site_vertex_shift = -Point(0, 0.2 * abs(ylim[2] - ylim[1]))
- #site_vertex_shift = -Point(0, 0.001 * (xmax - xmin))
-
- # Initialize the plot
- plt = plot(b; xlim=xlim, ylim=ylim, width=width, height=height)
-
- # Add edges and nodes
- for (e_pos, e) in zip(edge_pos, edges(g))
- if is_self_loop(e)
- draw_edge!(b, plt, e_pos[1], e_pos[1] + site_vertex_shift; color=edge_color)
- else
- draw_edge!(b, plt, e_pos[1], e_pos[2]; color=edge_color)
- end
- end
-
- # Add edge labels and node labels
- for (n, e) in enumerate(edges(g))
- e_pos = edge_pos[n]
- edge_label = edge_labels[n]
- if is_self_loop(e)
- @assert e_pos[1] == e_pos[2]
- str_pos = e_pos[1] + 0.5 * site_vertex_shift
- annotate!(b, plt, str_pos..., edge_label)
- else
- annotate!(b, plt, mean(e_pos)..., edge_label)
- end
- end
- if length(vertex_labels) ≠ nv(g)
- throw(
- DimensionMismatch("Number of vertex labels must equal the number of vertices. Vertex
- labels $(vertex_labels) of length $(length(vertex_labels)) does not
- equal the number of vertices $(nv(g)).")
- )
- end
- for v in vertices(g)
- x, y = node_pos[v]
- node_label = vertex_labels[v]
- annotate!(b, plt, x, y, node_label)
- end
- return plt
-end
-
-end
diff --git a/ITensorUnicodePlots/test/Project.toml b/ITensorUnicodePlots/test/Project.toml
deleted file mode 100644
index 7362f55e92..0000000000
--- a/ITensorUnicodePlots/test/Project.toml
+++ /dev/null
@@ -1,8 +0,0 @@
-[deps]
-GeometryBasics = "5c1252a2-5f33-56bf-86c9-59e7332b4326"
-Graphs = "86223c79-3864-5bf0-83f7-82e725a168b6"
-ITensors = "9136182c-28ba-11e9-034c-db9fb085ebd5"
-LayeredLayouts = "f4a74d36-062a-4d48-97cd-1356bad1de4e"
-NetworkLayout = "46757867-2c16-5918-afeb-47bfcb05e46a"
-ReferenceTests = "324d217c-45ce-50fc-942e-d289b448e8cf"
-Test = "8dfed614-e22c-5e08-85e1-65c5234f0b40"
diff --git a/ITensorUnicodePlots/test/references/R.png b/ITensorUnicodePlots/test/references/R.png
deleted file mode 100644
index 0b5f829f2c..0000000000
Binary files a/ITensorUnicodePlots/test/references/R.png and /dev/null differ
diff --git a/ITensorUnicodePlots/test/references/R.txt b/ITensorUnicodePlots/test/references/R.txt
deleted file mode 100644
index fe65a0915a..0000000000
--- a/ITensorUnicodePlots/test/references/R.txt
+++ /dev/null
@@ -1,22 +0,0 @@
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀ERn2⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢀⠔⠁⡇⠑⠢⣀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡠⠊(20)'⠀⠑⠢⣀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀20⠊⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀10⠢⢄⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡠⠊⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠉⠢⢄⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡠⠊⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⣀⣉hn2⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢀⠜⠀⣀⣀⣀⣀⣀⡠⠤⠤⠤⠤2⠒⠒⠒⠒⠒⠉⠉⠉⠉⠉⠀⢀⠔⢹⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀ψn1n2⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢀⠔(2)'⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡠⠊⠁⠀⠀⠀⠀⠉⠢⢄⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀10⠔⠁⠀⠀⠀⢸⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢀⡠⠊⠀⠀⠀⠀⠀⠀⠀⠀⠀2⊗2⊗2⠀⠀⠀⠀⠀⠀⠀⠀⠀⡠⠒⠁⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢀20⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠈⠒⢄⡀⠀⠀⠀⠀⡠⠊⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀⡠⠔⠁⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢈⣑hn1⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⡠⠊⠀⠀⢀⣀⣀⣀⣀⡠⠤⠤10⠤⠒⠒⠒⠒⠒⠉⠉⠉⠉⠉⠁⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀ELn0⠉⠉⠉⠁⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀(2)'⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀(20)'⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠃⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
\ No newline at end of file
diff --git a/ITensorUnicodePlots/test/references/R1.png b/ITensorUnicodePlots/test/references/R1.png
deleted file mode 100644
index 42e8defb8a..0000000000
Binary files a/ITensorUnicodePlots/test/references/R1.png and /dev/null differ
diff --git a/ITensorUnicodePlots/test/references/R1.txt b/ITensorUnicodePlots/test/references/R1.txt
deleted file mode 100644
index c536db9979..0000000000
--- a/ITensorUnicodePlots/test/references/R1.txt
+++ /dev/null
@@ -1,22 +0,0 @@
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀ELn0⣀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⣷⠀⠀⠉⠉⠑⠒⠒⠤⠤⢄⣀⣀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⡇⢇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠉⠉⠑⠒⠒⠤⠤10⣀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀(20)'⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠉⠉⠉⠒⠒⠢⠤⠤⣀⣀⡀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⡇⠀⢱⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠈⠉⠉⠒⠒⠢⠤hn1⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⡇⠀⠀⢇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⣀⠤⠒⠉⢸⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠁⠀⠀⠘⡄⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢀⡠⠒⠉⠀⠀⠀⠀⢸⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀20⡀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢀⡠⠔⠊⠁⠀⠀⠀⠀(2)'⊗10⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢣⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢀⡠⠔⠊⠁⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢸⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠈⡆⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢀⡠2⊗2⊗2⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢸⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠸⡀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢀⡠⠔⠊⠁⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠈⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢣⠀⠀⠀⠀⠀⢀⡠⠔⠊⠁⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠈⡆⢀⡠⠔⠊⠁⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀ψn1n2⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢸⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀2⊗20⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢸⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
\ No newline at end of file
diff --git a/ITensorUnicodePlots/test/references/R2.png b/ITensorUnicodePlots/test/references/R2.png
deleted file mode 100644
index 69f5e8bcf5..0000000000
Binary files a/ITensorUnicodePlots/test/references/R2.png and /dev/null differ
diff --git a/ITensorUnicodePlots/test/references/R2.txt b/ITensorUnicodePlots/test/references/R2.txt
deleted file mode 100644
index ee0290e67a..0000000000
--- a/ITensorUnicodePlots/test/references/R2.txt
+++ /dev/null
@@ -1,22 +0,0 @@
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀T1⣀⣀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⣷⠀⠀⠉⠉⠑⠒⠒⠤⠤⢄⣀⣀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⡇⢇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠉⠉⠑⠒⠒⠤⠤20⣀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀20)'⊗(2)'⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠉⠉⠉⠒⠒⠢⠤⠤⣀⣀⡀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⡇⠀⢱⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠈⠉⠉⠒⠒⠢⠤T3⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⡇⠀⠀⢇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⣀⠤⠒⠉⢸⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠁⠀⠀⠘⡄⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢀⡠⠒⠉⠀⠀⠀⠀⢸⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀2⊗10⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢀⡠⠔⠊⠁⠀⠀⠀⠀⠀(20)'⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢣⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢀⡠⠔⠊⠁⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢸⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠈⡆⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢀⡠⠔10⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢸⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠸⡀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢀⡠⠔⠊⠁⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠈⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢣⠀⠀⠀⠀⠀⢀⡠⠔⠊⠁⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠈⡆⢀⡠⠔⠊⠁⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀T2⠁⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢸⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀(2)'⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢸⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
\ No newline at end of file
diff --git a/ITensorUnicodePlots/test/references/R_tags.txt b/ITensorUnicodePlots/test/references/R_tags.txt
deleted file mode 100644
index 96feb00b55..0000000000
--- a/ITensorUnicodePlots/test/references/R_tags.txt
+++ /dev/null
@@ -1,22 +0,0 @@
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀ERn2⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢀⠔⠁⡇⠑⠢⣀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀(20|"Link,l=3")'⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀(20|"Link,l(10|"Link,ham,l=3")⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡠⠊⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠉⠢⢄⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡠⠊⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⣀⣉hn2⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢀⠜⠀(2|"S=1/2,Site,n=3")⠉⠀⢀⠔⢹⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀ψn1n2⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀(2|"S=1/2,Site,⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡠⠊⠁⠀⠀⠀⠀⠉⠢⢄⠀⠀⠀⠀⠀⠀(10|"Link,ham,l=2")⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀(2|"S=1/2,Site,n=2")⊗(2|"X")⊗(2|"Y")⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀(20|"Link,l=1")⠀⠀⠀⠀⠀⠀⠀⠀⠈⠒⢄⡀⠀⠀⠀⠀⡠⠊⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀⡠⠔⠁⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢈⣑hn1⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⡠⠊⠀⠀(10|"Link,ham,l=1")⠉⠉⠁⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀ELn0⠉⠉⠉⠁⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀(2|"S=1/2,Site,n=2")'⠀⠀⠀⠀⠀⠀⠀
- ⠀"Link,l=1")'⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠃⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
\ No newline at end of file
diff --git a/ITensorUnicodePlots/test/references/T.txt b/ITensorUnicodePlots/test/references/T.txt
deleted file mode 100644
index 3f40127f59..0000000000
--- a/ITensorUnicodePlots/test/references/T.txt
+++ /dev/null
@@ -1,22 +0,0 @@
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀ELn0⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀(20)'⊗10⊗20⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
\ No newline at end of file
diff --git a/ITensorUnicodePlots/test/references/grid.png b/ITensorUnicodePlots/test/references/grid.png
deleted file mode 100644
index 2e52dbfd91..0000000000
Binary files a/ITensorUnicodePlots/test/references/grid.png and /dev/null differ
diff --git a/ITensorUnicodePlots/test/references/tn.png b/ITensorUnicodePlots/test/references/tn.png
deleted file mode 100644
index 07ece15c93..0000000000
Binary files a/ITensorUnicodePlots/test/references/tn.png and /dev/null differ
diff --git a/ITensorUnicodePlots/test/references/tn.txt b/ITensorUnicodePlots/test/references/tn.txt
deleted file mode 100644
index 11b56f4640..0000000000
--- a/ITensorUnicodePlots/test/references/tn.txt
+++ /dev/null
@@ -1,22 +0,0 @@
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀tn₅⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢀⠔⠁⡇⠑⠢⣀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡠⠊(20)'⠀⠑⠢⣀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀20⠊⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀10⠢⢄⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡠⠊⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠉⠢⢄⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡠⠊⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⣀⣉tn₄⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢀⠜⠀⣀⣀⣀⣀⣀⡠⠤⠤⠤⠤2⠒⠒⠒⠒⠒⠉⠉⠉⠉⠉⠀⢀⠔⢹⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢀tn₂⣉⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢀⠔(2)'⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡠⠊⠁⠀⠀⠀⠀⠉⠢⢄⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀10⠔⠁⠀⠀⠀⢸⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢀⡠⠊⠀⠀⠀⠀⠀⠀⠀⠀⠀2⊗2⊗2⠀⠀⠀⠀⠀⠀⠀⠀⠀⡠⠒⠁⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢀20⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠈⠒⢄⡀⠀⠀⠀⠀⡠⠊⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀⡠⠔⠁⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢈⣑tn₃⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⡠⠊⠀⠀⢀⣀⣀⣀⣀⡠⠤⠤10⠤⠒⠒⠒⠒⠒⠉⠉⠉⠉⠉⠁⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀tn₁⠉⠉⠉⠁⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀(2)'⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀(20)'⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠃⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
\ No newline at end of file
diff --git a/ITensorUnicodePlots/test/runtests.jl b/ITensorUnicodePlots/test/runtests.jl
deleted file mode 100644
index bfce2bffaa..0000000000
--- a/ITensorUnicodePlots/test/runtests.jl
+++ /dev/null
@@ -1,14 +0,0 @@
-using ITensors
-using ITensorUnicodePlots
-using Test
-
-starts_and_ends_with(file, st, en) = startswith(file, st) && endswith(file, en)
-starts_and_ends_with(st, en) = file -> starts_and_ends_with(file, st, en)
-
-test_path = joinpath(@__DIR__)
-test_files = filter(starts_and_ends_with("test_", ".jl"), readdir(test_path))
-@testset "ITensorUnicodePlots.jl" for file in test_files
- file_path = joinpath(test_path, file)
- println("Running test $(file_path)")
- include(file_path)
-end
diff --git a/ITensorUnicodePlots/test/test_basics.jl b/ITensorUnicodePlots/test/test_basics.jl
deleted file mode 100644
index 40d03008d2..0000000000
--- a/ITensorUnicodePlots/test/test_basics.jl
+++ /dev/null
@@ -1,60 +0,0 @@
-using ITensors
-using ITensorUnicodePlots
-using ReferenceTests
-using Test
-
-@testset "Basic test for UnicodePlots backend" begin
- extension = "txt"
-
- N = 10
- s(n) = Index([QN("Sz", 0) => 1, QN("Sz", 1) => 1]; tags="S=1/2,Site,n=$n")
- l(n) = Index([QN("Sz", 0) => 10, QN("Sz", 1) => 10]; tags="Link,l=$n")
- h(n) = Index([QN("Sz", 0) => 5, QN("Sz", 1) => 5]; tags="ham,Link,l=$n")
- s⃗ = [s(n) for n in 1:N]
- l⃗ = [l(n) for n in 1:(N - 1)]
- h⃗ = [h(n) for n in 1:(N - 1)]
-
- # Add some more indices between two of the tensors
- x = Index([QN("Sz", 0) => 2]; tags="X")
- y = Index([QN("Sz", 0) => 2]; tags="Y")
-
- n = 2
- ψn1n2 = randomITensor(l⃗[n - 1], s⃗[n], s⃗[n + 1], l⃗[n + 1], dag(x), dag(y))
- hn1 = randomITensor(dag(h⃗[n - 1]), s⃗[n]', dag(s⃗[n]), h⃗[n], x, y)
- hn2 = randomITensor(dag(h⃗[n]), s⃗[n + 1]', dag(s⃗[n + 1]), h⃗[n + 1])
- ELn0 = randomITensor(l⃗[n - 1]', h⃗[n - 1], dag(l⃗[n - 1]))
- ERn2 = randomITensor(l⃗[n + 1]', dag(h⃗[n + 1]), dag(l⃗[n + 1]))
-
- tn = [ELn0, ψn1n2, hn1, hn2, ERn2]
-
- R = @visualize ELn0 * ψn1n2 * hn1 * hn2 * ERn2
- R1 = @visualize ELn0 * ψn1n2 * hn1
- R2 = @visualize R1 * hn2 * ERn2 vertex_labels = ["T1", "T2", "T3"]
- tn2 = @visualize tn
- T = @visualize ELn0
-
- @test R ≈ ELn0 * ψn1n2 * hn1 * hn2 * ERn2
- @test R1 ≈ ELn0 * ψn1n2 * hn1
- @test R2 ≈ ELn0 * ψn1n2 * hn1 * hn2 * ERn2
- @test all(tn .== tn2)
- @test T == ELn0
-
- R = @visualize figR ELn0 * ψn1n2 * hn1 * hn2 * ERn2
- R_tags = @visualize figR_tags ELn0 * ψn1n2 * hn1 * hn2 * ERn2 edge_labels = (tags=true,)
- R1 = @visualize figR1 ELn0 * ψn1n2 * hn1
- R2 = @visualize figR2 R1 * hn2 * ERn2 vertex_labels = ["T1", "T2", "T3"]
- T = @visualize figT ELn0
-
- fig_tn = @visualize_noeval tn
-
- by = isequal
-
- @test_reference "references/R.$extension" figR by = by
- @test_reference "references/R_tags.$extension" figR_tags by = by
- @test_reference "references/R1.$extension" figR1 by = by
- @test_reference "references/R2.$extension" figR2 by = by
- @test_reference "references/tn.$extension" fig_tn by = by
- @test_reference "references/T.$extension" figT by = by
-
- @test_throws DimensionMismatch @visualize fig R1 * hn2 * ERn2 vertex_labels = ["T1", "T2"]
-end
diff --git a/ITensorUnicodePlots/test/test_examples.jl b/ITensorUnicodePlots/test/test_examples.jl
deleted file mode 100644
index 8335f08bdc..0000000000
--- a/ITensorUnicodePlots/test/test_examples.jl
+++ /dev/null
@@ -1,14 +0,0 @@
-using Test
-
-@testset "Examples" begin
- examples_path = joinpath(@__DIR__, "..", "examples")
- example_files = filter(starts_and_ends_with("ex_", ".jl"), readdir(examples_path))
- for file in example_files
- file_path = joinpath(examples_path, file)
- println("Testing file $(file_path)")
- empty!(ARGS)
- push!(ARGS, "false")
- @test !isnothing(include(file_path))
- empty!(ARGS)
- end
-end
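The example-runner deleted above communicates with each example script through `ARGS`, pushing `"false"` so examples skip displaying figures under test. A minimal sketch of that convention (the `run_example` helper is illustrative, not part of the package):

```julia
# Sketch of the ARGS-based convention used by the deleted runner: push a flag
# onto ARGS before including the example, then clear ARGS afterwards. Each
# example script may read ARGS[1] to decide whether to display its figure.
function run_example(path::AbstractString; display_figure::Bool=false)
    empty!(ARGS)
    push!(ARGS, string(display_figure))
    result = include(path)  # include evaluates the script in the current module
    empty!(ARGS)
    return result
end
```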
diff --git a/ITensorVisualizationBase/.JuliaFormatter.toml b/ITensorVisualizationBase/.JuliaFormatter.toml
deleted file mode 100644
index 08f664cdb9..0000000000
--- a/ITensorVisualizationBase/.JuliaFormatter.toml
+++ /dev/null
@@ -1,2 +0,0 @@
-style = "blue"
-indent = 2
diff --git a/ITensorVisualizationBase/LICENSE b/ITensorVisualizationBase/LICENSE
deleted file mode 100644
index d53452f0d9..0000000000
--- a/ITensorVisualizationBase/LICENSE
+++ /dev/null
@@ -1,21 +0,0 @@
-MIT License
-
-Copyright (c) 2021 Matthew Fishman and contributors
-
-Permission is hereby granted, free of charge, to any person obtaining a copy
-of this software and associated documentation files (the "Software"), to deal
-in the Software without restriction, including without limitation the rights
-to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
-copies of the Software, and to permit persons to whom the Software is
-furnished to do so, subject to the following conditions:
-
-The above copyright notice and this permission notice shall be included in all
-copies or substantial portions of the Software.
-
-THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
-IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
-FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
-AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
-LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
-OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
-SOFTWARE.
diff --git a/ITensorVisualizationBase/NEWS.md b/ITensorVisualizationBase/NEWS.md
deleted file mode 100644
index 4426f5c097..0000000000
--- a/ITensorVisualizationBase/NEWS.md
+++ /dev/null
@@ -1,61 +0,0 @@
-This file is a (mostly) comprehensive list of changes made in each release of ITensorVisualizationBase.jl. For a completely comprehensive but more verbose list, see the [commit history on GitHub](https://github.com/ITensor/ITensors.jl/commits/main/ITensorVisualizationBase).
-
-While we are in v0.x of the package, we will follow the convention that updating from `v0.x.y` to `v0.x.(y+1)` (for example v0.1.15 to v0.1.16) should not break your code, unless you are using internal/undocumented features, while updating from `v0.x.y` to `v0.(x+1).y` might break your code, though we will try to add deprecation warnings when possible, such as for simple cases where the name of a function changes.
-
-Note that as of Julia v1.5, in order to see deprecation warnings you will need to start Julia with `julia --depwarn=yes` (previously they were on by default). Please run your code like this before upgrading between minor versions of the code (for example from v0.1.41 to v0.2.0).
-
-After we release v1 of the package, we will start following [semantic versioning](https://semver.org).
-
-ITensors v0.1.5 Release Notes
-=============================
-
-Bugs:
-
-Enhancements:
-
-- Bump to AbstractTrees 0.4 (#1031)
-
-ITensors v0.1.4 Release Notes
-=============================
-
-Bugs:
-
-Enhancements:
-
-- Generalize edge labels for more general vertices (#907)
-
-ITensors v0.1.3 Release Notes
-=============================
-
-Bugs:
-
-Enhancements:
-
-- Bump to ITensors 0.3 (#880)
-
-ITensors v0.1.2 Release Notes
-=============================
-
-Bugs:
-
-Enhancements:
-
-- Remove subscript from single tensor visualization. Show plevs by default. (#841)
-
-ITensors v0.1.1 Release Notes
-=============================
-
-Bugs:
-
-Enhancements:
-
-- Generalize `ITensorVisualizationBase.visualize` to make it easier to overload for new types (#802)
-
-ITensors v0.1.0 Release Notes
-=============================
-
-Bugs:
-
-Enhancements:
-
-- Register ITensorVisualizationBase package, code in ITensors.jl repository
diff --git a/ITensorVisualizationBase/Project.toml b/ITensorVisualizationBase/Project.toml
deleted file mode 100644
index f19d74c181..0000000000
--- a/ITensorVisualizationBase/Project.toml
+++ /dev/null
@@ -1,33 +0,0 @@
-name = "ITensorVisualizationBase"
-uuid = "cd2553d2-8bef-4d93-8a38-c62f17d5ad23"
-authors = ["Matthew Fishman and contributors"]
-version = "0.1.7"
-
-[deps]
-AbstractTrees = "1520ce14-60c1-5f80-bbc7-55ef81b5835c"
-Compat = "34da2185-b29b-5c13-b0c7-acf172513d20"
-GeometryBasics = "5c1252a2-5f33-56bf-86c9-59e7332b4326"
-Graphs = "86223c79-3864-5bf0-83f7-82e725a168b6"
-ITensors = "9136182c-28ba-11e9-034c-db9fb085ebd5"
-LinearAlgebra = "37e2e46d-f89d-539d-b4ee-838fcccc9c8e"
-MetaGraphs = "626554b9-1ddb-594c-aa3c-2596fe9399a5"
-NetworkLayout = "46757867-2c16-5918-afeb-47bfcb05e46a"
-SparseArrays = "2f01184e-e22b-5df5-ae63-d93ebab69eaf"
-Statistics = "10745b16-79ce-11e8-11f9-7d13ad32a3b2"
-
-[compat]
-AbstractTrees = "0.4"
-Compat = "3.40.0, 4"
-GeometryBasics = "0.4.1"
-Graphs = "1.4.1"
-ITensors = "0.2.12, 0.3, 0.4, 0.5"
-MetaGraphs = "0.7.1"
-NetworkLayout = "0.4.3"
-Statistics = "1"
-julia = "1.6"
-
-[extras]
-Test = "8dfed614-e22c-5e08-85e1-65c5234f0b40"
-
-[targets]
-test = ["Test"]
diff --git a/ITensorVisualizationBase/README.md b/ITensorVisualizationBase/README.md
deleted file mode 100644
index 06d97ae558..0000000000
--- a/ITensorVisualizationBase/README.md
+++ /dev/null
@@ -1,114 +0,0 @@
-# ITensorVisualizationBase
-
-This is an internal package providing common code for defining backends for visualizing tensor networks of ITensors. It is only an interface package and does not provide a concrete visualization implementation (by default, it does nothing); you will need to load a visualization backend, such as `ITensorUnicodePlots` or `ITensorGLMakie`. The main purpose is to use it with the [ITensors.jl](https://github.com/ITensor/ITensors.jl) package to view and debug tensor network contractions, for example:
-```julia
-using ITensors
-
-# Load a visualization backend, which will reexport the interface
-# of ITensorVisualizationBase automatically
-using ITensorUnicodePlots
-
-# ITensorVisualizationBase handles the logic of switching
-# between backends.
-@show ITensorVisualizationBase.get_backend()
-
-i = Index(2, "i")
-j = Index(10, "j")
-k = Index(40, "k")
-l = Index(40, "l")
-m = Index(40, "m")
-A = randomITensor(i, j, k)
-B = randomITensor(i, j, l, m)
-C = randomITensor(k, l)
-ABC = @visualize A * B * C
-```
-This will execute the contraction and output
-```julia
-ITensorVisualizationBase.get_backend() = ITensorVisualizationBase.Backend{:UnicodePlots}()⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀A⣀⣀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⢱⠀⠀⠉⠉⠑⠒⠒⠤⠤⢄⣀⣀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⢇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠉⠉⠑⠒⠒⠤⠤40⣀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠘⡄⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠉⠉⠉⠒⠒⠢⠤⠤⣀⣀⡀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⢱⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠈⠉⠉⠒⠒⠢⠤⠤C⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀⢇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⣀⠤⠒⠉⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀⠘⡄⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢀⡠⠒⠉⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀2⊗10⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢀⡠⠔⠊⠁⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢣⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢀⡠⠔⠊⠁⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠈⡆⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢀⡠⠔40⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠸⡀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢀⡠⠔⠊⠁⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢣⠀⠀⠀⠀⠀⢀⡠⠔⠊⠁⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠈⡆⢀⡠⠔⠊⠁⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀B⠁⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢸⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀40⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢸⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
-```
-You can show the visualization with tags with:
-```julia
-ABC = @visualize A * B * C edge_labels=(tags=true,)
-```
-```julia
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀A⣀⣀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⢱⠀⠀⠉⠉⠑⠒⠒⠤⠤⢄⣀⣀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⢇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠉⠉⠑⠒(40|"k")⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠘⡄⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠉⠉⠉⠒⠒⠢⠤⠤⣀⣀⡀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⢱⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠈⠉⠉⠒⠒⠢⠤⠤C⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀⢇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⣀⠤⠒⠉⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀⠘⡄⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢀⡠⠒⠉⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀(2|"i")⊗(10|"j")⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢀⡠⠔⠊⠁⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢣⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢀⡠⠔⠊⠁⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠈⡆⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀(40|"l")⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠸⡀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢀⡠⠔⠊⠁⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢣⠀⠀⠀⠀⠀⢀⡠⠔⠊⠁⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠈⡆⢀⡠⠔⠊⠁⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀B⠁⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢸⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀(40|"m")⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢸⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
- ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
-```
-
-In order to output a more sophisticated interactive visualization,
-you can load a Makie-based backend such as `ITensorGLMakie`:
-```julia
-using ITensorGLMakie
-
-ABC = @visualize A * B * C edge_labels=(tags=true,);
-```
-A window like the following should appear:
-![alt text](assets/ITensorVisualization_A_B_C.png)
-
-You can switch back to another backend like the following:
-```julia
-julia> ITensorVisualizationBase.set_backend!("UnicodePlots");
-
-julia> ITensorVisualizationBase.get_backend()
-ITensorVisualizationBase.Backend{:UnicodePlots}()
-
-julia> ABC = @visualize A * B * C edge_labels=(tags=true,) # The visualization will now use the UnicodePlots backend
-[...]
-```
-The visualization makes an initial guess for the locations of the tensors (using [NetworkLayout.jl](https://github.com/JuliaGraphs/NetworkLayout.jl)), and then allows users to interactively move the tensors to better locations. You can move the tensors and external indices (the square and circle nodes of the network) by left clicking on a node and dragging it to a new location. You can also right click and drag to translate the entire diagram, and scroll to zoom in and out.
-
-In addition, you can visualize multiple steps of a contraction as follows:
-```julia
-julia> ITensorVisualizationBase.set_backend!("Makie");
-
-julia> ITensorVisualizationBase.get_backend()
-ITensorVisualizationBase.Backend{:Makie}()
-
-julia> AB = @visualize fig A * B edge_labels=(tags=true,);
-
-julia> ABC = @visualize! fig[1, 2] AB * C edge_labels=(tags=true,);
-
-julia> fig
-```
-![alt text](assets/ITensorVisualization_A_B_C_sequence.png)
diff --git a/ITensorVisualizationBase/assets/ITensorVisualization_A_B_C.png b/ITensorVisualizationBase/assets/ITensorVisualization_A_B_C.png
deleted file mode 100644
index c971d788c0..0000000000
Binary files a/ITensorVisualizationBase/assets/ITensorVisualization_A_B_C.png and /dev/null differ
diff --git a/ITensorVisualizationBase/assets/ITensorVisualization_A_B_C_sequence.png b/ITensorVisualizationBase/assets/ITensorVisualization_A_B_C_sequence.png
deleted file mode 100644
index 2c37cd1c23..0000000000
Binary files a/ITensorVisualizationBase/assets/ITensorVisualization_A_B_C_sequence.png and /dev/null differ
diff --git a/ITensorVisualizationBase/examples/ex_2d_tensor_network_layered.jl b/ITensorVisualizationBase/examples/ex_2d_tensor_network_layered.jl
deleted file mode 100644
index 8a652b8b43..0000000000
--- a/ITensorVisualizationBase/examples/ex_2d_tensor_network_layered.jl
+++ /dev/null
@@ -1,10 +0,0 @@
-using ITensors
-using ITensorVisualizationBase
-using LayeredLayouts
-using Graphs
-
-tn = itensornetwork(grid((4, 4)); linkspaces=3)
-layout(g) = layered_layout(solve_positions(Zarate(), g))
-@visualize fig tn arrow_show = true layout = layout
-
-fig
diff --git a/ITensorVisualizationBase/examples/ex_dmrg.jl b/ITensorVisualizationBase/examples/ex_dmrg.jl
deleted file mode 100644
index f91751d730..0000000000
--- a/ITensorVisualizationBase/examples/ex_dmrg.jl
+++ /dev/null
@@ -1,34 +0,0 @@
-using ITensors
-using ITensorVisualizationBase
-
-N = 10
-sites(n) = Index([QN("Sz", 0) => 1, QN("Sz", 1) => 1]; tags="S=1/2,Site,n=$n")
-l(n) = Index([QN("Sz", 0) => 10, QN("Sz", 1) => 10]; tags="Link,l=$n")
-h(n) = Index([QN("Sz", 0) => 5, QN("Sz", 1) => 5]; tags="ham,Link,l=$n")
-s⃗ = [sites(n) for n in 1:N]
-l⃗ = [l(n) for n in 1:(N - 1)]
-h⃗ = [h(n) for n in 1:(N - 1)]
-
-# Add some more indices between two of the tensors
-x = Index([QN("Sz", 0) => 2]; tags="X")
-y = Index([QN("Sz", 0) => 2]; tags="Y")
-
-n = 2
-ψn1n2 = randomITensor(l⃗[n - 1], s⃗[n], s⃗[n + 1], l⃗[n + 1], dag(x), dag(y))
-hn1 = randomITensor(dag(h⃗[n - 1]), s⃗[n]', dag(s⃗[n]), h⃗[n], x, y)
-hn2 = randomITensor(dag(h⃗[n]), s⃗[n + 1]', dag(s⃗[n + 1]), h⃗[n + 1])
-ELn0 = randomITensor(l⃗[n - 1]', h⃗[n - 1], dag(l⃗[n - 1]))
-ERn2 = randomITensor(l⃗[n + 1]', dag(h⃗[n + 1]), dag(l⃗[n + 1]))
-
-edge_labels = (; plevs=true)
-
-R = @visualize fig1 ELn0 * ψn1n2 * hn1 * hn2 * ERn2 edge_labels = edge_labels vertex_size =
- 50
-@show R ≈ ELn0 * ψn1n2 * hn1 * hn2 * ERn2
-
-# Split it up into multiple contractions
-R1 = @visualize fig2 ELn0 * ψn1n2 * hn1 edge_labels = edge_labels vertex_size = 50
-R2 = @visualize fig3 R1 * hn2 * ERn2 edge_labels = edge_labels vertex_size = 50
-@show R2 ≈ ELn0 * ψn1n2 * hn1 * hn2 * ERn2
-
-fig1, fig2, fig3
diff --git a/ITensorVisualizationBase/examples/ex_grid_layout.jl b/ITensorVisualizationBase/examples/ex_grid_layout.jl
deleted file mode 100644
index 9795c23484..0000000000
--- a/ITensorVisualizationBase/examples/ex_grid_layout.jl
+++ /dev/null
@@ -1,12 +0,0 @@
-using ITensors
-using ITensorVisualizationBase
-using GeometryBasics
-using Graphs
-using NetworkLayout
-
-N = 10
-g = grid((N,))
-tn = itensornetwork(g; linkspaces=10, sitespaces=2)
-@visualize fig tn siteinds_direction = Point(1, -0.5) layout = SquareGrid(; cols=1) width =
- 20 height = 50
-fig
diff --git a/ITensorVisualizationBase/examples/ex_itensor_graph_makie.jl b/ITensorVisualizationBase/examples/ex_itensor_graph_makie.jl
deleted file mode 100644
index 802d0efa79..0000000000
--- a/ITensorVisualizationBase/examples/ex_itensor_graph_makie.jl
+++ /dev/null
@@ -1,9 +0,0 @@
-using ITensors
-using ITensorVisualizationBase
-using Graphs
-
-g = grid((5,))
-tn = itensornetwork(g; linkspaces=10, sitespaces=2)
-@visualize fig tn
-
-fig
diff --git a/ITensorVisualizationBase/examples/ex_itensor_graph_unicode.jl b/ITensorVisualizationBase/examples/ex_itensor_graph_unicode.jl
deleted file mode 100644
index 802d0efa79..0000000000
--- a/ITensorVisualizationBase/examples/ex_itensor_graph_unicode.jl
+++ /dev/null
@@ -1,9 +0,0 @@
-using ITensors
-using ITensorVisualizationBase
-using Graphs
-
-g = grid((5,))
-tn = itensornetwork(g; linkspaces=10, sitespaces=2)
-@visualize fig tn
-
-fig
diff --git a/ITensorVisualizationBase/examples/ex_qn_mps.jl b/ITensorVisualizationBase/examples/ex_qn_mps.jl
deleted file mode 100644
index 2aac387598..0000000000
--- a/ITensorVisualizationBase/examples/ex_qn_mps.jl
+++ /dev/null
@@ -1,13 +0,0 @@
-using ITensors
-using ITensorVisualizationBase
-
-s = siteinds("S=1/2", 5; conserve_qns=true)
-ψ = randomMPS(s, n -> isodd(n) ? "↑" : "↓"; linkdims=2)
-orthogonalize!(ψ, 2)
-ψdag = prime(linkinds, dag(ψ))
-tn = [ψ..., ψdag...]
-
-edge_labels = (; plevs=true, qns=true)
-@visualize fig tn edge_labels = edge_labels edge_textsize = 20
-
-fig
diff --git a/ITensorVisualizationBase/examples/ex_quantum_circuit.jl b/ITensorVisualizationBase/examples/ex_quantum_circuit.jl
deleted file mode 100644
index 7a48cf8efa..0000000000
--- a/ITensorVisualizationBase/examples/ex_quantum_circuit.jl
+++ /dev/null
@@ -1,32 +0,0 @@
-using ITensors
-using ITensorVisualizationBase
-using LayeredLayouts
-using Graphs
-
-N = 10
-layers = 10
-ndelete = 0
-
-s = siteinds("Qubit", N)
-layer(N, start) = [("CX", i, i + 1) for i in start:2:(N - 1)]
-layer(N) = append!(layer(N, 1), layer(N, 2))
-layer_N = layer(N)
-gates = []
-for _ in 1:layers
- append!(gates, layer_N)
-end
-
-for _ in 1:ndelete
- deleteat!(gates, rand(eachindex(gates)))
-end
-
-U, s̃ = circuit_network(gates, s)
-ψ = prod(MPS(s))
-ψ̃ = prod(MPS(s̃))
-tn = [ψ, U..., ψ̃]
-
-edge_labels = (; plevs=true)
-layout(g) = layered_layout(solve_positions(Zarate(), g))
-@visualize fig tn arrow_show = true edge_labels = edge_labels layout = layout
-
-fig
diff --git a/ITensorVisualizationBase/examples/ex_visualize_3d.jl b/ITensorVisualizationBase/examples/ex_visualize_3d.jl
deleted file mode 100644
index de6104927f..0000000000
--- a/ITensorVisualizationBase/examples/ex_visualize_3d.jl
+++ /dev/null
@@ -1,9 +0,0 @@
-using ITensors
-using ITensorVisualizationBase
-using Graphs
-
-tn = itensornetwork(grid((3, 3, 3)))
-edge_labels = (; dims=false)
-@visualize fig tn ndims = 3 edge_labels = edge_labels vertex_size = 400
-
-fig
diff --git a/ITensorVisualizationBase/examples/notest_ex_2d_circuit.jl b/ITensorVisualizationBase/examples/notest_ex_2d_circuit.jl
deleted file mode 100644
index fe80995222..0000000000
--- a/ITensorVisualizationBase/examples/notest_ex_2d_circuit.jl
+++ /dev/null
@@ -1,26 +0,0 @@
-using ITensors
-using ITensorVisualizationBase
-using Graphs
-using LayeredLayouts
-using PastaQ: randomcircuit
-
-Nx, Ny = 3, 3
-N = Nx * Ny
-# TODO: change to (Nx, Ny) with PastaQ v0.0.16
-gates = randomcircuit(
- Nx, Ny; depth=4, twoqubitgates="CX", onequbitgates="Rn", layered=false, rotated=false
-)
-
-s = siteinds("Qubit", N)
-
-U, s̃ = circuit_network(gates, s)
-ψ = MPS(s)
-ψ̃ = MPS(s̃)
-tn = [prod(ψ), U..., prod(ψ̃)]
-
-edge_labels = (; plevs=true)
-layout(g) = layered_layout(solve_positions(Zarate(), g))
-@visualize fig tn arrow_show = true edge_labels = edge_labels layout = layout edge_textsize =
- 20
-
-fig
diff --git a/ITensorVisualizationBase/examples/notest_ex_qft_circuit.jl b/ITensorVisualizationBase/examples/notest_ex_qft_circuit.jl
deleted file mode 100644
index f915c7bba8..0000000000
--- a/ITensorVisualizationBase/examples/notest_ex_qft_circuit.jl
+++ /dev/null
@@ -1,22 +0,0 @@
-using ITensors
-using ITensorVisualizationBase
-using Graphs
-using LayeredLayouts
-using PastaQ: qft
-
-N = 4
-gates = qft(N)
-
-s = siteinds("Qubit", N)
-
-U, s̃ = circuit_network(gates, s)
-ψ = MPS(s)
-ψ̃ = MPS(s̃)
-tn = [ψ..., U..., ψ̃...]
-
-edge_labels = (; tags=true, plevs=true)
-layout(g) = layered_layout(solve_positions(Zarate(), g))
-@visualize fig tn arrow_show = true edge_labels = edge_labels edge_textsize = 20 layout =
- layout
-
-fig
diff --git a/ITensorVisualizationBase/src/ITensorVisualizationBase.jl b/ITensorVisualizationBase/src/ITensorVisualizationBase.jl
deleted file mode 100644
index 46c532544b..0000000000
--- a/ITensorVisualizationBase/src/ITensorVisualizationBase.jl
+++ /dev/null
@@ -1,64 +0,0 @@
-module ITensorVisualizationBase
-
-using AbstractTrees
-using Compat
-using GeometryBasics
-using Graphs
-using ITensors
-using ITensors.ITensorVisualizationCore
-using LinearAlgebra
-using MetaGraphs
-using NetworkLayout
-using SparseArrays
-using Statistics
-
-# Avoid conflict between `Graphs.contract` and `ITensors.contract`
-using Graphs:
- Graphs,
- AbstractEdge,
- AbstractGraph,
- SimpleGraph,
- SimpleDiGraph,
- add_edge!,
- add_vertex!,
- all_neighbors,
- dst,
- edges,
- ne,
- neighbors,
- nv,
- src,
- vertices
-
-using ITensors: data, QNIndex
-
-import ITensors.ITensorVisualizationCore: visualize, visualize!, visualize_sequence
-
-export @visualize,
- @visualize!,
- @visualize_noeval,
- @visualize_noeval!,
- @visualize_sequence,
- @visualize_sequence_noeval,
- circuit_network,
- itensornetwork,
- layered_layout,
- IndexLabels
-
-# Some general graph functionality
-include("graphs.jl")
-
-# Some general layout functionality
-include("layered_layout.jl")
-
-# Backends interface
-include("backends_interface.jl")
-include("defaults.jl")
-
-# Conversion betweens graphs and ITensor networks
-include("itensor_graph.jl")
-
-# Visualizing ITensor networks
-include("visualize.jl")
-
-end
diff --git a/ITensorVisualizationBase/src/backends_interface.jl b/ITensorVisualizationBase/src/backends_interface.jl
deleted file mode 100644
index 9d2476d45a..0000000000
--- a/ITensorVisualizationBase/src/backends_interface.jl
+++ /dev/null
@@ -1,42 +0,0 @@
-struct Backend{backend} end
-
-Backend(b::Backend) = b
-Backend(s::AbstractString) = Backend{Symbol(s)}()
-Backend(s::Symbol) = Backend{s}()
-Backend(s::Nothing) = Backend{s}()
-backend(::Backend{N}) where {N} = N
-Backend() = Backend{Symbol()}()
-
-macro Backend_str(s)
- return Backend{Symbol(s)}
-end
-
-const current_backend = Ref{Union{Nothing,Backend}}(nothing)
-
-visualize(::Backend{nothing}, args...; kwargs...) = nothing
-
-set_backend!(::Nothing) = (current_backend[] = nothing)
-function set_backend!(backend::Backend)
- original_backend = current_backend[]
- current_backend[] = backend
- return original_backend
-end
-set_backend!(backend::Union{Symbol,String}) = set_backend!(Backend(backend))
-
-get_backend() = isnothing(current_backend[]) ? default_backend() : current_backend[]
-
-function plot(::Backend{T}, args...; kwargs...) where {T}
- return error("plot not implemented for backend type $T.")
-end
-function draw_edge!(::Backend{T}, args...; kwargs...) where {T}
- return error("draw_edge! not implemented for backend type $T.")
-end
-function annotate!(::Backend{T}, args...; kwargs...) where {T}
- return error("annotate! not implemented for backend type $T.")
-end
-
-function translate_color(::Backend{T}, color) where {T}
- return error("translate_color not implemented for backend type $T and color $color")
-end
-
-point_to_line(v1, v2) = ([v1[1], v2[1]], [v1[2], v2[2]])
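The `Backend{backend}` type deleted above is a singleton parameterized by a compile-time symbol, so backend-specific methods are selected by dispatch, with a `Ref` holding the currently active backend. A minimal self-contained sketch of the same pattern (the names `Renderer`, `render`, and `set_renderer!` are illustrative, not part of the package):

```julia
# Symbol-parameterized singleton dispatch, as in `Backend{backend}` above.
struct Renderer{name} end
Renderer(s::AbstractString) = Renderer{Symbol(s)}()

# Generic fallback errors; each concrete backend overloads the method.
render(::Renderer{T}) where {T} = error("render not implemented for backend $T")
render(::Renderer{:Unicode}) = "rendered with Unicode backend"

# A Ref holds the currently selected backend, mirroring `current_backend`.
const current_renderer = Ref{Union{Nothing,Renderer}}(nothing)
set_renderer!(s::AbstractString) = (current_renderer[] = Renderer(s))

set_renderer!("Unicode")
render(current_renderer[])  # == "rendered with Unicode backend"
```

Because the backend name is a type parameter rather than a field, the method lookup is resolved by the compiler and adding a new backend requires only new method definitions, not changes to the interface package.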
diff --git a/ITensorVisualizationBase/src/defaults.jl b/ITensorVisualizationBase/src/defaults.jl
deleted file mode 100644
index a43d72b59a..0000000000
--- a/ITensorVisualizationBase/src/defaults.jl
+++ /dev/null
@@ -1,247 +0,0 @@
-#############################################################################
-# backend
-#
-
-default_backend() = Backend(nothing)
-
-#############################################################################
-# vertex labels
-#
-
-function subscript_char(n::Integer)
- @assert 0 ≤ n ≤ 9
- return Char(0x2080 + n)
-end
-
-function subscript(n::Integer)
- ss = prod(Iterators.reverse((subscript_char(d) for d in digits(abs(n)))))
- if n < 0
- ss = "₋" * ss
- end
- return ss
-end
-
-subscript(n) = string(n)
-
-default_vertex_labels_prefix(b::Backend, g) = "T"
-function default_vertex_labels(
- b::Backend, g::AbstractGraph, vertex_labels_prefix=default_vertex_labels_prefix(b)
-)
- return [string(vertex_labels_prefix, subscript(v)) for v in vertices(g)]
-end
-
-default_vertex_size(b::Backend, g) = 60
-default_vertex_textsize(b::Backend, g) = 20
-
-# TODO: customizable vertex marker
-# nodeshapes="●", # ●, ▶, ◀, ■, █, ◩, ◪, ⧄, ⧅, ⦸, ⊘, ⬔, ⬕, ⬛, ⬤, 🔲, 🔳, 🔴, 🔵, ⚫
-# edgeshapes="—", # ⇵, ⇶, ⇄, ⇅, ⇆, ⇇, ⇈, ⇉, ⇊, ⬱, —, –, ⟵, ⟶, ➖, −, ➡, ⬅, ⬆, ⬇
-
-#############################################################################
-# edge labels
-#
-
-default_edge_textsize(b::Backend) = 30
-
-function default_edge_labels(b::Backend, g::AbstractGraph)
- return fill("", ne(g))
-end
-
-function default_edge_labels(b::Backend, g::AbstractMetaGraph)
- return IndexLabels(b)
-end
-
-default_dims(b::Backend) = true
-default_tags(b::Backend) = false
-default_ids(b::Backend) = false
-default_plevs(b::Backend) = true
-default_qns(b::Backend) = false
-default_newlines(b::Backend) = true
-
-abstract type AbstractEdgeLabels end
-
-(l::AbstractEdgeLabels)(g::AbstractGraph) = edge_labels(l, g)
-
-struct IndexLabels <: AbstractEdgeLabels
- dims::Bool
- tags::Bool
- ids::Bool
- plevs::Bool
- qns::Bool
- newlines::Bool
-end
-
-IndexLabels(; kwargs...) = IndexLabels(Backend(); kwargs...)
-IndexLabels(backend; kwargs...) = IndexLabels(Backend(backend); kwargs...)
-
-function IndexLabels(
- b::Backend;
- dims=default_dims(b),
- tags=default_tags(b),
- ids=default_ids(b),
- plevs=default_plevs(b),
- qns=default_qns(b),
- newlines=default_newlines(b),
-)
- return IndexLabels(dims, tags, ids, plevs, qns, newlines)
-end
-
-edge_labels(b::Backend, l::Vector{String}, g::AbstractGraph) = l
-
-function edge_labels(b::Backend, l::IndexLabels, g::AbstractGraph)
- return edge_labels(l, g)
-end
-
-function edge_labels(l::IndexLabels, g::AbstractGraph)
- return String[edge_label(l, g, e) for e in edges(g)]
-end
-
-function edge_labels(b::Backend, params::NamedTuple, g::AbstractGraph)
- return IndexLabels(b; params...)(g)
-end
-
-function edge_label(l::IndexLabels, g::AbstractMetaGraph, e)
- indsₑ = get_prop(g, e, :inds)
- return label_string(
- indsₑ;
- is_self_loop=is_self_loop(e),
- dims=l.dims,
- tags=l.tags,
- ids=l.ids,
- plevs=l.plevs,
- qns=l.qns,
- newlines=l.newlines,
- )
-end
-
-function _edge_label(l, g::AbstractGraph, e)
- return string(e)
-end
-
-edge_label(l::IndexLabels, g::AbstractGraph, e) = _edge_label(l, g, e)
-edge_label(l, g::AbstractGraph, e) = _edge_label(l, g, e)
-
-#function default_edge_labels(b::Backend, g; kwargs...)
-# return [edge_label(g, e; kwargs...) for e in edges(g)]
-#end
-
-plevstring(i::Index) = ITensors.primestring(plev(i))
-idstring(i::Index) = string(id(i) % 1000)
-tagsstring(i::Index) = string(tags(i))
-qnstring(i::Index) = ""
-function qnstring(i::QNIndex)
- str = "["
- for (n, qnblock) in pairs(space(i))
- str *= "$qnblock"
- if n ≠ lastindex(space(i))
- str *= ", "
- end
- end
- str *= "]"
- if dir(i) == ITensors.In
- str *= "†"
- end
- return str
-end
-
-function label_string(i::Index; dims, tags, plevs, ids, qns)
- showing_plev = plevs && (plev(i) > 0)
-
- str = ""
- if any((tags, showing_plev, ids, qns))
- str *= "("
- end
- if dims
- str *= string(dim(i))
- end
- if ids
- if dims
- str *= "|"
- end
- str *= idstring(i)
- end
- if tags
- if any((dims, ids))
- str *= "|"
- end
- str *= tagsstring(i)
- end
- if any((tags, showing_plev, ids, qns))
- str *= ")"
- end
- if plevs
- str *= plevstring(i)
- end
- if qns
- str *= qnstring(i)
- end
- return str
-end
-
-function label_string(is; is_self_loop=false, dims, tags, plevs, ids, qns, newlines)
- str = ""
- for n in eachindex(is)
- str *= label_string(is[n]; dims=dims, tags=tags, plevs=plevs, ids=ids, qns=qns)
- if n ≠ lastindex(is)
- if any((dims, tags, ids, qns))
- str *= "⊗"
- end
- if newlines && any((tags, ids, qns))
- str *= "\n"
- end
- end
- end
- return str
-end
-
-#############################################################################
-# edge width
-#
-
-function width(inds)
- return log2(dim(inds)) + 1
-end
-
-function default_edge_widths(b::Backend, g::AbstractMetaGraph)
- return Float64[width(get_prop(g, e, :inds)) for e in edges(g)]
-end
-
-function default_edge_widths(b::Backend, g::AbstractGraph)
- return fill(one(Float64), ne(g))
-end
-
-#############################################################################
-# arrow
-#
-
-default_arrow_size(b::Backend, g) = 30
-
-_hasqns(tn::Vector{ITensor}) = any(hasqns, tn)
-
-function _hasqns(g::AbstractMetaGraph)
- if iszero(ne(g))
- if has_prop(g, first(vertices(g)), :inds)
- return hasqns(get_prop(g, first(vertices(g)), :inds))
- else
- return hasqns(())
- end
- end
- return hasqns(get_prop(g, first(edges(g)), :inds))
-end
-
-_hasqns(g::AbstractGraph) = false
-
-default_arrow_show(b::Backend, g) = _hasqns(g)
-
-#############################################################################
-# self-loop/siteinds direction
-#
-
-default_siteinds_direction(b::Backend, g) = Point2(0, -1)
-
-#############################################################################
-# dimensions
-#
-
-_ndims(::Any) = 2
-_ndims(::NetworkLayout.AbstractLayout{N}) where {N} = N
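The `subscript` helper in the deleted `defaults.jl` builds the Unicode subscript label for a vertex (e.g. `T₁`, `T₂`) from the digits of an integer; `Char(0x2080 + n)` maps digit `n` to the subscript block starting at '₀' (U+2080). A self-contained sketch of the same logic:

```julia
# Digit-to-Unicode-subscript conversion, as defined in the deleted defaults.jl.
subscript_char(n::Integer) = Char(0x2080 + n)  # '₀' is U+2080

function subscript(n::Integer)
    # digits() yields least-significant first, so reverse before joining.
    ss = prod(Iterators.reverse((subscript_char(d) for d in digits(abs(n)))))
    return n < 0 ? "₋" * ss : ss
end

subscript(12)  # == "₁₂"
subscript(-3)  # == "₋₃"
```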
diff --git a/ITensorVisualizationBase/src/experimental/spanning_trees/mst.jl b/ITensorVisualizationBase/src/experimental/spanning_trees/mst.jl
deleted file mode 100644
index fa3e278c47..0000000000
--- a/ITensorVisualizationBase/src/experimental/spanning_trees/mst.jl
+++ /dev/null
@@ -1,71 +0,0 @@
-using Graphs, SimpleWeightedGraphs
-using GraphRecipes, Plots
-
-#spanning_tree_method = "mst"
-spanning_tree_method = "bfs"
-
-nx, ny = 7, 7
-n = nx * ny
-g = Graphs.grid((nx, ny); periodic=false)
-
-nx_middle = nx ÷ 2 + 1
-ny_middle = ny ÷ 2 + 1
-n_middle = LinearIndices((nx, ny))[nx_middle, ny_middle]
-
-vert_names = string.(vertices(g))
-dist = gdistances(g, n_middle)
-#names = string.(vert_names, ", ", dist)
-names = dist
-
-wg = SimpleWeightedGraph(nv(g))
-
-for e in edges(g)
- dw = mean([dist[src(e)], dist[dst(e)]])
- w = 1 / dw^2 + eps() * randn()
- add_edge!(wg, src(e), dst(e), w)
-end
-
-g_st = if spanning_tree_method == "mst"
- # mst_function = boruvka_mst
- mst_function = kruskal_mst
- mst_weight = mst_function(wg; minimize=false)
- mst = mst_function == boruvka_mst ? mst_weight.mst : mst_weight
- g_mst = SimpleWeightedGraph(nv(wg))
- for ew in mst
- add_edge!(g_mst, src(ew), dst(ew), weight(ew))
- end
- g_mst
-elseif spanning_tree_method == "bfs"
- # Weights are set to 1
- SimpleWeightedGraph(dfs_tree(wg, n_middle))
-end
-
-edgelabel_dict = Dict{Tuple{Int,Int},String}()
-for ew in edges(wg)
- edgelabel_dict[(src(ew), dst(ew))] = string(round(weight(ew); digits=2))
-end
-
-edgecolor_dict = Dict()
-for ew in edges(wg)
- color = ew ∈ edges(g_st) ? :black : :red
- edgecolor_dict[(src(ew), dst(ew))] = color
-end
-
-edgelabel_dict_mst = Dict()
-for i in vertices(g_st), j in vertices(g_st)
- edgelabel_dict_mst[(i, j)] = string(round(get_weight(g_st, i, j); digits=2))
-end
-
-plt = graphplot(
- wg;
- markersize=0.3,
- names=names,
- edgelabel=edgelabel_dict,
- curves=false,
- edgecolor=edgecolor_dict,
- linewidth=20,
- fontsize=20,
- size=(3000, 3000),
-)
-
-plt
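The deleted script above biases edge weights by inverse-square distance from the center and then calls `kruskal_mst`/`boruvka_mst` from Graphs.jl. For reference, the core of Kruskal's algorithm (sort edges by weight, grow a forest with union-find) can be sketched language-neutrally; the function and argument names here are illustrative, not part of any package:

```python
def kruskal_mst(n, weighted_edges, minimize=True):
    """Kruskal's algorithm with union-find; weighted_edges is a list of (u, v, w)."""
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    tree = []
    for u, v, w in sorted(weighted_edges, key=lambda e: e[2], reverse=not minimize):
        ru, rv = find(u), find(v)
        if ru != rv:  # edge joins two components, so it cannot create a cycle
            parent[ru] = rv
            tree.append((u, v, w))
    return tree
```

With `minimize=False`, as in the script's `mst_function(wg; minimize=false)`, the heaviest acyclic edges are kept, which favors edges near the weighted center.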
diff --git a/ITensorVisualizationBase/src/experimental/spanning_trees/shortest_path_tree.jl b/ITensorVisualizationBase/src/experimental/spanning_trees/shortest_path_tree.jl
deleted file mode 100644
index 0cfb3a7f86..0000000000
--- a/ITensorVisualizationBase/src/experimental/spanning_trees/shortest_path_tree.jl
+++ /dev/null
@@ -1,35 +0,0 @@
-using Graphs
-using Random
-
-Random.seed!(1234)
-
-function dijkstra_spt(g, v_src_initial)
- out = dijkstra_shortest_paths(g, [v_src_initial]; allpaths=true, trackvertices=true)
- paths = out.predecessors
- edges = Edge{Int}[]
- for v_dst_final in eachindex(paths)
- v_src = v_src_initial
- p = paths[v_dst_final]
- @show v_src_initial, p, v_dst_final
- for v_src in p
- push!(edges, Edge(v_src => v_dst_final))
- end
- end
- return edges
-end
-
-g = Graph(6, 10)
-
-@show collect(edges(g))
-
-t_mst = kruskal_mst(g)
-
-@show t_mst
-
-t_bfs = bfs_tree(g, 1)
-
-@show collect(edges(t_bfs))
-
-t_spt = dijkstra_spt(g, 1)
-
-@show t_spt
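The deleted `dijkstra_spt` extracts shortest-path-tree edges from the `predecessors` field of `dijkstra_shortest_paths`. The same idea, tracking a single predecessor per vertex, can be sketched with a plain binary-heap Dijkstra (names and the adjacency-dict encoding are assumptions for illustration):

```python
import heapq

def dijkstra_spt(adj, src):
    """adj: {u: [(v, w), ...]}. Returns (dist, parent); the parent map
    defines the shortest-path tree rooted at src."""
    dist = {src: 0}
    parent = {}
    pq = [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, w in adj[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                parent[v] = u  # tree edge parent[v] -> v
                heapq.heappush(pq, (nd, v))
    return dist, parent
```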
diff --git a/ITensorVisualizationBase/src/experimental/spanning_trees/spanning_tree.jl b/ITensorVisualizationBase/src/experimental/spanning_trees/spanning_tree.jl
deleted file mode 100644
index 1efc9716ce..0000000000
--- a/ITensorVisualizationBase/src/experimental/spanning_trees/spanning_tree.jl
+++ /dev/null
@@ -1,71 +0,0 @@
-using Graphs, SimpleWeightedGraphs
-using GraphRecipes, Plots
-using Statistics: mean
-
-#spanning_tree_method = "mst"
-spanning_tree_method = "bfs"
-
-nx, ny = 7, 7
-n = nx * ny
-g = Graphs.grid((nx, ny); periodic=false)
-
-nx_middle = 1 #nx ÷ 2 + 1
-ny_middle = 1 #ny ÷ 2 + 1
-n_middle = LinearIndices((nx, ny))[nx_middle, ny_middle]
-
-vert_names = string.(vertices(g))
-dist = gdistances(g, n_middle)
-#names = string.(vert_names, ", ", dist)
-names = dist
-
-wg = SimpleWeightedGraph(nv(g))
-
-for e in edges(g)
- dw = mean([dist[src(e)], dist[dst(e)]])
- w = 1 / dw^2 + eps() * randn()
- add_edge!(wg, src(e), dst(e), w)
-end
-
-g_st = if spanning_tree_method == "mst"
- # mst_function = boruvka_mst
- mst_function = kruskal_mst
- mst_weight = mst_function(wg; minimize=false)
- mst = mst_function == boruvka_mst ? mst_weight.mst : mst_weight
- g_mst = SimpleWeightedGraph(nv(wg))
- for ew in mst
- add_edge!(g_mst, src(ew), dst(ew), weight(ew))
- end
- g_mst
-elseif spanning_tree_method == "bfs"
- # Weights are set to 1
- SimpleWeightedGraph(bfs_tree(wg, n_middle))
-end
-
-edgelabel_dict = Dict{Tuple{Int,Int},String}()
-for ew in edges(wg)
- edgelabel_dict[(src(ew), dst(ew))] = string(round(weight(ew); digits=2))
-end
-
-edgecolor_dict = Dict()
-for ew in edges(wg)
- color = ew ∈ edges(g_st) ? :black : :red
- edgecolor_dict[(src(ew), dst(ew))] = color
-end
-
-edgelabel_dict_mst = Dict()
-for i in vertices(g_st), j in vertices(g_st)
- edgelabel_dict_mst[(i, j)] = string(round(get_weight(g_st, i, j); digits=2))
-end
-
-plt = graphplot(
- wg;
- markersize=0.3,
- names=names,
- edgelabel=edgelabel_dict,
- curves=false,
- edgecolor=edgecolor_dict,
- linewidth=20,
- fontsize=20,
- size=(3000, 3000),
-)
-
-plt
diff --git a/ITensorVisualizationBase/src/graphs.jl b/ITensorVisualizationBase/src/graphs.jl
deleted file mode 100644
index 84461d4d98..0000000000
--- a/ITensorVisualizationBase/src/graphs.jl
+++ /dev/null
@@ -1,11 +0,0 @@
-"""
- Grid
-
-Grid layout.
-"""
-struct Grid end
-
-(::Grid)(g) = Point.(5 .* (vertices(g) .- 1), 0)
-
-is_self_loop(e::AbstractEdge) = src(e) == dst(e)
-any_self_loops(g::AbstractGraph) = any(is_self_loop, edges(g))
diff --git a/ITensorVisualizationBase/src/itensor_graph.jl b/ITensorVisualizationBase/src/itensor_graph.jl
deleted file mode 100644
index 6a1df81424..0000000000
--- a/ITensorVisualizationBase/src/itensor_graph.jl
+++ /dev/null
@@ -1,113 +0,0 @@
-#
-# Conversion between Graphs and ITensor networks
-#
-
-hasuniqueinds(args...; kwargs...) = !isempty(uniqueinds(args...; kwargs...))
-
-function graph_dir(inds)
- dirs = dir.(inds)
- if length(dirs) == 1
- return only(dirs)
- end
- if all(==(dirs[1]), dirs)
- return dirs[1]
- end
- return ITensors.Out
-end
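`graph_dir` reduces the QN directions of a set of shared indices to a single edge direction: a lone index keeps its own direction, unanimous indices keep theirs, and mixed directions fall back to `ITensors.Out`. A minimal sketch with directions encoded as ±1 (that encoding is an assumption for illustration):

```python
OUT, IN = +1, -1  # illustrative stand-ins for ITensors.Out / ITensors.In

def graph_dir(dirs):
    """Collapse a nonempty list of index directions into one edge direction."""
    if len(dirs) == 1:
        return dirs[0]
    if all(d == dirs[0] for d in dirs):
        return dirs[0]
    return OUT  # mixed directions default to Out
```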
-
-# TODO: rename graph, dispatch on QNs to DiGraph
-function Graphs.SimpleDiGraph(tn::Vector{ITensor})
- nv = length(tn)
- g = SimpleDiGraph(nv)
- for v1 in 1:nv, v2 in (v1 + 1):nv
- indsᵛ¹ᵛ² = commoninds(tn[v1], tn[v2])
- if !isempty(commoninds(tn[v1], tn[v2]))
- e = v1 => v2
- if graph_dir(indsᵛ¹ᵛ²) == ITensors.In
- e = reverse(e)
- end
- add_edge!(g, e)
- end
- end
- for v in vertices(g)
- if hasuniqueinds(tn[v], tn[all_neighbors(g, v)]...)
- # Add a self-loop
- add_edge!(g, v => v)
- end
- end
- return g
-end
-
-# TODO: rename indsgraph, dispatch on QNs to DiGraph
-function MetaGraphs.MetaDiGraph(tn::Vector{ITensor})
- sg = SimpleDiGraph(tn)
- mg = MetaDiGraph(sg)
- for e in edges(mg)
- indsₑ = if is_self_loop(e)
- v = src(e)
- # For self edges, the vertex itself is included as
- # a neighbor so we must exclude it.
- uniqueinds(tn[v], tn[setdiff(all_neighbors(mg, v), v)]...)
- else
- commoninds(tn[src(e)], tn[dst(e)])
- end
- set_prop!(mg, e, :inds, indsₑ)
- end
- return mg
-end
-
-default_linkspaces() = 1
-default_sitespaces() = 1
-
-default(x, x_default) = x
-default(x::Nothing, x_default) = x_default
-
-function itensornetwork(
- g::AbstractGraph; linkspaces=default_linkspaces(), sitespaces=nothing
-)
- N = nv(g)
- if !isnothing(sitespaces) && !any_self_loops(g)
- g = copy(g)
- for v in vertices(g)
- add_edge!(g, v => v)
- end
- end
- sitespaces = default(sitespaces, default_sitespaces())
- # TODO: Specialize to Index{typeof(linkspaces)}
- inds_network = [Index[] for _ in 1:N]
- for e in edges(g)
- if !is_self_loop(e)
- lₑ = Index(linkspaces; tags="l=$(src(e))↔$(dst(e))")
- push!(inds_network[src(e)], lₑ)
- push!(inds_network[dst(e)], dag(lₑ))
- else
- sₑ = Index(sitespaces; tags="s=$(src(e))")
- push!(inds_network[src(e)], sₑ)
- end
- end
- tn = Vector{ITensor}(undef, N)
- for n in 1:N
- tn[n] = ITensor(inds_network[n])
- end
- return tn
-end
-
-sites(g::Tuple{String,<:Tuple}) = g[2]
-sites(g::Tuple{String,<:Tuple,<:NamedTuple}) = g[2]
-sites(g::Tuple{String,Int}) = g[2]
-sites(g::Tuple{String,Vararg{Int}}) = Base.tail(g)
-sites(g::Tuple{String,Int,<:NamedTuple}) = g[2]
-
-# Functionality for turning a list of gates into an ITensor
-# network.
-function circuit_network(gates, s::Vector{<:Index})
- s = copy(s)
- U = ITensor[]
- for g in gates
- push!(U, op(g, s))
- for n in sites(g)
- s[n] = s[n]'
- end
- end
- return U, s
-end
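`circuit_network` primes each site index after a gate acts on it, so the next gate on the same site attaches to the primed (output) index. The bookkeeping alone, without the ITensor construction, can be sketched as follows; the gate tuples and the `(site, prime_level)` encoding are illustrative assumptions:

```python
def circuit_network(gates, sites):
    """Track the prime level each site index accumulates as gates are applied.

    gates: list of (name, site_numbers); sites: iterable of site numbers.
    Returns (ops, levels): each op records the prime level of the site
    indices it acts on at the moment it is applied.
    """
    levels = {s: 0 for s in sites}
    ops = []
    for name, gate_sites in gates:
        ops.append((name, tuple((s, levels[s]) for s in gate_sites)))
        for s in gate_sites:
            levels[s] += 1  # the gate's output index is the primed input index
    return ops, levels
```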
diff --git a/ITensorVisualizationBase/src/layered_layout.jl b/ITensorVisualizationBase/src/layered_layout.jl
deleted file mode 100644
index 05bb134fee..0000000000
--- a/ITensorVisualizationBase/src/layered_layout.jl
+++ /dev/null
@@ -1,9 +0,0 @@
-# Use like this:
-#
-# using LayeredLayouts
-# layout(g) = layered_layout(solve_positions(Zarate(), g))
-#
-function layered_layout(pos)
- xs, ys, _ = pos
- return Point.(zip(xs, ys))
-end
diff --git a/ITensorVisualizationBase/src/visualize.jl b/ITensorVisualizationBase/src/visualize.jl
deleted file mode 100644
index 4d0838290b..0000000000
--- a/ITensorVisualizationBase/src/visualize.jl
+++ /dev/null
@@ -1,245 +0,0 @@
-#
-# Contraction sequence
-#
-
-# Tools for contracting a network with a sequence
-function _contract(label1::String, label2::String)
- return string("(", label1, "*", label2, ")")
-end
-
-function _contract(tensor1::ITensor, tensor2::ITensor)
- indsR = noncommoninds(tensor1, tensor2)
- return isempty(indsR) ? ITensor() : ITensor(indsR)
-end
-
-sequence_traversal(sequence) = reverse(collect(StatelessBFS(sequence)))
-
-function contract_dict(tensors, sequence, traversal=sequence_traversal(sequence))
- net_tensors = Dict()
- for net in traversal
- if net isa Int
- net_tensors[net] = tensors[net]
- else # net isa Vector
- net_tensors[net] = _contract(net_tensors[net[1]], net_tensors[net[2]])
- end
- end
- return net_tensors
-end
-
-# Return all of the contractions involved in the sequence.
-function contraction_sequence(
- tensors,
- sequence,
- traversal=sequence_traversal(sequence),
- contract_dict=contract_dict(tensors, sequence, traversal),
-)
- all_tensors = Any[]
- tensors_1 = Vector{Union{Nothing,eltype(tensors)}}(tensors)
- net_position = Dict()
- N = length(tensors)
- n = N + 1
- for net in traversal
- if net isa Int
- net_position[net] = net
- else
- net_position[net] = n
- n += 1
- end
- if !isa(net, Int)
- for n in net
- tensors_1[net_position[n]] = nothing
- end
- push!(tensors_1, contract_dict[net])
- push!(all_tensors, copy(tensors_1))
- end
- end
- return convert.(Vector{eltype(tensors)}, filter.(!isnothing, all_tensors))
-end
-
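`contract_dict` walks the sequence tree bottom-up and fills in a result for every node; for `String` labels, `_contract` simply parenthesizes the two operands. The same labeling can be computed top-down with a plain recursion (a sketch; the function name and dict-based labels are assumptions):

```python
def contraction_label(sequence, labels):
    """Render a nested contraction sequence as a parenthesized string,
    mirroring what `_contract` on String labels produces."""
    if isinstance(sequence, int):
        return labels[sequence]
    left, right = sequence
    return "(" + contraction_label(left, labels) + "*" + contraction_label(right, labels) + ")"
```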
-#
-# Convert a tree to a graph
-#
-
-struct Tree
- x::Any
-end
-function Base.getindex(tree::Tree, indices)
- node = tree.x
- for idx in indices
- node = children(node)[idx]
- end
- return node
-end
-
-tree_to_graph(tr) = tree_to_graph(Tree(tr))
-
-function tree_to_graph(tr::Tree)
- g = SimpleDiGraph()
- labels = Any[]
- walk_tree!(g, labels, tr)
- return (g, labels)
-end
-
-function walk_tree!(g, labels, tr::Tree)
- add_vertex!(g)
- top_vertex = vertices(g)[end]
- push!(labels, tr.x)
- for i in 1:length(tr.x)
- if isa(tr[i], Vector)
- child = walk_tree!(g, labels, Tree(tr[i]))
- add_edge!(g, child, top_vertex)
- else
- add_vertex!(g)
- n = vertices(g)[end]
- add_edge!(g, n, top_vertex)
- push!(labels, tr[i])
- end
- end
- return top_vertex
-end
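`walk_tree!` numbers vertices in discovery order and adds an edge from every child to its parent, collecting the node labels along the way. A sketch of the same traversal using an edge list instead of a `SimpleDiGraph` (vertex numbering starts at 0 here; names are illustrative):

```python
def tree_to_graph(tree):
    """Convert a nested-list tree into (edges, labels); edges point child -> parent."""
    edges, labels = [], []

    def walk(node):
        v = len(labels)       # vertex number in discovery order
        labels.append(node)
        for child in node:
            if isinstance(child, list):
                c = walk(child)
            else:
                c = len(labels)  # leaf gets its own vertex
                labels.append(child)
            edges.append((c, v))
        return v

    walk(tree)
    return edges, labels
```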
-
-# Visualization function interface. Ultimately calls a backend.
-
-function visualize(g::AbstractGraph, sequence=nothing; backend=get_backend(), kwargs...)
- # TODO: do something with the sequence (show sequence, add labels indicating sequence, etc.)
- return visualize(Backend(backend), g; kwargs...)
-end
-
-function visualize(tn::Vector{ITensor}, sequence=nothing; kwargs...)
- return visualize(MetaDiGraph(tn), sequence; kwargs...)
-end
-
-function visualize(tn::Tuple{Vector{ITensor}}, args...; kwargs...)
- return visualize(only(tn), args...; kwargs...)
-end
-visualize(ψ::MPS, args...; kwargs...) = visualize(data(ψ), args...; kwargs...)
-function visualize(tn::Tuple{ITensor,Vararg{ITensor}}, args...; kwargs...)
- return visualize(collect(tn), args...; kwargs...)
-end
-function visualize(t1::ITensor, tn_tail::ITensor...; kwargs...)
- return visualize([t1, tn_tail...]; kwargs...)
-end
-
-# Special case single ITensor
-function visualize(t::ITensor, sequence=nothing; vertex_labels_prefix, kwargs...)
- tn = [t]
- vertex_labels = [vertex_labels_prefix]
- return visualize(MetaDiGraph(tn), sequence; vertex_labels=vertex_labels, kwargs...)
-end
-
-# Special case single ITensor
-function visualize(tn::Tuple{ITensor}, args...; kwargs...)
- return visualize(only(tn), args...; kwargs...)
-end
-
-function visualize!(fig, g::AbstractGraph; backend=get_backend(), kwargs...)
- return visualize!(Backend(backend), fig, g; kwargs...)
-end
-
-function visualize!(fig, tn::Vector{ITensor}, sequence=nothing; kwargs...)
- return visualize!(fig, MetaDiGraph(tn); kwargs...)
-end
-visualize!(fig, ψ::MPS, sequence=nothing; kwargs...) = visualize!(fig, data(ψ); kwargs...)
-function visualize!(fig, tn::Tuple{Vararg{ITensor}}, sequence=nothing; kwargs...)
- return visualize!(fig, collect(tn); kwargs...)
-end
-visualize!(fig, tn::ITensor...; kwargs...) = visualize!(fig, collect(tn); kwargs...)
-
-function visualize!(fig, tn::Tuple{Vector{ITensor}}, sequence=nothing; kwargs...)
- return visualize!(fig, tn[1], sequence; kwargs...)
-end
-function visualize!(
- fig, f::Function, tn::Tuple{Vararg{ITensor}}, sequence=nothing; kwargs...
-)
- return visualize!(fig, tn, sequence; kwargs...)
-end
-
-# Macro outputs a 1-tuple of the function arguments
-function visualize(f::Union{Function,Type}, tn::Tuple{T}, sequence; kwargs...) where {T}
- # TODO: specialize on the function type. Also accept a general collection.
- return visualize(only(tn), sequence; kwargs...)
-end
-
-# Macro outputs a tuple of ITensors to visualize
-function visualize(f::Union{Function,Type}, tn::Tuple{Vararg{ITensor}}, sequence; kwargs...)
- # TODO: specialize on the function type. Also accept a general collection.
- return visualize(tn, sequence; kwargs...)
-end
-
-function visualize!(fig, f::Union{Function,Type}, As...; kwargs...)
- # TODO: specialize on the function type. Also accept a general collection.
- return visualize!(fig, As...; kwargs...)
-end
-
-function _visualize_sequence!(fig, tn, sequence, n; kwargs...)
- return error("Not implemented")
-end
-
-function sequence_labels(sequence, all_sequences, vertex_labels)
- traversal = sequence_traversal(sequence)
- labels_dict = contract_dict(vertex_labels, sequence, traversal)
- all_labels = [labels_dict[s] for s in all_sequences]
- return all_labels
-end
-
-function _graphplot(backend::Backend, graph; all_labels)
- return error("Not implemented for backend $backend.")
-end
-
-function visualize_sequence(sequence, vertex_labels)
- graph, all_sequences = tree_to_graph(sequence)
- all_labels = sequence_labels(sequence, all_sequences, vertex_labels)
- fig = _graphplot(Backend"Makie"(), graph; all_labels=all_labels)
- return fig
-end
-
-function default_sequence(tn::Vector{ITensor})
- N = length(tn)
- return foldl((x, y) -> [x, y], 1:N)
-end
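`default_sequence` left-folds the tensor positions into nested pairs, i.e. contract the tensors one at a time in order. An equivalent sketch (function name assumed):

```python
from functools import reduce

def default_sequence(n):
    """Left-nested pairwise contraction order, mirroring foldl((x, y) -> [x, y], 1:N)."""
    return reduce(lambda x, y: [x, y], range(1, n + 1))
```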
-
-function visualize_sequence(
- f::Union{Function,Type}, tn::Vector{ITensor}, sequence::Nothing; kwargs...
-)
- return visualize_sequence(f, tn, default_sequence(tn); kwargs...)
-end
-
-function visualize_sequence(
- f::Union{Function,Type}, tn::Vector{ITensor}, sequence=default_sequence(tn); kwargs...
-)
- N = length(tn)
-
- # TODO: clean this up a bit
- vertex_labels_prefix = get(
- kwargs,
- :vertex_labels_prefix,
- default_vertex_labels_prefix(Backend("Makie"), MetaDiGraph(tn)),
- )
- vertex_labels = get(
- kwargs,
- :vertex_labels,
- default_vertex_labels(Backend(""), MetaDiGraph(tn), vertex_labels_prefix),
- )
-
- fig = visualize_sequence(sequence, vertex_labels)
-
- visualize!(fig[1, 2], tn; vertex_labels=vertex_labels, kwargs...)
-
- traversal = sequence_traversal(sequence)
- labels_sequence = contraction_sequence(vertex_labels, sequence, traversal)
-
- tn_sequence = contraction_sequence(tn, sequence, traversal)
-
- for n in 1:length(tn_sequence)
- visualize!(fig[1, n + 2], tn_sequence[n]; vertex_labels=labels_sequence[n], kwargs...)
- end
-
- return fig
-end
-
-function visualize_sequence(
- f::Union{Function,Type}, tn::Tuple{Vector{ITensor}}, sequence; kwargs...
-)
- return visualize_sequence(f, tn[1], sequence; kwargs...)
-end
diff --git a/ITensorVisualizationBase/test/Project.toml b/ITensorVisualizationBase/test/Project.toml
deleted file mode 100644
index e8f5256f3b..0000000000
--- a/ITensorVisualizationBase/test/Project.toml
+++ /dev/null
@@ -1,7 +0,0 @@
-[deps]
-GeometryBasics = "5c1252a2-5f33-56bf-86c9-59e7332b4326"
-Graphs = "86223c79-3864-5bf0-83f7-82e725a168b6"
-ITensors = "9136182c-28ba-11e9-034c-db9fb085ebd5"
-LayeredLayouts = "f4a74d36-062a-4d48-97cd-1356bad1de4e"
-NetworkLayout = "46757867-2c16-5918-afeb-47bfcb05e46a"
-Test = "8dfed614-e22c-5e08-85e1-65c5234f0b40"
diff --git a/ITensorVisualizationBase/test/runtests.jl b/ITensorVisualizationBase/test/runtests.jl
deleted file mode 100644
index 715d7ac85c..0000000000
--- a/ITensorVisualizationBase/test/runtests.jl
+++ /dev/null
@@ -1,14 +0,0 @@
-using ITensors
-using ITensorVisualizationBase
-using Test
-
-starts_and_ends_with(file, st, en) = startswith(file, st) && endswith(file, en)
-starts_and_ends_with(st, en) = file -> starts_and_ends_with(file, st, en)
-
-test_path = joinpath(@__DIR__)
-test_files = filter(starts_and_ends_with("test_", ".jl"), readdir(test_path))
-@testset "ITensorVisualizationBase.jl" for file in test_files
- file_path = joinpath(test_path, file)
- println("Running test $(file_path)")
- include(file_path)
-end
diff --git a/ITensorVisualizationBase/test/test_basics.jl b/ITensorVisualizationBase/test/test_basics.jl
deleted file mode 100644
index 7530dda710..0000000000
--- a/ITensorVisualizationBase/test/test_basics.jl
+++ /dev/null
@@ -1,51 +0,0 @@
-using ITensors
-using ITensorVisualizationBase
-using Test
-
-@testset "Basic tests without any backend" begin
- N = 10
- s(n) = Index([QN("Sz", 0) => 1, QN("Sz", 1) => 1]; tags="S=1/2,Site,n=$n")
- l(n) = Index([QN("Sz", 0) => 10, QN("Sz", 1) => 10]; tags="Link,l=$n")
- h(n) = Index([QN("Sz", 0) => 5, QN("Sz", 1) => 5]; tags="ham,Link,l=$n")
- s⃗ = [s(n) for n in 1:N]
- l⃗ = [l(n) for n in 1:(N - 1)]
- h⃗ = [h(n) for n in 1:(N - 1)]
-
- # Add some more indices between two of the tensors
- x = Index([QN("Sz", 0) => 2]; tags="X")
- y = Index([QN("Sz", 0) => 2]; tags="Y")
-
- n = 2
- ψn1n2 = randomITensor(l⃗[n - 1], s⃗[n], s⃗[n + 1], l⃗[n + 1], dag(x), dag(y))
- hn1 = randomITensor(dag(h⃗[n - 1]), s⃗[n]', dag(s⃗[n]), h⃗[n], x, y)
- hn2 = randomITensor(dag(h⃗[n]), s⃗[n + 1]', dag(s⃗[n + 1]), h⃗[n + 1])
- ELn0 = randomITensor(l⃗[n - 1]', h⃗[n - 1], dag(l⃗[n - 1]))
- ERn2 = randomITensor(l⃗[n + 1]', dag(h⃗[n + 1]), dag(l⃗[n + 1]))
-
- tn = [ELn0, ψn1n2, hn1, hn2, ERn2]
-
- R = @visualize ELn0 * ψn1n2 * hn1 * hn2 * ERn2
- R1 = @visualize ELn0 * ψn1n2 * hn1
- R2 = @visualize R1 * hn2 * ERn2 vertex_labels = ["T1", "T2", "T3"]
- tn2 = @visualize tn
- T = @visualize ELn0
-
- @test R ≈ ELn0 * ψn1n2 * hn1 * hn2 * ERn2
- @test R1 ≈ ELn0 * ψn1n2 * hn1
- @test R2 ≈ ELn0 * ψn1n2 * hn1 * hn2 * ERn2
- @test all(tn .== tn2)
- @test T == ELn0
-
- R = @visualize figR ELn0 * ψn1n2 * hn1 * hn2 * ERn2
- R1 = @visualize figR1 ELn0 * ψn1n2 * hn1
- R2 = @visualize figR2 R1 * hn2 * ERn2 vertex_labels = ["T1", "T2", "T3"]
- T = @visualize figT T
-
- fig_tn = @visualize_noeval tn
-
- @test isnothing(figR)
- @test isnothing(figR1)
- @test isnothing(figR2)
- @test isnothing(fig_tn)
- @test isnothing(figT)
-end
diff --git a/ITensorVisualizationBase/test/test_examples.jl b/ITensorVisualizationBase/test/test_examples.jl
deleted file mode 100644
index a406959f02..0000000000
--- a/ITensorVisualizationBase/test/test_examples.jl
+++ /dev/null
@@ -1,15 +0,0 @@
-using Test
-
-@testset "Examples" begin
- examples_path = joinpath(@__DIR__, "..", "examples")
- example_files = filter(starts_and_ends_with("ex_", ".jl"), readdir(examples_path))
- for file in example_files
- file_path = joinpath(examples_path, file)
- println("Testing file $(file_path)")
- empty!(ARGS)
- push!(ARGS, "false")
- res = include(file_path)
- @test isnothing(res) || all(isnothing, res)
- empty!(ARGS)
- end
-end
diff --git a/NDTensors/src/lib/CUDAExtensions/src/cuda.jl b/NDTensors/src/lib/CUDAExtensions/src/cuda.jl
index 44b8007672..7e94917a21 100644
--- a/NDTensors/src/lib/CUDAExtensions/src/cuda.jl
+++ b/NDTensors/src/lib/CUDAExtensions/src/cuda.jl
@@ -1,6 +1,6 @@
using NDTensors.TypeParameterAccessors: TypeParameterAccessors, Position
using NDTensors.GPUArraysCoreExtensions: storagemode
-# Implemented in `ITensorGPU` and NDTensorsCUDAExt
+# Implemented in NDTensorsCUDAExt
function cu end
## Here we need a CuArrayAdaptor because the CuArrayAdaptor provided by CUDA
diff --git a/NDTensors/src/lib/MetalExtensions/src/metal.jl b/NDTensors/src/lib/MetalExtensions/src/metal.jl
index 1e2855c7c3..ab15cd3499 100644
--- a/NDTensors/src/lib/MetalExtensions/src/metal.jl
+++ b/NDTensors/src/lib/MetalExtensions/src/metal.jl
@@ -1,12 +1,12 @@
using NDTensors.TypeParameterAccessors: TypeParameterAccessors, Position
using NDTensors.GPUArraysCoreExtensions: storagemode
-# Implemented in `ITensorGPU` and NDTensorsMetalExt
+# Implemented in NDTensorsMetalExt
function mtl end
## Here we need an MtlArrayAdaptor because the MtlArrayAdaptor provided by Metal
## converts 64 bit numbers to 32 bit. We cannot write `adapt(MtlArray, x)` because this
## Will not allow us to properly utilize the buffer preference without changing the value of
-## default_buffertype. Also `adapt(MtlArray{<:Any, <:Any, Buffertype})` fails to work properly
+## default_buffertype. Also `adapt(MtlArray{<:Any, <:Any, Buffertype})` fails to work properly
struct MtlArrayAdaptor{B} end
diff --git a/Project.toml b/Project.toml
index 1766c6bd56..d8378678ae 100644
--- a/Project.toml
+++ b/Project.toml
@@ -1,7 +1,7 @@
name = "ITensors"
uuid = "9136182c-28ba-11e9-034c-db9fb085ebd5"
authors = ["Matthew Fishman ", "Miles Stoudenmire "]
-version = "0.5.4"
+version = "0.5.8"
[deps]
Adapt = "79e6a3ab-5dfb-504d-930d-738a2a938a0e"
diff --git a/docs/src/RunningOnGPUs.md b/docs/src/RunningOnGPUs.md
index a527f2d655..e86f5af708 100644
--- a/docs/src/RunningOnGPUs.md
+++ b/docs/src/RunningOnGPUs.md
@@ -37,7 +37,7 @@ Bmtl = mtl(B)
Amtl * Bmtl
```
-Note that we highly recommend using these new package extensions as opposed to [ITensorGPU.jl](https://github.com/ITensor/ITensors.jl/tree/main/ITensorGPU), which is ITensor's previous CUDA backend. The package extensions are better integrated into the main library so are more reliable and better supported right now. We plan to deprecate `ITensorGPU.jl` in the future.
+Note that we highly recommend using these new package extensions as opposed to [ITensorGPU.jl](https://github.com/ITensor/ITensorGPU.jl), which is ITensor's previous CUDA backend. The package extensions are better integrated into the main library so are more reliable and better supported right now. We plan to deprecate `ITensorGPU.jl` in the future.
## GPU backends
@@ -45,7 +45,7 @@ ITensor currently provides
package extensions for the following GPU backends:
* [CUDA.jl](https://github.com/JuliaGPU/CUDA.jl) (NVIDIA GPUs)
-* [cuTENSOR.jl] (https://github.com/JuliaGPU/CUDA.jl/tree/master/lib/cutensor) (`CUDA.jl` extension providing accelerated binary tensor contractions)
+* [cuTENSOR.jl](https://github.com/JuliaGPU/CUDA.jl/tree/master/lib/cutensor) (`CUDA.jl` extension providing accelerated binary tensor contractions)
* [Metal.jl](https://github.com/JuliaGPU/Metal.jl) (Apple GPUs)
* [AMDGPU.jl](https://github.com/JuliaGPU/AMDGPU.jl) (AMD GPUs)