[Enhancement] Add quantum number support in TTNO constructor. #116

Merged: 34 commits, Jan 11, 2024

Changes from 1 commit

Commits (34)
5ab9951
Add blocksparse Hamiltonian constructor for trees from OpSum, qn_svdTTN.
b-kloss Jan 8, 2024
e16f440
Fix bug in assert for onsite terms. Clean up comments.
b-kloss Jan 8, 2024
c62ed5c
Use on-site terms in test.
b-kloss Jan 8, 2024
a9e84a1
Format.
b-kloss Jan 8, 2024
ada435b
Remove debugging output from test.
b-kloss Jan 8, 2024
eb556ed
Add function barrier, similar to (qn_)svdMPO in ITensors.
b-kloss Jan 8, 2024
ee8a6c1
Process both QN and non-QN OpSum-to-TTN conversion via qn_svdTTN.
b-kloss Jan 8, 2024
7f3de41
Minor cosmetic edits.
b-kloss Jan 8, 2024
600364f
Accept suggested change for conversion to qnless ITensor.
b-kloss Jan 8, 2024
a5934c3
Rename ValType to coefficient_type.
b-kloss Jan 9, 2024
684f14d
Remove typing of IndsNetwork in function signatures.
b-kloss Jan 9, 2024
839de61
Remove parametric type C (OpSum{C}) from function signatures and bodies.
b-kloss Jan 9, 2024
8055abe
Enumerate kwargs in function signature instead of getting them in fun…
b-kloss Jan 9, 2024
79a2e26
Renaming and cosmetic edits.
b-kloss Jan 9, 2024
027c33b
Remove parametric type VT from signature. Changed OpCache key signature.
b-kloss Jan 9, 2024
28037f9
Generalize kwarg defaults.
b-kloss Jan 9, 2024
a11a4e0
Format.
b-kloss Jan 9, 2024
bc87d90
Use vertextype(sites) in op_cache key signature.
b-kloss Jan 9, 2024
07051d8
Fix kwarg defaults.
b-kloss Jan 9, 2024
b0cd3f6
Actually remove parametric type VT from function signatures.
b-kloss Jan 9, 2024
ab4af0e
Remove finite state machine code due to lack of support for QNs.
b-kloss Jan 9, 2024
fe75dac
Remove deprecated code from opsum_to_ttn.jl. Also adapt TTN construct…
b-kloss Jan 9, 2024
764f4da
Restore old version as deprecated_opsum_to_ttn.jl
b-kloss Jan 9, 2024
8ae0c5b
Cosmetic edits, and remove unused functions.
b-kloss Jan 9, 2024
3033544
Fix cutoff kwarg, remove commented out function.
b-kloss Jan 10, 2024
64e1c1b
Format and remove outdated comments.
b-kloss Jan 10, 2024
32b483d
Format.
b-kloss Jan 10, 2024
39c8c98
Add compatibility with TTNs with internal vertices that have no sitei…
b-kloss Jan 10, 2024
0a78604
Remove unnecessary comments.
b-kloss Jan 10, 2024
0235401
Merge branch 'main' into AutoTTNO
b-kloss Jan 10, 2024
a0e9fb4
Remove output from tests.
b-kloss Jan 10, 2024
6c3c0c9
Merge branch 'main' into AutoTTNO
b-kloss Jan 10, 2024
149372f
Use ITensorNetworks.model(...) instead of model(...) in test.
b-kloss Jan 10, 2024
66d9103
Remove @test_broken related to lack of QN implementation for TTN cons…
b-kloss Jan 11, 2024
47 changes: 37 additions & 10 deletions src/treetensornetworks/opsum_to_ttn.jl
@@ -248,11 +248,23 @@ function qn_svdTTN(
root_vertex::VT;
kwargs...,
)::TTN where {C,VT}

mindim::Int = get(kwargs, :mindim, 1)
maxdim::Int = get(kwargs, :maxdim, 10000)
cutoff::Float64 = get(kwargs, :cutoff, 1e-15)
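The three `get(kwargs, ...)` lines above pull truncation defaults out of a kwargs catch-all; a later commit in this PR (8055abe, "Enumerate kwargs in function signature instead…") moves them into the signature. A minimal stand-alone sketch of the two styles (the function names here are hypothetical, for illustration only):

```julia
# Style used above: defaults fetched from a kwargs catch-all.
cutoff_via_get(; kwargs...) = get(kwargs, :cutoff, 1e-15)

# Style adopted later in the PR: defaults declared in the signature.
cutoff_via_signature(; cutoff::Float64=1e-15) = cutoff
```

The signature form documents the accepted keywords at the definition site and lets Julia type-check them at the call boundary, instead of silently ignoring misspelled keys.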

#ValType = ITensors.determineValType(ITensors.terms(os)) #now included as argument in function signature

# check for qns on the site indices
#FIXME: this check for whether or not any of the siteindices has QNs is somewhat ugly
#FIXME: how are sites handled where some sites have QNs and some don't?

thishasqns = false
for site in vertices(sites)
if hasqns(sites[site])
thishasqns = true
break
end
end
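The loop above is an early-exit scan, which can equivalently be written with `any` (it short-circuits the same way). A self-contained sketch using a plain `Dict` as a stand-in for the `IndsNetwork`, with `hasqns_mock` as a hypothetical stand-in for `ITensors.hasqns`:

```julia
# Mock site network: vertex => site indices; an empty list plays the
# role of an index without QN structure.
sites_mock = Dict("v1" => Int[], "v2" => [1, 0], "v3" => Int[])
hasqns_mock(inds) = !isempty(inds)  # pretend: "does this index carry QNs?"

# Equivalent to the loop-with-break above: stops at the first QN-carrying site.
thishasqns = any(v -> hasqns_mock(sites_mock[v]), keys(sites_mock))
```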

# traverse tree outwards from root vertex
vs = reverse(post_order_dfs_vertices(sites, root_vertex)) # store vertices in fixed ordering relative to root
@@ -276,7 +288,9 @@
)
op_cache[ITensors.which_op(st) => ITensors.site(st)] = op_tensor
end
q -= flux(op_tensor)
if !isnothing(flux(op_tensor))
q -= flux(op_tensor)
end
end
return q
end
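The `isnothing` guard added in this hunk matters because `flux` returns `nothing` for a tensor without QN structure, and `q -= nothing` throws a `MethodError`. A stand-alone sketch of the nothing-safe accumulation, with `flux_mock` as a hypothetical stand-in for `ITensors.flux`:

```julia
flux_mock(op) = op.qn  # may return `nothing` for dense (QN-less) ops

function accumulated_flux(ops)
    q = 0
    for op in ops
        f = flux_mock(op)
        isnothing(f) || (q -= f)  # skip dense ops instead of erroring
    end
    return q
end
```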
@@ -511,7 +525,9 @@ function qn_svdTTN(
for ((b, q_op), m) in blocks
###FIXME: make this work if there's no physical degree of freedom at site
Op = computeSiteProd(sites, Prod(q_op)) ###FIXME: is this implemented?
(nnzblocks(Op) == 0) && continue ###FIXME: this one may be breaking for no physical indices on site
if hasqns(Op) ###FIXME: this may not be safe, we may want to check for the equivalent (zero tensor?) case in the dense case as well
(nnzblocks(Op) == 0) && continue ###FIXME: this one may be breaking for no physical indices on site
end
sq = flux(Op)
if !isnothing(sq)
if ITensors.using_auto_fermion()
@@ -532,7 +548,12 @@
end
T = ITensors.BlockSparseTensor(ValType, [b], _linkinds)
T[b] .= m
H[v] += (itensor(T) * Op)
if !thishasqns
iT = removeqns(itensor(T))
else
iT = itensor(T)
end
H[v] += (iT * Op)
end

linkdims = dim.(linkinds)
@@ -554,6 +575,9 @@
end
end
T = itensor(idT, _linkinds)
if !thishasqns
T = removeqns(T)
end
H[v] += T * ITensorNetworks.computeSiteProd(sites, Prod([(Op("Id", v))]))
end

@@ -679,7 +703,9 @@ function TTN(
os = deepcopy(os)
os = sorteachterm(os, sites, root_vertex)
os = ITensors.sortmergeterms(os) # not exported


T = qn_svdTTN(os, sites, root_vertex; kwargs...)
#=
if hasqns(first(first(vertex_data(sites))))
###added feature
T = qn_svdTTN(os, sites, root_vertex; kwargs...)
@@ -689,11 +715,12 @@
end
return T
end
T = svdTTN(os, sites, root_vertex; kwargs...)
if splitblocks
error("splitblocks not yet implemented for AbstractTreeTensorNetwork.")
T = ITensors.splitblocks(linkinds, T) # TODO: make this work
end
=#
#T = svdTTN(os, sites, root_vertex; kwargs...)
#if splitblocks
# error("splitblocks not yet implemented for AbstractTreeTensorNetwork.")
# T = ITensors.splitblocks(linkinds, T) # TODO: make this work
#end
return T
end
