Deborah.RebekahMiriam.SummaryLoaderRebekahMiriam
Deborah.RebekahMiriam.SummaryLoaderRebekahMiriam.derive_kappa_list — Method

derive_kappa_list(
ensembles::Vector{String},
multi_ensemble::String
) -> Vector{String}

Derive a vector of $\kappa$ tokens (e.g., ["13570", "13575", ...]) from full ensemble names by stripping the common prefix multi_ensemble and a leading 'k'.
Example
multi_ensemble = "L8T4b1.60"
ensembles = ["L8T4b1.60k13570", "L8T4b1.60k13575"]
# => ["13570", "13575"]
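The stripping described above can be sketched as follows (the helper name derive_kappa_sketch is hypothetical; the real derive_kappa_list may handle edge cases differently):

```julia
# Hypothetical re-implementation of the prefix/'k' stripping described above.
# chopprefix requires Julia >= 1.8.
function derive_kappa_sketch(ensembles::Vector{String}, multi_ensemble::String)
    prefix = multi_ensemble * "k"          # common prefix plus the leading 'k'
    return [String(chopprefix(e, prefix)) for e in ensembles]
end

derive_kappa_sketch(["L8T4b1.60k13570", "L8T4b1.60k13575"], "L8T4b1.60")
# => ["13570", "13575"]
```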
Deborah.RebekahMiriam.SummaryLoaderRebekahMiriam.extract_all_observables_from_line — Function

extract_all_observables_from_line(
lines::Vector{String},
keyword::String,
jobid::Union{Nothing, String}=nothing
) -> Dict{Symbol, Float64}

Parse a single summary line ending with the specified keyword and extract the observable values.
The line must contain 12 numeric fields followed by a keyword, corresponding to the format:

kappa_t_avg kappa_t_err cond_avg cond_err ... bind_err keyword
Arguments
lines: Vector of strings (typically from readlines(filename)).
keyword: Target keyword to match at the end of a line.
jobid::Union{Nothing, String}: Optional job ID for contextual logging.
Returns
A dictionary mapping symbols like :cond_avg, :cond_err, etc., to their parsed Float64 values.
Errors
Throws an error if no matching line is found.
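The matching and parsing can be sketched as below. parse_summary_line is a hypothetical stand-in, and the middle observable names (susp, skew, kurt) are assumptions inferred from the row format documented for load_rw_data further down:

```julia
# Sketch of the matching/parsing described above; field names are assumed
# from the documented 12-field format (kappa_t, cond, susp, skew, kurt, bind).
function parse_summary_line(lines::Vector{String}, keyword::String)
    names = (:kappa_t_avg, :kappa_t_err, :cond_avg, :cond_err,
             :susp_avg, :susp_err, :skew_avg, :skew_err,
             :kurt_avg, :kurt_err, :bind_avg, :bind_err)
    for line in lines
        toks = split(line)
        length(toks) == 13 && toks[end] == keyword || continue
        return Dict(n => parse(Float64, t) for (n, t) in zip(names, toks[1:12]))
    end
    error("no summary line ending with keyword '$keyword'")
end
```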
Deborah.RebekahMiriam.SummaryLoaderRebekahMiriam.load_all_nlsolve_status — Method

load_all_nlsolve_status(
labels::Vector{String},
trains::Vector{String},
path_template::Function;
solver_prefix::AbstractString = "nlsolve_f_solver_",
on_missing::Symbol = :warn
) -> Dict{String, Dict{String, Dict{String, NamedTuple{(:converged, :residual_norm, :iterations), Tuple{Bool, Float64, Int}}}}}

Load NLsolve.jl convergence info for every (label, train) combination.
path_template must be a function (label::String, train::String) -> filepath::String that returns the infos TOML path.
Returns a nested dictionary: label => train => solver_name => (converged, residual_norm, iterations)
Keywords:
solver_prefix: Only collect tables under NLsolve.jl whose names start with this prefix.
on_missing: What to do if a file is missing or unreadable.
  :skip  → silently skip
  :warn  → print a warning and skip (default)
  :error → rethrow the error
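A hypothetical path_template closure is shown below; the directory names are placeholders, not the project's actual layout:

```julia
# Placeholder layout; adapt to the actual run directory scheme.
path_template = (label, train) ->
    "runs/analysis/infos_Miriam_LBP_$(label)_TRP_$(train).toml"

path_template("10", "100")
# => "runs/analysis/infos_Miriam_LBP_10_TRP_100.toml"
```

Such a closure would then be passed as the third positional argument, e.g. load_all_nlsolve_status(labels, trains, path_template; on_missing = :skip).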
Deborah.RebekahMiriam.SummaryLoaderRebekahMiriam.load_all_rw_data — Method

load_all_rw_data(
labels::Vector{String},
trains::Vector{String},
tags::Vector{Symbol},
path_template::Function
) -> Dict{String, Dict{String, Dict{Symbol, Dict{Symbol, Vector{Float64}}}}}

Load all raw reweighting data for every (label, train, tag) combination.
This function uses a path_template(label, train, tag) to construct file paths, and delegates actual parsing to load_rw_data.
path_template
In Deborah.MiriamDocument.MiriamDocumentRunner.run_MiriamDocument, the path_template is usually defined as:
path_template = (label, train, tag) ->
"$(location)/$(analysis_ensemble)/$(cumulant_name)/" *
"$(overall_name)_LBP_$(label)_TRP_$(train)/" *
"$(String(tag))_$(overall_name)_LBP_$(label)_TRP_$(train).dat"This closure captures location, analysis_ensemble, cumulant_name, and overall_name from the surrounding scope, and generates a full path for each (label, train, tag) combination. For example:
path_template("Plaq", "Rect", :T_BS)
# → /.../<analysis_ensemble>/<cumulant_name>/<overall_name>_LBP_Plaq_TRP_Rect/T_BS_<overall_name>_LBP_Plaq_TRP_Rect.dat

Arguments
labels: List of LBP labels (e.g., ["10", "20", ...]).
trains: List of TRP percentages (e.g., ["0", "100"]).
tags: List of tag symbols indicating file types (e.g., :Y_BS, :RWP2).
path_template: A function (label, train, tag) → filepath::String.
Returns
Nested dictionary: label => train => tag => observable => vector of Float64
Deborah.RebekahMiriam.SummaryLoaderRebekahMiriam.load_miriam_summary — Function

load_miriam_summary(
work::String,
analysis_ensemble::String,
cumulant_name::String,
overall_name::String,
labels::Vector{String},
trains::Vector{String},
keywords::Vector{String},
filetags::Vector{Symbol},
fields::Vector{Symbol},
jobid::Union{Nothing, String}=nothing
) -> Dict{Tuple{Symbol, Symbol, Symbol, String}, Array{Float64,2}}

Load all observable summaries from .dat files for each (label, train, filetag, keyword) combination.
Each file is expected to contain labeled lines that end with a keyword (e.g., "susp", "skew"), and each such line encodes 12 observable values.
Arguments
work: Base working directory.
analysis_ensemble: Name of the ensemble (e.g., "L8T4b1.60").
cumulant_name: Name of the observable bundle.
overall_name: Global identifier for filenames.
labels: List of labeled set percentages.
trains: List of training set percentages.
keywords: Keywords indicating the interpolation criterion (e.g., "susp").
filetags: Tags for different prediction sources (e.g., :RWBS, :RWP1).
fields: Observable names (e.g., :cond, :skew, etc.).
jobid::Union{Nothing, String}: Optional job ID for contextual logging.
Returns
A nested dictionary indexed by (field, :avg|:err, filetag, keyword) mapping to a matrix of shape (length(labels), length(trains)).
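The key and matrix shape can be illustrated with dummy data (the zero matrices and key values below are fabricated for illustration only):

```julia
# Illustrative shape of the returned dictionary; zero matrices stand in for
# real (length(labels), length(trains)) result grids.
labels, trains = ["10", "20"], ["0", "50", "100"]
summary = Dict{Tuple{Symbol,Symbol,Symbol,String},Matrix{Float64}}(
    (:cond, :avg, :RWBS, "susp") => zeros(length(labels), length(trains)),
    (:cond, :err, :RWBS, "susp") => zeros(length(labels), length(trains)),
)
size(summary[(:cond, :avg, :RWBS, "susp")])
# => (2, 3)
```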
Deborah.RebekahMiriam.SummaryLoaderRebekahMiriam.load_miriam_summary_for_measurement — Function

load_miriam_summary_for_measurement(
work::String,
analysis_ensemble::String,
group_name::String,
overall_name::String,
labels::Vector{String},
trains::Vector{String},
ensembles::Vector{String}, # full ensemble names
multi_ensemble::String, # common prefix to strip
filetags::Vector{Symbol}, # e.g. [:T_BS, :T_JK, :T_P1, :T_P2] or [:Q_BS, :Q_JK, :Q_P1, :Q_P2]
fields::Vector{Symbol}, # e.g. [:kappa, :trM1, :trM2, :trM3, :trM4] or [:kappa_t, :Q1, :Q2, :Q3, :Q4]
jobid::Union{Nothing,String}=nothing
) -> Tuple{
Dict{Tuple{Symbol, Symbol, Symbol, String}, Array{Float64,2}},
Vector{String}
}

Load measurement results for all provided filetags (source-agnostic; no orig/pred split). For each (label, train, tag), this scans one .dat file row-wise and extracts observable (avg, err) values for all kappas implied by ensembles/multi_ensemble.
Directory and file layout: <work>/<analysis_ensemble>/<group_name>/<overall_name>_LBP_<label>_TRP_<train>/<tag>_<overall_name>_LBP_<label>_TRP_<train>.dat
Expected row format inside each file: kappa (val_key2 err_key2) (val_key3 err_key3) ...
Where fields = [:kappa or :kappa_t, :obs2, :obs3, ...]. Only fields[2:end] are stored as (avg, err) in the returned dictionary.
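The per-row extraction can be sketched as below. parse_measurement_row is a hypothetical helper, and the sketch assumes whitespace-separated values without literal parentheses:

```julia
# Sketch: split one row "kappa val2 err2 val3 err3 ..." into (avg, err)
# pairs keyed by fields[2:end]; fields[1] is the kappa column.
function parse_measurement_row(row::String, fields::Vector{Symbol})
    toks = split(row)
    vals = Dict{Symbol,NamedTuple{(:avg, :err),Tuple{Float64,Float64}}}()
    for (i, f) in enumerate(fields[2:end])
        vals[f] = (avg = parse(Float64, toks[2i]),
                   err = parse(Float64, toks[2i + 1]))
    end
    return String(toks[1]), vals
end
```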
Arguments
labels, trains: define the (row, col) axes for the output matrices.
ensembles, multi_ensemble: used to derive kappa_list::Vector{String} such as ["13570", ...].
filetags: complete set of measurement tags to load (e.g., [:T_BS, :T_JK, :T_P1, :T_P2]).
fields: first element must be :kappa or :kappa_t; the rest are observables to store.
Returns
A tuple:
Dict{(field, stat, tag, kappa_str) => Matrix{Float64}} of size (length(labels), length(trains)), where field $\in$ fields[2:end], stat $\in$ (:avg, :err), tag $\in$ filetags, and kappa_str is a token like "13580".
kappa_list::Vector{String}, in the same order used to fill the dictionary.
Example
summary_meas, kappa_list = load_miriam_summary_for_measurement(
work, analysis_ensemble, group_name, overall_name,
labels, trains, ensembles, multi_ensemble,
filetags, fields
)

Deborah.RebekahMiriam.SummaryLoaderRebekahMiriam.load_nlsolve_status_from_info — Method

load_nlsolve_status_from_info(
filepath::String;
solver_prefix::AbstractString = "nlsolve_f_solver_"
) -> Dict{String, NamedTuple{(:converged, :residual_norm, :iterations), Tuple{Bool, Float64, Int}}}

Parse a single infos_Miriam_...LBP_<label>_TRP_<train>.toml file and extract the NLsolve.jl results per solver section.
It scans tables under NLsolve.jl whose names start with solver_prefix (e.g., nlsolve_f_solver_FULL-LBOG-ULOG) and returns, for each such table, a named tuple (converged, residual_norm, iterations).
converged is parsed from the string "true"/"false" (case-insensitive).
residual_norm is parsed as Float64.
iterations is parsed as Int.
Missing fields are handled as:
converged → defaults to false
residual_norm → defaults to NaN
iterations → defaults to $-1$ (sentinel for unavailable)
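A sketch of the expected TOML shape: the table name and field names follow the description above, the string-encoded values reflect the documented parsing, but the exact file content (including the quoting of the NLsolve.jl table name and the numeric values shown) is an assumption:

```toml
["NLsolve.jl".nlsolve_f_solver_FULL-LBOG-ULOG]
converged = "true"
residual_norm = "3.2e-11"
iterations = "7"
```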
Deborah.RebekahMiriam.SummaryLoaderRebekahMiriam.load_rw_data — Method

load_rw_data(
filepaths::Dict{Symbol,String}
) -> Dict{Symbol,Dict{Symbol,Vector{Float64}}}

Parse reweighting data files and extract observable values with their associated errors.
Each file is expected to have rows like:

kappa cond cond_err susp susp_err skew skew_err kurt kurt_err bind bind_err
Lines marked with # are treated as headers. Lines following a # kappa_t marker are ignored (used to skip interpolation data).
Arguments
filepaths: Dictionary mapping tag symbols (e.g., :RWP1) to file paths.
Returns
A nested dictionary: tag => (observable => vector of Float64 values). For each tag, returns vectors for :cond, :cond_err, ..., :bind_err.
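The header-skipping and row-parsing logic described above could look roughly like this (a sketch under the documented row format, not the actual implementation; parse_rw_file is a hypothetical name):

```julia
# Sketch of parsing one reweighting file: skip '#' headers, stop collecting
# once a '# kappa_t' marker begins the interpolation block.
function parse_rw_file(lines::Vector{String})
    names = (:cond, :cond_err, :susp, :susp_err, :skew, :skew_err,
             :kurt, :kurt_err, :bind, :bind_err)
    out = Dict(n => Float64[] for n in names)
    for line in lines
        if startswith(line, "#")
            occursin("kappa_t", line) && break   # ignore interpolation data
            continue                             # ordinary header line
        end
        toks = split(line)
        length(toks) == 11 || continue           # kappa + 5 (value, error) pairs
        for (i, n) in enumerate(names)
            push!(out[n], parse(Float64, toks[i + 1]))  # toks[1] is kappa
        end
    end
    return out
end
```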