Compare commits

...

14 Commits

Author SHA1 Message Date
cf26983fe2 docs(version): version bump v0.7.6rc3.4 -> v0.7.6rc4.0
This bump includes all changes addressing referee comments
2026-04-20 12:43:52 -04:00
e704d5a1a7 feat(GridFire): updated 2026-04-20 12:42:23 -04:00
54d3ec9920 docs(gitignore): updated gitignore 2026-04-20 12:41:42 -04:00
b23f5a98c5 feat(examples): added FiPy example 2026-04-20 12:41:27 -04:00
bbd702904a feat(validation): added more of the scripts to make paper figures 2026-04-20 12:41:10 -04:00
3a22792fd1 fix(GridFire): changes based on ref report 2026-04-20 12:37:53 -04:00
f4d988fa25 feat(benchmarks): added memory and timing benchmarks 2026-04-15 08:23:34 -04:00
ec93720fa0 docs(gitignore): added trace and conanfile 2026-04-13 07:22:51 -04:00
5a1a904e71 refactor(GridFire): updated outputs 2026-04-13 07:19:18 -04:00
d1872cb65a docs(stubs): updated stubs 2026-04-13 07:18:41 -04:00
c311e4afbd test(vv): Added more scripts to verify GridFire behavior 2026-04-13 07:18:08 -04:00
84ff182717 feat(GridFire): Added a number of python hooks
Added Python hooks to make getting the base composition more reliable; additionally, a number of small changes were made to aid analysis in response to referee report 1.
2026-04-13 07:17:14 -04:00
65297852e5 feat(GF-Version): added auto version header
When building, the version number is now automatically injected into a header. This allows for more certainty as to which GF version is being used. Note that this is disabled when building the Python wheel, as there is no clear way to map this dynamically generated header into the wheel structure. This is not an issue, however, as the Python module has a separate __version__ variable.
2026-04-09 07:45:00 -04:00
45af511db2 fix(Config-Py-Bindings): CVODESolver->PointSolver in python bindings
CVODESolver was renamed to PointSolver in the C++ source; however, the Python source had not been updated. This has now been made consistent.

BREAKING CHANGE:
2026-04-09 07:42:28 -04:00
113 changed files with 133857 additions and 3655 deletions

.gitignore vendored (7 lines changed)
View File

@@ -107,6 +107,8 @@ glaze.wrap
tomlplusplus.wrap
.vscode/
*.trace/
conanfile.py
*.log
mpi-install-log.txt
@@ -130,3 +132,8 @@ meson-boost-test/
cross/python_includes
*.whl
*.pdf
*pynuc.txt
*.dat
*pynucastro_network.py

View File

@@ -48,7 +48,7 @@ PROJECT_NAME = GridFire
# could be handy for archiving the generated documentation or if some version
# control system is used.
PROJECT_NUMBER = v0.7.5rc3
PROJECT_NUMBER = v0.7.6rc4.0
# Using the PROJECT_BRIEF tag one can provide an optional one line description
# for a project that appears at the top of each page and should give viewers a

View File

@@ -152,25 +152,6 @@ the same for the other shared object file (make sure to count the duplicate rpat
We also include a script at `pip_install_mac_patch.sh` which will do this automatically for you.
## Automatic Build and Installation
### Script Build and Installation Instructions
The easiest way to build GridFire is using the `install.sh` or `install-tui.sh`
scripts in the root directory. To use these scripts, simply run:
```bash
./install.sh
# or
./install-tui.sh
```
The regular installation script will select a standard "ideal" set of build
options for you. If you want more control over the build options, you can use
the `install-tui.sh` script, which will provide a text-based user interface to
select the build options you want.
Generally, both are intended to be easy to use and will prompt you
automatically to install any missing dependencies.
### Currently known good platforms
The installation script has been tested and found to work on clean
installations of the following platforms:
@@ -179,11 +160,6 @@ installations of the following platforms:
- Ubuntu 25.04 (aarch64)
- Ubuntu 22.04 (X86_64)
> **Note:** On Ubuntu 22.04 the user needs to install the Boost libraries manually,
> as the versions in the Ubuntu repositories are too old. The installer
> automatically detects this and will instruct the user on how to do so.
## Manual Build Instructions
### Prerequisites
@@ -197,8 +173,6 @@ These only need to be manually installed if the user is not making use of the
- CMake 3.20 or newer
- ninja 1.10.0 or newer
- Python packages: `meson-python>=0.15.0`
- Boost libraries (>= 1.83.0) installed system-wide (or at least findable by
meson with pkg-config)
#### Optional
- dialog (used by the `install.sh` script, not needed if using pip or meson
@@ -206,15 +180,10 @@ These only need to be manually installed if the user is not making use of the
- pip (used by the `install.sh` script or by calling pip directly, not needed
if using meson directly)
> **Note:** Boost is the only external library dependency used by GridFire directly.
> **Note:** Windows is not supported at this time and *there are no plans to
> support it in the future*. Windows users are encouraged to use WSL2 or a
> Linux VM.
> **Note:** If `install-tui.sh` is not able to find a usable version of boost
> it will provide directions to fetch, compile, and install a usable version.
### Install Scripts
GridFire ships with an installer (`install.sh`) which is intended to make the
process of installation both easier and more repeatable.
@@ -447,7 +416,6 @@ likely to be one of adding new `EngineViews`.
| View Name | Purpose | Algorithm / Reference | When to Use |
|----------------------------------|-----------------------------------------------------------------------------------|------------------------------------------------------------------------------------------------------------|-------------------------------------------------------------------|
| AdaptiveEngineView | Dynamically culls low-flow species and reactions during runtime | Iterative flux thresholding to remove reactions below a flow threshold | Large networks to reduce computational cost |
| DefinedEngineView | Restricts the network to a user-specified subset of species and reactions | Static network masking based on user-provided species/reaction lists | Targeted pathway studies or code-to-code comparisons |
| FileDefinedEngineView            | Loads a defined engine view from a file using a parser                            | Same as DefinedEngineView but loads from a file                                                              | Same as DefinedEngineView                                          |
| MultiscalePartitioningEngineView | Partitions the network into fast and slow subsets based on reaction timescales | Network partitioning following Hix & Thielemann Silicon Burning I & II (DOI:10.1086/177016,10.1086/306692) | Stiff, multi-scale networks requiring tailored integration |
@@ -523,7 +491,7 @@ A `NetOut` struct contains
- The total specific energy lost to neutrinos while evolving to `tMax` (`NetOut::total_neutrino_loss`) [erg/g]
- The total flux of neutrinos while evolving to `tMax` (`NetOut::total_neutrino_flux`)
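For Python users these totals are read as attributes of the returned object. The dataclass below is a hypothetical mirror of the two fields listed above, for illustration only; the actual binding exposes the C++ `NetOut` struct, so the names here are taken from the list, not from inspecting the module:

```python
from dataclasses import dataclass

@dataclass
class NetOutSketch:
    """Hypothetical stand-in for the NetOut fields described above."""
    total_neutrino_loss: float = 0.0  # specific energy lost to neutrinos while evolving to tMax [erg/g]
    total_neutrino_flux: float = 0.0  # total neutrino flux over the same interval

# After an evaluation, downstream code would read these totals directly.
out = NetOutSketch(total_neutrino_loss=3.2e12, total_neutrino_flux=1.1e10)
```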
### CVODESolverStrategy
### PointSolver
We use the CVODE module from [SUNDIALS](https://computing.llnl.gov/projects/sundials/cvode) as our primary numerical
solver. Specifically, we use CVODE's BDF linear multistep method, which includes advanced adaptive timestepping.
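As a heavily simplified illustration of the implicit multistep idea behind BDF (a sketch only; `PointSolver` delegates the real adaptive-order, adaptive-step integration to SUNDIALS CVODE), consider first-order BDF, i.e. backward Euler, on a stiff linear decay problem:

```python
import math

def bdf1_decay(k: float, y0: float, t_end: float, n_steps: int) -> float:
    """Integrate y' = -k*y with BDF1 (backward Euler).

    The implicit update y_{n+1} = y_n + h * f(y_{n+1}) has the closed-form
    solution y_{n+1} = y_n / (1 + k*h) for this linear right-hand side.
    """
    h = t_end / n_steps
    y = y0
    for _ in range(n_steps):
        y = y / (1.0 + k * h)
    return y

# Stiff regime: k*h = 1000, yet backward Euler stays bounded and positive.
y_stiff = bdf1_decay(k=1e4, y0=1.0, t_end=1.0, n_steps=10)

# With a small step the result converges to the exact solution exp(-k*t).
y_accurate = bdf1_decay(k=1.0, y0=1.0, t_end=1.0, n_steps=1000)
```

Forward (explicit) Euler with the same stiff parameters would diverge, which is the motivation for an implicit solver in reaction networks.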

benchmarks/Memory/main.cpp Normal file (151 lines)
View File

@@ -0,0 +1,151 @@
// ReSharper disable CppUnusedIncludeDirective
#include <iostream>
#include <fstream>
#include <chrono>
#include <thread>
#include <format>
#include <print>
#include <memory>
#include <cppad/utility/thread_alloc.hpp> // Required for parallel_setup
#include "fourdst/logging/logging.h"
#include "fourdst/atomic/species.h"
#include "fourdst/composition/utils.h"
#include "quill/Logger.h"
#include "quill/Backend.h"
#include "CLI/CLI.hpp"
#include <clocale>
#include "gridfire/gridfire.h"
#include "fourdst/composition/composition.h"
#include "gridfire/utils/gf_omp.h"
#include <atomic>
#include <new>
#include <cstdlib>
static std::atomic<size_t> g_allocated_bytes{0};
void* operator new(std::size_t size) {
g_allocated_bytes += size;
if (void* ptr = std::malloc(size)) {
return ptr;
}
throw std::bad_alloc();
}
void operator delete(void* ptr, std::size_t size) {
g_allocated_bytes -= size;
std::free(ptr);
}
void operator delete(void* ptr) {
std::free(ptr);
}
struct MemoryScopeTracker {
size_t start_bytes;
MemoryScopeTracker() : start_bytes(g_allocated_bytes.load()) {}
size_t bytes_allocated() const {
return g_allocated_bytes.load() - start_bytes;
}
void reset_tracking() {
start_bytes = 0;
g_allocated_bytes = 0;
}
};
static std::terminate_handler g_previousHandler = nullptr;
void quill_terminate_handler();
gridfire::NetIn init(const double temp, const double rho, const double tMax) {
std::setlocale(LC_ALL, "");
g_previousHandler = std::set_terminate(quill_terminate_handler);
quill::Logger* logger = fourdst::logging::LogManager::getInstance().getLogger("log");
logger->set_log_level(quill::LogLevel::Info);
using namespace gridfire;
const std::vector<double> X = {0.7081145999999999, 2.94e-5, 0.276, 0.003, 0.0011, 9.62e-3, 1.62e-3, 5.16e-4};
const std::vector<std::string> symbols = {"H-1", "He-3", "He-4", "C-12", "N-14", "O-16", "Ne-20", "Mg-24"};
const fourdst::composition::Composition composition = fourdst::composition::buildCompositionFromMassFractions(symbols, X);
NetIn netIn;
netIn.composition = composition;
netIn.temperature = temp;
netIn.density = rho;
netIn.energy = 0;
netIn.tMax = tMax;
netIn.dt0 = 1e-12;
return netIn;
}
void quill_terminate_handler()
{
quill::Backend::stop();
if (g_previousHandler)
g_previousHandler();
else
std::abort();
}
int main(int argc, char* argv[]) {
using namespace gridfire;
double temp = 1.5e7;
double rho = 1.6e2;
double tMax = 3e17;
std::string output_filename = "gf_mem.csv";
CLI::App app("GridFire Memory Benchmarks");
app.add_option("--temperature", temp, "Temperature in K")->default_val(std::format("{:5.2E}", temp));
app.add_option("--density", rho, "Density in g/cm^3")->default_val(std::format("{:5.2E}", rho));
app.add_option("--tmax", tMax, "Maximum time in seconds")->default_val(std::format("{:5.2E}", tMax));
app.add_option("--output", output_filename, "Output filename for intermediate results")->default_val("gf_mem.csv");
CLI11_PARSE(app, argc, argv);
const NetIn netIn = init(temp, rho, tMax);
std::unique_ptr<engine::GraphEngine> engine;
size_t prev_reactions = 0;
size_t prev_species = 0;
engine = std::make_unique<engine::GraphEngine>(netIn.composition, 1);
MemoryScopeTracker tracker;
std::ofstream mem_file(output_filename, std::ios::out);
mem_file << "depth,species,reactions,engine_memory_bytes,solver_memory_bytes\n";
for (int depth = 1; depth <= 100; depth++) {
tracker.reset_tracking();
engine = std::make_unique<engine::GraphEngine>(netIn.composition, depth);
auto blob = engine->constructStateBlob();
size_t engine_usage = tracker.bytes_allocated();
size_t current_num_species = engine->getNetworkSpecies(*blob).size();
size_t current_num_reactions = engine->getNetworkReactions(*blob).size();
if (prev_reactions == current_num_reactions && prev_species == current_num_species) {
std::println("Found end of useful graph traversal at a depth of {}", depth);
break;
}
tracker.reset_tracking();
const solver::PointSolver localSolver(*engine);
solver::PointSolverContext solverCtx(*blob);
size_t solver_usage = tracker.bytes_allocated();
mem_file << std::format("{},{},{},{},{}\n", depth, current_num_species, current_num_reactions, engine_usage, solver_usage);
prev_reactions = current_num_reactions;
prev_species = current_num_species;
}
mem_file.close();
std::println("Memory benchmark results written to {}", output_filename);
}

View File

@@ -0,0 +1,5 @@
executable(
'gf_bench_memory',
'main.cpp',
dependencies: [gridfire_dep, cli11_dep],
)

View File

@@ -0,0 +1,134 @@
// ReSharper disable CppUnusedIncludeDirective
#include <iostream>
#include <fstream>
#include <chrono>
#include <thread>
#include <format>
#include <print>
#include <array>
#include <vector>
#include <ranges>
#include <cmath>
#include <concepts>
#include <limits>
#include "gridfire/gridfire.h"
#include <cppad/utility/thread_alloc.hpp> // Required for parallel_setup
#include "fourdst/composition/composition.h"
#include "fourdst/logging/logging.h"
#include "fourdst/atomic/species.h"
#include "fourdst/composition/utils.h"
#include "quill/Logger.h"
#include "quill/Backend.h"
#include <clocale>
#include "gridfire/reaction/reaclib.h"
#include "gridfire/utils/gf_omp.h"
template <std::floating_point T>
[[nodiscard]] constexpr auto linspace(T start, T end, std::size_t num_points) -> std::vector<T> {
if (num_points == 0) {
return {};
}
if (num_points == 1) {
return {start};
}
return std::views::iota(0uz, num_points)
| std::views::transform([=](std::size_t i) -> T {
const T t = static_cast<T>(i) / static_cast<T>(num_points - 1);
return std::lerp(start, end, t);
})
| std::ranges::to<std::vector<T>>();
}
gridfire::NetIn init(const double temp, const double rho, const double tMax) {
std::setlocale(LC_ALL, "");
quill::Logger* logger = fourdst::logging::LogManager::getInstance().getLogger("log");
logger->set_log_level(quill::LogLevel::TraceL2);
using namespace gridfire;
const std::vector<double> X = {0.7081145999999999, 2.94e-5, 0.276, 0.003, 0.0011, 9.62e-3, 1.62e-3, 5.16e-4};
const std::vector<std::string> symbols = {"H-1", "He-3", "He-4", "C-12", "N-14", "O-16", "Ne-20", "Mg-24"};
const fourdst::composition::Composition composition = fourdst::composition::buildCompositionFromMassFractions(symbols, X);
NetIn netIn;
netIn.composition = composition;
netIn.temperature = temp;
netIn.density = rho;
netIn.energy = 0;
netIn.tMax = tMax;
netIn.dt0 = 1e-12;
return netIn;
}
int main() {
GF_PAR_INIT()
using namespace gridfire;
constexpr double temp_init = 1.5e7;
constexpr double rho_init = 1.5e2;
constexpr double tMax = 3.1536e+12;
NetIn netIn = init(temp_init, rho_init, tMax);
policy::MainSequencePolicy stellarPolicy(netIn.composition);
const policy::ConstructionResults construct = stellarPolicy.construct();
std::println("Sandbox Engine Stack: {}", stellarPolicy);
std::println("Scratch Blob State: {}", *construct.scratch_blob);
// arrays to store timings
// Total number of interpolated data points
constexpr size_t N = 20;
std::array<double, N*N> eval_times{};
auto density = linspace(10.0, 5.0e2, N);
auto temperature = linspace(4e6,3e7, N);
solver::PointSolverContext solverCtx(*construct.scratch_blob);
solverCtx.set_stdout_logging(false);
solver::PointSolver solver(construct.engine);
auto startTime = std::chrono::high_resolution_clock::now();
size_t i = 0;
for (const auto temp : temperature) {
for (const auto dens : density) {
std::println("Evaluation {:3}/{:5} ({:3.0f}%): ρ = {:10.4E}, T = {:10.4E}", i + 1, N*N, 100.0*((static_cast<double>(i)+1.0)/(N*N)), dens, temp);
netIn.temperature = temp;
netIn.density = dens;
try {
auto start_eval_time = std::chrono::high_resolution_clock::now();
const NetOut netOut = solver.evaluate(solverCtx, netIn);
auto end_eval_time = std::chrono::high_resolution_clock::now();
std::chrono::duration<double> eval_elapsed = end_eval_time - start_eval_time;
eval_times[i] = eval_elapsed.count();
} catch (const gridfire::exceptions::GridFireError& e) {
std::cerr << "Error during evaluation " << (i + 1) << ": " << e.what() << std::endl;
eval_times[i] = std::numeric_limits<double>::quiet_NaN();
}
i++;
}
}
auto endTime = std::chrono::high_resolution_clock::now();
std::println("Total time for {} evaluations: {:.3f} seconds", N*N, std::chrono::duration<double>(endTime - startTime).count());
for (size_t j = 0; j < static_cast<size_t>(N*N); ++j) {
std::println("Evaluation {}: {} seconds", j + 1, eval_times[j]);
}
std::ofstream outfile("gf_wall_vs_temp_results.csv");
outfile << "Evaluation,Density,Temperature,TimeSeconds\n";
size_t j = 0;
for (const auto temp: temperature) {
for (const auto dens: density ) {
outfile << (j + 1) << "," << dens << ","<< temp << "," << eval_times[j] << "\n";
j++;
}
}
}

View File

@@ -15,11 +15,15 @@
#include "quill/Logger.h"
#include "quill/Backend.h"
#include "nlohmann/json.hpp"
#include <clocale>
#include <sys/utsname.h>
#include "gridfire/reaction/reaclib.h"
#include "gridfire/utils/gf_omp.h"
#include "gridfire/utils/config.h"
gridfire::NetIn init(const double temp, const double rho, const double tMax) {
std::setlocale(LC_ALL, "");
@@ -63,97 +67,157 @@ int main() {
std::println("Scratch Blob State: {}", *construct.scratch_blob);
constexpr size_t runs = 10;
auto startTime = std::chrono::high_resolution_clock::now();
constexpr size_t runs = 100;
nlohmann::json results;
nlohmann::json metadata;
// arrays to store timings
std::array<std::chrono::duration<double>, runs> setup_times;
std::array<std::chrono::duration<double>, runs> eval_times;
std::array<NetOut, runs> serial_results;
for (size_t i = 0; i < runs; ++i) {
auto start_setup_time = std::chrono::high_resolution_clock::now();
solver::PointSolverContext solverCtx(*construct.scratch_blob);
solverCtx.set_stdout_logging(false);
solver::PointSolver solver(construct.engine);
auto end_setup_time = std::chrono::high_resolution_clock::now();
std::chrono::duration<double> setup_elapsed = end_setup_time - start_setup_time;
setup_times[i] = setup_elapsed;
const auto now = std::chrono::system_clock::now();
std::string now_str = std::format("{:%Y-%m-%d %H:%M:%S}", now);
auto start_eval_time = std::chrono::high_resolution_clock::now();
const NetOut netOut = solver.evaluate(solverCtx, netIn);
auto end_eval_time = std::chrono::high_resolution_clock::now();
serial_results[i] = netOut;
std::chrono::duration<double> eval_elapsed = end_eval_time - start_eval_time;
eval_times[i] = eval_elapsed;
}
auto endTime = std::chrono::high_resolution_clock::now();
std::chrono::duration<double> elapsed = endTime - startTime;
std::println("");
metadata["Datetime"] = now_str;
metadata["GF_Version"] = version::toString();
// Summarize serial timings
double total_setup_time = 0.0;
double total_eval_time = 0.0;
for (size_t i = 0; i < runs; ++i) {
total_setup_time += setup_times[i].count();
total_eval_time += eval_times[i].count();
}
std::println("Average Setup Time over {} runs: {:.6f} seconds", runs, total_setup_time / runs);
std::println("Average Evaluation Time over {} runs: {:.6f} seconds", runs, total_eval_time / runs);
std::println("Total Time for {} runs: {:.6f} seconds", runs, elapsed.count());
std::array<NetOut, runs> parallelResults;
std::array<std::chrono::duration<double>, runs> setupTimes;
std::array<std::chrono::duration<double>, runs> evalTimes;
std::array<std::unique_ptr<gridfire::engine::scratch::StateBlob>, runs> workspaces;
for (size_t i = 0; i < runs; ++i) {
workspaces[i] = construct.scratch_blob->clone_structure();
utsname buffer{};
if (uname(&buffer) == 0) {
std::string osName = buffer.sysname;
#ifdef __APPLE__
if (osName == "Darwin") osName = "macOS";
#endif
metadata["OS"] = osName;
metadata["OS Version"] = buffer.release;
metadata["Architecture"] = buffer.machine;
} else {
metadata["OS"] = "Unknown";
}
#if defined(__clang__)
metadata["Compiler"] = "Clang " __clang_version__;
#elif defined(__GNUC__)
metadata["Compiler"] = "GCC " __VERSION__;
#else
metadata["Compiler"] = "Unknown";
#endif
// Parallel runs
startTime = std::chrono::high_resolution_clock::now();
// metadata["Threads"] = omp_get_max_threads();
metadata["Runs"] = runs;
metadata["Temperature"] = temp;
metadata["Density"] = rho;
metadata["tMax_per_run_s"] = tMax;
GF_OMP(parallel for, for (size_t i = 0; i < runs; ++i)) {
auto start_setup_time = std::chrono::high_resolution_clock::now();
solver::PointSolverContext solverCtx(*construct.scratch_blob);
solverCtx.set_stdout_logging(false);
solver::PointSolver solver(construct.engine);
auto end_setup_time = std::chrono::high_resolution_clock::now();
std::chrono::duration<double> setup_elapsed = end_setup_time - start_setup_time;
setupTimes[i] = setup_elapsed;
auto start_eval_time = std::chrono::high_resolution_clock::now();
parallelResults[i] = solver.evaluate(solverCtx, netIn);
auto end_eval_time = std::chrono::high_resolution_clock::now();
std::chrono::duration<double> eval_elapsed = end_eval_time - start_eval_time;
evalTimes[i] = eval_elapsed;
}
endTime = std::chrono::high_resolution_clock::now();
elapsed = endTime - startTime;
std::println("");
results["Metadata"] = metadata;
// Summarize parallel timings
total_setup_time = 0.0;
total_eval_time = 0.0;
for (size_t i = 0; i < runs; ++i) {
total_setup_time += setupTimes[i].count();
total_eval_time += evalTimes[i].count();
for (size_t rID = 0; rID < runs; rID++) {
nlohmann::json run_result;
nlohmann::json run_metadata;
run_metadata["num_zones"] = rID;
run_result["metadata"] = run_metadata;
auto startTime = std::chrono::high_resolution_clock::now();
// arrays to store timings
std::array<std::chrono::duration<double>, runs> setup_times{};
std::array<std::chrono::duration<double>, runs> eval_times{};
std::array<NetOut, runs> serial_results;
for (size_t i = 0; i < rID; ++i) {
auto start_setup_time = std::chrono::high_resolution_clock::now();
solver::PointSolverContext solverCtx(*construct.scratch_blob);
solverCtx.set_stdout_logging(false);
solver::PointSolver solver(construct.engine);
auto end_setup_time = std::chrono::high_resolution_clock::now();
std::chrono::duration<double> setup_elapsed = end_setup_time - start_setup_time;
setup_times[i] = setup_elapsed;
auto start_eval_time = std::chrono::high_resolution_clock::now();
const NetOut netOut = solver.evaluate(solverCtx, netIn);
auto end_eval_time = std::chrono::high_resolution_clock::now();
serial_results[i] = netOut;
std::chrono::duration<double> eval_elapsed = end_eval_time - start_eval_time;
eval_times[i] = eval_elapsed;
}
auto endTime = std::chrono::high_resolution_clock::now();
std::chrono::duration<double> elapsed = endTime - startTime;
std::println("");
nlohmann::json point_solver_time_results;
point_solver_time_results["total_time_s"] = elapsed.count();
run_result["Serial"] = point_solver_time_results;
// Summarize serial timings
double total_setup_time = 0.0;
double total_eval_time = 0.0;
for (size_t i = 0; i < rID; ++i) {
total_setup_time += setup_times[i].count();
total_eval_time += eval_times[i].count();
}
std::println("Average Setup Time over {} runs: {:.6f} seconds", runs, total_setup_time / runs);
std::println("Average Evaluation Time over {} runs: {:.6f} seconds", runs, total_eval_time / runs);
std::println("Total Time for {} runs: {:.6f} seconds", runs, elapsed.count());
std::array<NetOut, runs> parallelResults;
std::array<std::chrono::duration<double>, runs> setupTimes;
std::array<std::chrono::duration<double>, runs> evalTimes;
std::array<std::unique_ptr<gridfire::engine::scratch::StateBlob>, runs> workspaces;
for (size_t i = 0; i < rID; ++i) {
workspaces[i] = construct.scratch_blob->clone_structure();
}
// Parallel runs
startTime = std::chrono::high_resolution_clock::now();
GF_OMP(parallel for, for (size_t i = 0; i < rID; ++i)) {
auto start_setup_time = std::chrono::high_resolution_clock::now();
solver::PointSolverContext solverCtx(*workspaces[i]);
solverCtx.set_stdout_logging(false);
solver::PointSolver solver(construct.engine);
auto end_setup_time = std::chrono::high_resolution_clock::now();
std::chrono::duration<double> setup_elapsed = end_setup_time - start_setup_time;
setupTimes[i] = setup_elapsed;
auto start_eval_time = std::chrono::high_resolution_clock::now();
parallelResults[i] = solver.evaluate(solverCtx, netIn);
auto end_eval_time = std::chrono::high_resolution_clock::now();
std::chrono::duration<double> eval_elapsed = end_eval_time - start_eval_time;
evalTimes[i] = eval_elapsed;
}
endTime = std::chrono::high_resolution_clock::now();
elapsed = endTime - startTime;
std::println("");
nlohmann::json grid_solver_results;
grid_solver_results["total_time_s"] = elapsed.count();
run_result["Parallel"] = grid_solver_results;
// Summarize parallel timings
total_setup_time = 0.0;
total_eval_time = 0.0;
for (size_t i = 0; i < runs; ++i) {
total_setup_time += setupTimes[i].count();
total_eval_time += evalTimes[i].count();
}
std::println("Average Parallel Setup Time over {} runs: {:.6f} seconds", runs, total_setup_time / runs);
std::println("Average Parallel Evaluation Time over {} runs: {:.6f} seconds", runs, total_eval_time / runs);
std::println("Total Parallel Time for {} runs: {:.6f} seconds", runs, elapsed.count());
std::println("========== Summary ==========");
std::println("Serial Runs:");
std::println(" Average Setup Time: {:.6f} seconds", total_setup_time / runs);
std::println(" Average Evaluation Time: {:.6f} seconds", total_eval_time / runs);
std::println("Parallel Runs:");
std::println(" Average Setup Time: {:.6f} seconds", total_setup_time / runs);
std::println(" Average Evaluation Time: {:.6f} seconds", total_eval_time / runs);
std::println("Difference:");
std::println(" Setup Time Difference: {:.6f} seconds", (total_setup_time / runs) - (total_setup_time / runs));
std::println(" Evaluation Time Difference: {:.6f} seconds", (total_eval_time / runs) - (total_eval_time / runs));
std::println(" Setup Time Fractional Difference: {:.2f}%", ((total_setup_time / runs) - (total_setup_time / runs)) / (total_setup_time / runs) * 100.0);
std::println(" Evaluation Time Fractional Difference: {:.2f}%", ((total_eval_time / runs) - (total_eval_time / runs)) / (total_eval_time / runs) * 100.0);
results[std::format("Run_{}", rID)] = run_result;
}
std::println("Average Parallel Setup Time over {} runs: {:.6f} seconds", runs, total_setup_time / runs);
std::println("Average Parallel Evaluation Time over {} runs: {:.6f} seconds", runs, total_eval_time / runs);
std::println("Total Parallel Time for {} runs: {:.6f} seconds", runs, elapsed.count());
std::println("========== Summary ==========");
std::println("Serial Runs:");
std::println(" Average Setup Time: {:.6f} seconds", total_setup_time / runs);
std::println(" Average Evaluation Time: {:.6f} seconds", total_eval_time / runs);
std::println("Parallel Runs:");
std::println(" Average Setup Time: {:.6f} seconds", total_setup_time / runs);
std::println(" Average Evaluation Time: {:.6f} seconds", total_eval_time / runs);
std::println("Difference:");
std::println(" Setup Time Difference: {:.6f} seconds", (total_setup_time / runs) - (total_setup_time / runs));
std::println(" Evaluation Time Difference: {:.6f} seconds", (total_eval_time / runs) - (total_eval_time / runs));
std::println(" Setup Time Fractional Difference: {:.2f}%", ((total_setup_time / runs) - (total_setup_time / runs)) / (total_setup_time / runs) * 100.0);
std::println(" Evaluation Time Fractional Difference: {:.2f}%", ((total_eval_time / runs) - (total_eval_time / runs)) / (total_eval_time / runs) * 100.0);
}
std::ofstream o("gf_single_zone_solver_benchmark_results.json");
o << std::setw(4) << results << std::endl;
o.close();
}

View File

@@ -3,3 +3,9 @@ executable(
'main.cpp',
dependencies: [gridfire_dep],
)
executable(
'gf_wall_vs_temp',
'gf_wall_vs_temp.cpp',
dependencies: [gridfire_dep]
)

benchmarks/Timing/main.cpp Normal file (147 lines)
View File

@@ -0,0 +1,147 @@
// ReSharper disable CppUnusedIncludeDirective
#include <iostream>
#include <fstream>
#include <chrono>
#include <thread>
#include <format>
#include <print>
#include <memory>
#include <vector>
#include "fourdst/logging/logging.h"
#include "fourdst/atomic/species.h"
#include "fourdst/composition/utils.h"
#include "quill/Logger.h"
#include "quill/Backend.h"
#include "CLI/CLI.hpp"
#include <clocale>
#include "gridfire/gridfire.h"
#include "fourdst/composition/composition.h"
#include "gridfire/utils/gf_omp.h"
#include <atomic>
#include <new>
#include <cstdlib>
static std::terminate_handler g_previousHandler = nullptr;
void quill_terminate_handler();
gridfire::NetIn init(const double temp, const double rho, const double tMax) {
std::setlocale(LC_ALL, "");
g_previousHandler = std::set_terminate(quill_terminate_handler);
quill::Logger* logger = fourdst::logging::LogManager::getInstance().getLogger("log");
logger->set_log_level(quill::LogLevel::Info);
using namespace gridfire;
const std::vector<double> X = {0.7081145999999999, 2.94e-5, 0.276, 0.003, 0.0011, 9.62e-3, 1.62e-3, 5.16e-4};
const std::vector<std::string> symbols = {"H-1", "He-3", "He-4", "C-12", "N-14", "O-16", "Ne-20", "Mg-24"};
const fourdst::composition::Composition composition = fourdst::composition::buildCompositionFromMassFractions(symbols, X);
NetIn netIn;
netIn.composition = composition;
netIn.temperature = temp;
netIn.density = rho;
netIn.energy = 0;
netIn.tMax = tMax;
netIn.dt0 = 1e-12;
return netIn;
}
void quill_terminate_handler()
{
quill::Backend::stop();
if (g_previousHandler)
g_previousHandler();
else
std::abort();
}
int main(int argc, char* argv[]) {
using namespace gridfire;
double temp = 1.5e7;
double rho = 1.6e2;
double tMax = 3e17;
std::string output_filename = "gridfire_timings.csv";
CLI::App app("GridFire Timing Benchmarks");
app.add_option("--temperature", temp, "Temperature in K")->default_val(std::format("{:5.2E}", temp));
app.add_option("--density", rho, "Density in g/cm^3")->default_val(std::format("{:5.2E}", rho));
app.add_option("--tmax", tMax, "Maximum time in seconds")->default_val(std::format("{:5.2E}", tMax));
app.add_option("--output", output_filename, "Output filename for intermediate results")->default_val("gridfire_timings.csv");
CLI11_PARSE(app, argc, argv);
const NetIn netIn = init(temp, rho, tMax);
std::unique_ptr<engine::GraphEngine> engine;
struct TimingInfo {
int depth;
size_t num_reactions;
size_t num_species;
double timing;
};
std::vector<TimingInfo> timings;
size_t prev_reactions = 0;
size_t prev_species = 0;
engine = std::make_unique<engine::GraphEngine>(netIn.composition, 1);
for (int depth = 1; depth <= 100; depth++) {
engine = std::make_unique<engine::GraphEngine>(netIn.composition, depth);
auto blob = engine->constructStateBlob();
TimingInfo info;
info.depth = depth;
info.num_species = engine->getNetworkSpecies(*blob).size();
info.num_reactions = engine->getNetworkReactions(*blob).size();
if (prev_reactions == info.num_reactions && prev_species == info.num_species) {
std::println("Found end of useful graph traversal at a depth of {}", depth);
break;
}
const solver::PointSolver localSolver(*engine);
solver::PointSolverContext solverCtx(*blob);
solverCtx.stdout_logging = true;
try {
auto start = std::chrono::high_resolution_clock::now();
auto result = localSolver.evaluate(solverCtx, netIn, false, false);
auto end = std::chrono::high_resolution_clock::now();
double seconds = std::chrono::duration<double>(end - start).count();
info.timing = seconds;
prev_reactions = info.num_reactions;
prev_species = info.num_species;
timings.push_back(info);
} catch (const gridfire::exceptions::CVODESolverFailureError&) {
continue;
}
}
std::ofstream csvFile(output_filename, std::ios::out);
csvFile << std::format("# Temperature (K): {}\n", temp);
csvFile << std::format("# Density: {}\n", rho);
csvFile << std::format("# TMax: {}\n", tMax);
csvFile << "depth,reactions,species,time\n";
for (const auto& [depth, numReactions, numSpecies, seconds]: timings) {
std::string line = std::format("{},{},{},{}\n", depth, numReactions, numSpecies, seconds);
csvFile << line;
}
csvFile.close();
std::println("Timing benchmark results written to {}", output_filename);
}

View File

@@ -0,0 +1,5 @@
executable(
'gf_bench_timing',
'main.cpp',
dependencies: [gridfire_dep, cli11_dep],
)

View File

@@ -1,3 +1,5 @@
if get_option('build_benchmarks')
subdir('SingleZoneSolver')
subdir('Memory')
subdir('Timing')
endif

View File

@@ -8,7 +8,8 @@ cvode_cmake_options.add_cmake_defines({
'BUILD_STATIC_LIBS' : 'ON',
'EXAMPLES_ENABLE_C' : 'OFF',
'CMAKE_POSITION_INDEPENDENT_CODE': true,
'CMAKE_PLATFORM_NO_VERSIONED_SONAME': 'ON'
'CMAKE_PLATFORM_NO_VERSIONED_SONAME': 'ON',
'SUNDIALS_LOGGING_LEVEL': 1
})

View File

@@ -9,7 +9,8 @@ kinsol_cmake_options.add_cmake_defines({
'BUILD_STATIC_LIBS' : 'ON',
'EXAMPLES_ENABLE_C' : 'OFF',
'CMAKE_POSITION_INDEPENDENT_CODE': true,
'CMAKE_PLATFORM_NO_VERSIONED_SONAME': 'ON'
'CMAKE_PLATFORM_NO_VERSIONED_SONAME': 'ON',
'SUNDIALS_LOGGING_LEVEL': 1
})
kinsol_cmake_options.add_cmake_defines({

examples/fipy/readme.md Normal file (4 lines)
View File

@@ -0,0 +1,4 @@
# Example Diffusion
A simple GridFire case that uses FiPy for spatial diffusion.
To run it, you must have matplotlib, gridfire, and fipy installed in your Python environment.
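The diffusion step that FiPy handles in this example can be sketched without FiPy at all. The explicit update below is an illustrative stand-in (FiPy's `DiffusionTerm` solves the implicit form on its own mesh); the grid size, coefficient, and time step here are arbitrary assumptions:

```python
# Sketch of 1-D diffusion on a uniform grid, analogous to the FiPy
# DiffusionTerm used in run.py (which solves it implicitly).
def diffuse_step(u, D, dx, dt):
    """One explicit FTCS update of du/dt = D * d2u/dx2 with no-flux ends."""
    n = len(u)
    out = u[:]
    for i in range(n):
        left = u[i - 1] if i > 0 else u[i]       # zero-gradient boundary
        right = u[i + 1] if i < n - 1 else u[i]
        out[i] = u[i] + D * dt / dx**2 * (left - 2*u[i] + right)
    return out

u = [0.0] * 100
u[50] = 1.0  # localized spike, like the Gaussian temperature bump in run.py
for _ in range(200):
    u = diffuse_step(u, D=1.0, dx=1.0, dt=0.2)  # dt <= dx^2/(2D) for stability
```

The no-flux boundaries conserve the total abundance exactly, which is the behavior the FiPy example relies on for the species fields.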

examples/fipy/run.py Normal file (155 lines)
View File

@@ -0,0 +1,155 @@
import numpy as np
import fipy as fp
import gridfire as gf
import matplotlib.pyplot as plt
from gridfire.type import NetIn
import fourdst.composition
def main():
Y_solar = [7.0262E-01, 9.7479E-06, 6.8955E-02, 2.5000E-04, 7.8554E-05, 6.0144E-04, 8.1031E-05, 2.1513E-05]
S = ["H-1", "He-3", "He-4", "C-12", "N-14", "O-16", "Ne-20", "Mg-24"]
base_comp = fourdst.composition.Composition(S, Y_solar)
stellar_policy = gf.policy.MainSequencePolicy(base_comp)
construct = stellar_policy.construct()
point_solver = gf.solver.PointSolver(construct.engine)
grid_solver = gf.solver.GridSolver(construct.engine, point_solver)
solver_ctx = gf.solver.GridSolverContext(construct.scratch_blob)
solver_ctx.zone_completion_logging = False
species_list = construct.engine.getNetworkSpecies(construct.scratch_blob)
plot_species = ["H-1", "He-4", "C-12", "O-16", "N-14", "Mg-24"]
nx = 100
dx = 1.0
mesh = fp.Grid1D(nx=nx, dx=dx)
center_idx = nx // 2
T9 = fp.CellVariable(name="Temperature (T9)", mesh=mesh, value=0.015)
x = mesh.cellCenters[0]
T9.setValue(0.015 + 0.005 * fp.numerix.exp(-((x - (nx*dx/2))**2) / 50.0))
species_vars = {}
for i, sp in enumerate(species_list):
sp_name = sp.name()
val = base_comp.getMolarAbundance(sp) if base_comp.contains(sp) else 0.0
species_vars[sp_name] = fp.CellVariable(name=sp_name, mesh=mesh, value=val)
D_T = 1e4
D_Y = 1e2
eq_T = fp.TransientTerm() == fp.DiffusionTerm(coeff=D_T)
eqs_Y = {sp_name: fp.TransientTerm() == fp.DiffusionTerm(coeff=D_Y) for sp_name in species_vars}
dt = 0.1
t_final = 3e14
t = 0.0
rho_const = 160.0
cp_const = 2.0e8
plt.ion()
fig, (ax1, ax2, ax3) = plt.subplots(3, 1, figsize=(10, 12))
line_T, = ax1.plot(mesh.cellCenters[0], T9.value, label="T9 (Billion K)", color='red')
ax1.set_ylabel("Temperature (T9)")
ax1.legend()
lines_Y = {}
colors = plt.cm.tab10(np.linspace(0, 1, len(plot_species)))
for sp_name, color in zip(plot_species, colors):
if sp_name in species_vars:
lines_Y[sp_name], = ax2.plot(mesh.cellCenters[0], species_vars[sp_name].value, label=sp_name, color=color)
ax2.set_ylabel("Molar Abundance")
ax2.set_xlabel("Zone Index")
ax2.set_yscale('log')
ax2.set_ylim(1e-12, 1.5)
ax2.legend(loc='center left', bbox_to_anchor=(1, 0.5))
time_history = []
history_vars = {sp: [] for sp in plot_species}
lines_time = {}
for sp_name, color in zip(plot_species, colors):
lines_time[sp_name], = ax3.plot([], [], label=f"Total {sp_name}", color=color, linewidth=2)
ax3.set_xlabel("Time (s)")
ax3.set_ylabel("Total Moles (mol/cm$^2$)")
ax3.set_xscale('log')
ax3.set_yscale('log')
ax3.set_xlim(1e-1, t_final)
ax3.set_ylim(1e-5, 1e5)
ax3.legend(loc='center left', bbox_to_anchor=(1, 0.5))
plt.tight_layout()
while t < t_final:
for sp_name, sp_var in species_vars.items():
eqs_Y[sp_name].solve(var=sp_var, dt=dt)
net_ins = []
T9_array = T9.value
for i in range(nx):
net_in = NetIn()
net_in.temperature = max(1e6, T9_array[i] * 1e9) # Convert T9 back to Kelvin
net_in.density = rho_const
net_in.tMax = dt
local_Y = [max(0.0, species_vars[sp.name()].value[i]) for sp in species_list]
net_in.composition = fourdst.composition.Composition(
[sp.name() for sp in species_list], local_Y
)
net_ins.append(net_in)
results = grid_solver.evaluate(solver_ctx, net_ins)
for i in range(nx):
for sp in species_list:
sp_name = sp.name()
if results[i].composition.contains(sp):
species_vars[sp_name].value[i] = results[i].composition.getMolarAbundance(sp)
time_history.append(max(t, 1e-5))
for sp_name in plot_species:
if sp_name in species_vars:
tot_moles = np.sum(species_vars[sp_name].value * rho_const * dx)
history_vars[sp_name].append(tot_moles)
line_T.set_ydata(T9_array)
for sp_name in plot_species:
if sp_name in species_vars:
lines_Y[sp_name].set_ydata(species_vars[sp_name].value)
lines_time[sp_name].set_data(time_history, history_vars[sp_name])
ax1.set_ylim(min(T9_array)*0.95, max(T9_array)*1.05)
valid_mins = []
valid_maxs = []
for sp_name in plot_species:
if len(history_vars[sp_name]) > 0:
arr = np.array(history_vars[sp_name])
pos_arr = arr[arr > 0]
if len(pos_arr) > 0:
valid_mins.append(np.min(pos_arr))
valid_maxs.append(np.max(arr))
if valid_mins and valid_maxs:
ax3.set_ylim(min(valid_mins) * 0.5, max(valid_maxs) * 2.0)
if t > 1e-1:
ax3.set_xlim(1e-1, max(t * 1.5, 1.0))
fig.canvas.draw()
fig.canvas.flush_events()
t += dt
print(f"Time: {t:.2e}s | Center Temp: {T9.value[center_idx]*1e9:.2e} K | dt: {dt:.2e}s")
dt = min(dt * 2, 5e12)
plt.ioff()
plt.show()
if __name__ == "__main__":
main()

View File

@@ -18,7 +18,7 @@
# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
#
# *********************************************************************** #
project('GridFire', ['c', 'cpp'], version: 'v0.7.6rc3.4', default_options: ['cpp_std=c++23'], meson_version: '>=1.5.0')
project('GridFire', ['c', 'cpp'], version: 'v0.7.6rc4.0', default_options: ['cpp_std=c++23'], meson_version: '>=1.5.0')
gridfire_args = []

View File

@@ -3,13 +3,25 @@
#include "fourdst/config/config.h"
namespace gridfire::config {
struct CVODESolverConfig {
struct BoundaryFluxConfig {
double relativeThreshold = 3e-8;
double absoluteThreshold = 1e-24;
};
struct TriggerConfig {
double offDiagonalThreshold = 1e10;
double timestepCollapseRatio = 0.5;
size_t maxConvergenceFailures = 2;
BoundaryFluxConfig boundaryFlux;
};
struct PointSolverConfig {
double absTol = 1.0e-8;
double relTol = 1.0e-5;
TriggerConfig trigger;
};
struct SolverConfig {
CVODESolverConfig cvode;
PointSolverConfig pointSolver;
};
struct AdaptiveEngineViewConfig {

View File

@@ -8,6 +8,8 @@
#include "gridfire/engine/types/reporting.h"
#include "gridfire/engine/types/jacobian.h"
#include "gridfire/exceptions/error_engine.h"
#include "gridfire/engine/scratchpads/blob.h"
#include "fourdst/composition/composition_abstract.h"
@@ -183,6 +185,7 @@ namespace gridfire::engine {
/**
* @brief Generate the Jacobian matrix for the current state.
*
* @param ctx The scratchpad context for the current state.
* @param comp Composition object containing current abundances.
* @param T9 Temperature in units of 10^9 K.
* @param rho Density in g/cm^3.
@@ -200,6 +203,7 @@ namespace gridfire::engine {
/**
* @brief Generate the Jacobian matrix for the current state using a subset of active species.
*
* @param ctx The scratchpad context for the current state.
* @param comp Composition object containing current abundances.
* @param T9 Temperature in units of 10^9 K.
* @param rho Density in g/cm^3.
@@ -221,6 +225,7 @@ namespace gridfire::engine {
/**
* @brief Generate the Jacobian matrix for the current state with a specified sparsity pattern.
*
* @param ctx The scratchpad context for the current state.
* @param comp Composition object containing current abundances.
* @param T9 Temperature in units of 10^9 K.
* @param rho Density in g/cm^3.
@@ -245,6 +250,7 @@ namespace gridfire::engine {
/**
* @brief Calculate the molar reaction flow for a given reaction.
*
* @param ctx The scratchpad context for the current state.
* @param reaction The reaction for which to calculate the flow.
* @param comp Composition object containing current abundances.
* @param T9 Temperature in units of 10^9 K.
@@ -289,6 +295,39 @@ namespace gridfire::engine {
scratch::StateBlob& ctx
) const = 0;
/**
* @brief Get the set of inactive reactions in the network.
*
* @return ReactionSet containing all inactive reactions.
*
* By default, this method returns an empty set. Derived classes can override
* this method to provide the actual set of inactive reactions based on their
* internal logic (e.g., reaction flow culling, QSE partitioning).
*/
[[nodiscard]] virtual reaction::ReactionSet getInactiveNetworkReactions(
scratch::StateBlob &ctx
) const {
return reaction::ReactionSet{};
}
[[nodiscard]] virtual double getInactiveReactionMolarReactionFlow(
scratch::StateBlob& ctx,
const reaction::Reaction &reaction,
const fourdst::composition::CompositionAbstract &comp,
const double T9,
const double rho
) const {
std::string warning_msg = std::format(
"[GridFire Warning ({}, {}, {})]: Engine of type '{}' does not implement getInactiveReactionMolarReactionFlow. Returning 0.0 flow for reaction '{}'.",
__FILE__,
__LINE__,
__FUNCTION__,
typeid(*this).name(),
reaction.id()
);
std::println(stderr, "{}", warning_msg);
return 0.0;
}
/**
* @brief Compute timescales for all species in the network.
@@ -311,6 +350,7 @@ namespace gridfire::engine {
/**
* @brief Compute destruction timescales for all species in the network.
*
* @param ctx The scratchpad context for the current state.
* @param comp Composition object containing current abundances.
* @param T9 Temperature in units of 10^9 K.
* @param rho Density in g/cm^3.
@@ -329,6 +369,7 @@ namespace gridfire::engine {
/**
* @brief Update the thread local scratch pad state of a network.
*
* @param ctx The scratchpad context for the current state.
* @param netIn A struct containing the current network input, such as
* temperature, density, and composition.
*
@@ -354,6 +395,8 @@ namespace gridfire::engine {
/**
* @brief Get the current electron screening model.
*
* @param ctx The scratchpad context for the current state.
*
* @return The currently active screening model type.
*
* @par Usage Example:
@@ -368,6 +411,7 @@ namespace gridfire::engine {
/**
* @brief Get the index of a species in the network.
*
* @param ctx The scratchpad context for the current state.
* @param species The species to look up.
*
* This method allows querying the index of a specific species in the
@@ -382,6 +426,7 @@ namespace gridfire::engine {
/**
* @brief Prime the engine with initial conditions.
*
* @param ctx The scratchpad context for the current state.
* @param netIn The input conditions for the network.
* @return PrimingReport containing information about the priming process.
*
@@ -403,6 +448,7 @@ namespace gridfire::engine {
* from each sub engine.
* @note It is up to each engine to decide how to handle filling in the return composition.
* @note These methods return an unfinalized composition which must then be finalized by the caller.
* @param ctx The scratchpad context for the current state.
* @param comp Input composition to "normalize".
* @param T9 Temperature in units of 10^9 K.
* @param rho Density in g/cm^3.
@@ -434,5 +480,7 @@ namespace gridfire::engine {
scratch::StateBlob& ctx
) const = 0;
[[nodiscard]] virtual std::unique_ptr<scratch::StateBlob> constructStateBlob(const scratch::StateBlob *blob) const = 0;
};
}

View File

@@ -137,6 +137,18 @@ namespace gridfire::engine {
*/
explicit GraphEngine(const reaction::ReactionSet &reactions);
void addReaction(
const reaction::Reaction& reaction
);
void addReaction(
const std::string& reaction_id
);
std::unique_ptr<scratch::StateBlob> constructStateBlob(
const scratch::StateBlob *blob = nullptr
) const override;
/**
* @brief Calculates the right-hand side (dY/dt) and energy generation rate.
*
@@ -204,6 +216,7 @@ namespace gridfire::engine {
double rho
) const override;
/**
* @brief Calculates the derivatives of the energy generation rate with respect to temperature and density for a subset of reactions
*

View File

@@ -70,6 +70,7 @@ struct AdaptiveEngineViewScratchPad final : AbstractScratchPad {
/// @brief Flag indicating whether the scratchpad has been initialized.
bool has_initialized = false;
/// @brief Vector of species currently active in the adaptive network.
std::vector<fourdst::atomic::Species> active_species;

View File

@@ -103,6 +103,9 @@ struct MultiscalePartitioningEngineViewScratchPad final : AbstractScratchPad {
/// @brief Flag indicating whether the scratchpad has been initialized.
bool has_initialized = false;
/// @brief User-configurable parameter controlling the flux coupling threshold.
double flux_coupling_threshold = 5.0;
/// @brief Vector of QSE groups representing equilibrium clusters.
std::vector<QSEGroup> qse_groups;

View File

@@ -10,7 +10,6 @@
#include "fourdst/config/config.h"
#include "fourdst/logging/logging.h"
#include "gridfire/engine/procedures/construction.h"
#include "gridfire/engine/scratchpads/blob.h"
#include "quill/Logger.h"
@@ -234,6 +233,26 @@ namespace gridfire::engine {
scratch::StateBlob& ctx
) const override;
/**
* @brief Gets the set of inactive logical reactions in the network.
*
* @return ReactionSet containing all inactive reactions.
*
* This method returns the set of reactions that have been culled from the active
* network based on the adaptation criteria.
*/
[[nodiscard]] reaction::ReactionSet getInactiveNetworkReactions(
scratch::StateBlob &ctx
) const override;
[[nodiscard]] double getInactiveReactionMolarReactionFlow(
scratch::StateBlob& ctx,
const reaction::Reaction &reaction,
const fourdst::composition::CompositionAbstract &comp,
double T9,
double rho
) const override;
/**
* @brief Computes timescales for all active species in the network.
*
@@ -319,6 +338,7 @@ namespace gridfire::engine {
/**
* @brief Primes the engine with the given network input.
*
* @param ctx The scratchpad context for storing thread-local data.
* @param netIn The current network input, containing temperature, density, and composition.
* @return A PrimingReport indicating the result of the priming operation.
*
@@ -367,6 +387,8 @@ namespace gridfire::engine {
[[nodiscard]] std::optional<StepDerivatives<double>>getMostRecentRHSCalculation(
scratch::StateBlob &ctx
) const override;
[[nodiscard]] std::unique_ptr<scratch::StateBlob> constructStateBlob(const scratch::StateBlob *blob) const override;
private:
using LogManager = fourdst::logging::LogManager;
@@ -399,7 +421,7 @@ namespace gridfire::engine {
* @param netIn The current network input, containing temperature, density, and composition.
* @return A pair with the first element a vector of ReactionFlow structs, each containing a pointer to a
* reaction and its calculated flow rate and the second being a composition object where species which were not
* present in netIn but are present in the definition of the base engine are registered but have 0 mass fraction
* present in netIn but are present in the definition of the base engine are registered but have 0 mass fraction.
*
* @par Algorithm:
* 1. Iterates through all species in the base engine's network.

View File

@@ -255,6 +255,9 @@ namespace gridfire::engine {
[[nodiscard]] std::optional<StepDerivatives<double>>getMostRecentRHSCalculation(
scratch::StateBlob &ctx
) const override;
[[nodiscard]] std::unique_ptr<scratch::StateBlob> constructStateBlob(const scratch::StateBlob *blob) const override;
protected:
bool m_isStale = true;
GraphEngine& m_baseEngine;
@@ -343,7 +346,6 @@ namespace gridfire::engine {
scratch::StateBlob& ctx,
const std::vector<std::string>& peNames
) const;
};
class FileDefinedEngineView final: public DefinedEngineView {

View File

@@ -611,6 +611,8 @@ namespace gridfire::engine {
[[nodiscard]] std::optional<StepDerivatives<double>>getMostRecentRHSCalculation(
scratch::StateBlob &
) const override;
[[nodiscard]] std::unique_ptr<scratch::StateBlob> constructStateBlob(const scratch::StateBlob *blob) const override;
public:
/**
* @brief Struct representing a QSE group.
@@ -990,9 +992,6 @@ namespace gridfire::engine {
const std::vector<QSEGroup> &groups,
const std::vector<reaction::ReactionSet> &groupReactions
);
public:
};
}

View File

@@ -31,6 +31,7 @@ namespace gridfire::engine {
/**
* @brief Constructs the view by looking up the priming species by symbol.
*
* @param ctx State blob containing the engine context.
* @param primingSymbol Symbol string of the species to prime.
* @param baseEngine Reference to the base DynamicEngine to wrap.
* @pre primingSymbol must correspond to a valid species in atomic::species registry.
@@ -46,6 +47,7 @@ namespace gridfire::engine {
/**
* @brief Constructs the view using an existing Species object.
*
* @param ctx State blob containing the engine context.
* @param primingSpecies The species object to prime.
* @param baseEngine Reference to the base DynamicEngine to wrap.
* @pre primingSpecies must be valid and present in the network of baseEngine.
@@ -66,6 +68,7 @@ namespace gridfire::engine {
/**
* @brief Constructs the set of reaction names that involve the priming species.
*
* @param ctx State blob containing the engine context.
* @param primingSpecies Species for which to collect priming reactions.
* @param baseEngine Base engine containing the full network of reactions.
* @pre baseEngine.getNetworkReactions() returns a valid iterable set of reactions.

View File

@@ -11,4 +11,4 @@
#include "gridfire/trigger/trigger.h"
#include "gridfire/utils/utils.h"
#include "types/types.h"
#include "gridfire/types/types.h"

View File

@@ -1,3 +1,4 @@
#pragma once
#include "gridfire/io/generative/python.h"
#include "gridfire/io/generative/mesa.h"

View File

@@ -0,0 +1,17 @@
#pragma once
#include "fourdst/atomic/atomicSpecies.h"
#include "gridfire/reaction/reaction.h"
#include "gridfire/engine/engine_abstract.h"
#include <format>
namespace gridfire::io::generative {
std::string get_mesa_iso_name(const fourdst::atomic::Species& species);
bool is_proton(const fourdst::atomic::Species& species);
bool is_alpha(const fourdst::atomic::Species& species);
bool is_neutron(const fourdst::atomic::Species& species);
std::string get_mesa_reaction_name(const reaction::Reaction& reaction);
std::string export_engine_to_mesa_net(const engine::DynamicEngine& engine, engine::scratch::StateBlob& ctx, bool skip_weak);
}

View File

@@ -58,6 +58,8 @@ namespace gridfire::solver {
const size_t currentNonlinearIterations; ///< Total number of non-linear iterations
const std::map<fourdst::atomic::Species, std::unordered_map<std::string, double>>& reactionContributionMap; ///< Map of reaction contributions for the current step
engine::scratch::StateBlob& state_ctx; ///< Reference to the engine scratch state blob
double current_total_energy = 0.0; ///< Current energy generation rate [erg/g/s]
double current_neutrino_energy_loss_rate = 0.0; ///< Current neutrino energy loss rate [erg/g/s]
PointSolverTimestepContext(
double t,
@@ -76,6 +78,8 @@ namespace gridfire::solver {
);
[[nodiscard]] std::vector<std::tuple<std::string, std::string>> describe() const override;
[[nodiscard]] fourdst::composition::Composition getPhysicalComposition() const;
};
using TimestepCallback = std::function<void(const PointSolverTimestepContext& context)>; ///< Type alias for a timestep callback function.
@@ -169,6 +173,13 @@ namespace gridfire::solver {
const engine::DynamicEngine& engine
);
PointSolver(
const engine::DynamicEngine& engine,
const config::GridFireConfig& config
);
config::GridFireConfig getConfig() const { return *m_config; }
/**
* @brief Integrate from t=0 to netIn.tMax and return final composition and energy.
*
@@ -264,6 +275,17 @@ namespace gridfire::solver {
*/
static int cvode_jac_wrapper(sunrealtype t, N_Vector y, N_Vector ydot, SUNMatrix J, void *user_data, N_Vector tmp1, N_Vector tmp2, N_Vector tmp3);
/**
* @brief CVODE error handler that logs errors and warnings from SUNDIALS using the solver's logger.
* @param line Source line number where the error was reported.
* @param func Name of the SUNDIALS function that raised the error.
* @param file Source file in which the error was raised.
* @param msg Error message text supplied by SUNDIALS.
* @param err_code The SUNDIALS error code.
* @param err_user_data User data pointer registered with the handler.
* @param sunctx The SUNDIALS context associated with the error.
*/
static void cvode_error_handler(int line, const char *func, const char *file, const char *msg, SUNErrCode err_code, void *err_user_data, SUNContext sunctx);
/**
* @brief Compute RHS into ydot at time t from the engine and current state y.
*

View File

@@ -4,6 +4,7 @@
#include "gridfire/trigger/trigger_result.h"
#include "gridfire/solver/strategies/PointSolver.h"
#include "fourdst/logging/logging.h"
#include "gridfire/config/config.h"
#include <string>
#include <deque>
@@ -316,6 +317,46 @@ namespace gridfire::trigger::solver::CVODE {
bool rel_failure(const gridfire::solver::PointSolverTimestepContext& ctx) const;
};
class BoundaryFluxTrigger final : public Trigger<gridfire::solver::PointSolverTimestepContext> {
public:
explicit BoundaryFluxTrigger(double relativeThreshold, double absoluteThreshold);
bool check(const gridfire::solver::PointSolverTimestepContext &ctx) const override;
void update(const gridfire::solver::PointSolverTimestepContext &ctx) override;
void step(const gridfire::solver::PointSolverTimestepContext &ctx) override;
void reset() override;
std::string name() const override;
TriggerResult why(const gridfire::solver::PointSolverTimestepContext &ctx) const override;
std::string describe() const override;
size_t numTriggers() const override;
size_t numMisses() const override;
private:
enum class ReactionSetType : uint8_t {
ACTIVE,
INACTIVE
};
static double get_reaction_set_flow(
const reaction::ReactionSet& reactions,
const gridfire::solver::PointSolverTimestepContext& ctx,
const fourdst::composition::Composition& comp,
double T9,
double rho,
ReactionSetType type
);
private:
quill::Logger* m_logger = fourdst::logging::LogManager::getInstance().getLogger("log");
mutable size_t m_hits = 0;
mutable size_t m_misses = 0;
mutable size_t m_updates = 0;
mutable size_t m_resets = 0;
double m_relativeThreshold;
double m_absoluteThreshold;
};
/**
* @brief Compose a trigger suitable for deciding engine re-partitioning during CVODE solves.
*
@@ -329,18 +370,9 @@ namespace gridfire::trigger::solver::CVODE {
* See engine_partitioning_trigger.cpp for construction details using OrTrigger and
* EveryNthTrigger from trigger_logical.h.
*
* @param simulationTimeInterval Interval used by SimulationTimeTrigger (> 0).
* @param offDiagonalThreshold Off-diagonal Jacobian magnitude threshold (>= 0).
* @param timestepCollapseRatio Threshold for timestep deviation (>= 0, and <= 1 when relative).
* @param maxConvergenceFailures Window size for timestep averaging (>= 1 recommended).
* @return A unique_ptr to a composed Trigger<TimestepContext> implementing the policy above.
*
* @note The exact policy is subject to change; this function centralizes that decision.
*/
std::unique_ptr<Trigger<gridfire::solver::PointSolverTimestepContext>> makeEnginePartitioningTrigger(
double simulationTimeInterval,
double offDiagonalThreshold,
double timestepCollapseRatio,
size_t maxConvergenceFailures
);
std::unique_ptr<Trigger<gridfire::solver::PointSolverTimestepContext>> makeEnginePartitioningTrigger(const config::TriggerConfig& cfg);
}

View File

@@ -1,14 +1,24 @@
#pragma once
#include <format>
#include <string>
namespace gridfire {
struct version {
static constexpr int major = #STRINGIFY(GF_VERSION_MAJOR);
static constexpr int minor = #STRINGIFY(GF_VERSION_MINOR);
static constexpr int patch = #STRINGIFY(GF_VERSION_PATCH);
static constexpr int major = @GF_VERSION_MAJOR@;
static constexpr int minor = @GF_VERSION_MINOR@;
static constexpr int patch = @GF_VERSION_PATCH@;
static constexpr const char* tag = "@GF_VERSION_TAG@";
static constexpr const char* tag = #STRINGIFY(GF_VERSION_TAG);
static std::string toString() {
std::string versionStr = std::to_string(major) + "." +
std::to_string(minor) + "." +
std::to_string(patch);
if (std::string(tag) != "") {
versionStr += "-" + std::string(tag);
}
return versionStr;
}
};
}
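The `toString` logic above can be mirrored in a few lines. This Python sketch (a hypothetical helper, not part of GridFire) shows the intended formatting rule: `major.minor.patch`, with `-tag` appended only when the tag is non-empty.

```python
def version_string(major, minor, patch, tag=""):
    # "major.minor.patch", plus "-tag" only when a tag is present,
    # mirroring gridfire::version::toString().
    s = f"{major}.{minor}.{patch}"
    if tag:
        s += f"-{tag}"
    return s

print(version_string(0, 7, 6, "rc4.0"))  # → 0.7.6-rc4.0
```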

View File

@@ -21,7 +21,7 @@ namespace gridfire::omp {
if (s_par_mode_initialized) {
return; // Only initialize once
}
quill::Logger* logger = fourdst::logging::LogManager::getInstance().getLogger("log");
[[maybe_unused]] quill::Logger* logger = fourdst::logging::LogManager::getInstance().getLogger("log");
LOG_INFO(logger, "Initializing OpenMP parallel mode with {} threads", static_cast<unsigned long>(omp_get_max_threads()));
CppAD::thread_alloc::parallel_setup(
static_cast<size_t>(omp_get_max_threads()), // Max threads
@@ -41,7 +41,7 @@ namespace gridfire::omp {
namespace gridfire::omp {
inline void log_not_in_parallel_mode() {
quill::Logger* logger = fourdst::logging::LogManager::getInstance().getLogger("log");
[[maybe_unused]] quill::Logger* logger = fourdst::logging::LogManager::getInstance().getLogger("log");
LOG_INFO(logger, "Note (not an error): OpenMP parallel mode is not enabled because GF_USE_OPENMP is not defined. Pass -DGF_USE_OPENMP when compiling to enable OpenMP support; with meson, use -Dopenmp_support=true");
}
}

View File

@@ -0,0 +1,41 @@
python_exe = import('python').find_installation('python3')
version_parser = '''
import sys, re
ver = sys.argv[1]
if ver.startswith("v"): ver = ver[1:]
m = re.match(r"^(\d+)\.(\d+)\.(\d+)(.*)$", ver)
if m:
print(f"{m.group(1)};{m.group(2)};{m.group(3)};{m.group(4)}")
else:
print("0;0;0;unknown")
'''
ver_res = run_command(python_exe, '-c', version_parser, meson.project_version(), check: true)
ver_parts = ver_res.stdout().strip().split(';')
conf_data = configuration_data()
conf_data.set('GF_VERSION_MAJOR', ver_parts[0])
conf_data.set('GF_VERSION_MINOR', ver_parts[1])
conf_data.set('GF_VERSION_PATCH', ver_parts[2])
conf_data.set('GF_VERSION_TAG', ver_parts[3])
message('Configuring include/utils/config.h with version ' + meson.project_version())
message(' Major: ' + ver_parts[0])
message(' Minor: ' + ver_parts[1])
message(' Patch: ' + ver_parts[2])
message(' Tag: ' + ver_parts[3])
do_install_version_file = true
if get_option('build_python')
message('Not installing version file since we are building the Python extension. The version information will be included in the Python module instead.')
do_install_version_file = false
endif
configure_file(
input : 'config.h.in',
output : 'config.h',
configuration : conf_data,
install : do_install_version_file,
install_dir : get_option('includedir') / 'gridfire/utils'
)
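The embedded version parser can be exercised standalone. This sketch repeats the same regex so the split of a tagged version such as `v0.7.6rc4.0` into major, minor, patch, and tag is easy to verify; the function name is hypothetical.

```python
import re

def parse_gf_version(ver):
    # Same regex as the embedded Meson helper: strip a leading 'v',
    # then split into major.minor.patch plus a free-form trailing tag.
    if ver.startswith("v"):
        ver = ver[1:]
    m = re.match(r"^(\d+)\.(\d+)\.(\d+)(.*)$", ver)
    if m:
        return m.group(1), m.group(2), m.group(3), m.group(4)
    return "0", "0", "0", "unknown"
```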

View File

@@ -32,7 +32,8 @@
#include "cppad/cppad.hpp"
#include "cppad/utility/sparse_rc.hpp"
#include "cppad/utility/sparse_rcv.hpp"
#include "fourdst/composition/exceptions/exceptions_composition.h"
#include "gridfire/reaction/reaclib.h"
namespace {
@@ -132,6 +133,36 @@ namespace gridfire::engine {
syncInternalMaps();
}
void GraphEngine::addReaction(
const reaction::Reaction& reaction
) {
m_reactions.add_reaction(reaction);
syncInternalMaps();
}
void GraphEngine::addReaction(
const std::string& reaction_id
) {
const auto& allReaclibReactions = reaclib::get_all_reaclib_reactions();
const auto& reaction = allReaclibReactions.get(reaction_id);
if (reaction.has_value()) {
m_reactions.add_reaction(reaction.value()->clone());
} else {
throw exceptions::BadCollectionError(std::format("Unable to locate reaction with ID {} in reaclib set", reaction_id));
}
}
std::unique_ptr<scratch::StateBlob> GraphEngine::constructStateBlob(const scratch::StateBlob *blob) const {
if (blob) {
throw exceptions::ScratchPadError("GraphEngine does not support accepting an external StateBlob. The state blob for GraphEngine must be constructed internally to ensure it contains the correct scratchpad states.");
}
auto i_blob = std::make_unique<scratch::StateBlob>();
i_blob->enroll<engine::scratch::GraphEngineScratchPad>();
auto* state = scratch::get_state<scratch::GraphEngineScratchPad, false>(*i_blob);
state->initialize(*this);
return i_blob;
}
std::expected<StepDerivatives<double>, EngineStatus> GraphEngine::calculateRHSAndEnergy(
scratch::StateBlob& ctx,
const fourdst::composition::CompositionAbstract &comp,
@@ -761,7 +792,14 @@ namespace gridfire::engine {
for (const auto& species : m_networkSpecies ) {
result.registerSpecies(species);
if (comp.contains(species)) {
result.setMolarAbundance(species, comp.getMolarAbundance(species));
double Y = comp.getMolarAbundance(species);
if (Y < 0.0 && std::abs(Y) <= 1e-16) {
result.setMolarAbundance(species, 0.0);
} else if (Y < 0.0 && std::abs(Y) >= 1e-16) {
throw fourdst::composition::exceptions::InvalidCompositionError(std::format("Molar abundance for species {} is negative (Y = {}). GraphEngine does not support non-physical negative abundances, even if they are very small in magnitude (clamp is 1e-16). Check input composition for validity.", species.name(), Y));
} else {
result.setMolarAbundance(species, Y);
}
}
}
return result;
@@ -997,7 +1035,7 @@ namespace gridfire::engine {
for (const auto& species: m_networkSpecies) {
double Yi = 0.0; // Small floor to avoid issues with zero abundances
if (comp.contains(species)) {
Yi = comp.getMolarAbundance(species);
Yi = std::max(comp.getMolarAbundance(species), 1e-30);
}
x[i] = Yi;
i++;
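The two abundance guards introduced above — a zero-clamp for tiny negative values (|Y| ≤ 1e-16) with rejection of larger negatives, and a small positive floor (1e-30) when building the AD input vector — can be sketched in Python. The helper names here are hypothetical; the thresholds are taken from the code above.

```python
def clamp_abundance(y, zero_tol=1e-16):
    # Tiny negative abundances (numerical noise) are clamped to zero;
    # anything more negative is treated as a non-physical input.
    if y < 0.0:
        if abs(y) <= zero_tol:
            return 0.0
        raise ValueError(f"non-physical negative abundance {y}")
    return y

def floor_abundance(y, floor=1e-30):
    # Small positive floor applied when assembling the AD input vector,
    # avoiding exact zeros in rate evaluations.
    return max(y, floor)
```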

View File

@@ -166,6 +166,35 @@ namespace gridfire::engine {
return scratch::get_state<scratch::AdaptiveEngineViewScratchPad, true>(ctx) -> active_reactions;
}
reaction::ReactionSet AdaptiveEngineView::getInactiveNetworkReactions(scratch::StateBlob &ctx) const {
const reaction::ReactionSet& baseEngineReactions = m_baseEngine.getNetworkReactions(ctx);
const reaction::ReactionSet baseEngineInactiveReactions = m_baseEngine.getInactiveNetworkReactions(ctx);
reaction::ReactionSet inactiveReactions = baseEngineInactiveReactions;
const auto* state = scratch::get_state<scratch::AdaptiveEngineViewScratchPad, true>(ctx);
const reaction::ReactionSet& activeReactions = state->active_reactions;
for (const auto& active_reaction : baseEngineReactions) {
if (!inactiveReactions.contains(*active_reaction) && !activeReactions.contains(*active_reaction)) {
inactiveReactions.add_reaction(*active_reaction);
}
}
return inactiveReactions;
}
double AdaptiveEngineView::getInactiveReactionMolarReactionFlow(
scratch::StateBlob &ctx,
const reaction::Reaction &reaction,
const fourdst::composition::CompositionAbstract &comp,
const double T9,
const double rho
) const {
return m_baseEngine.calculateMolarReactionFlow(ctx, reaction, comp, T9, rho);
}
std::expected<std::unordered_map<Species, double>, EngineStatus> AdaptiveEngineView::getSpeciesTimescales(
scratch::StateBlob& ctx,
const fourdst::composition::CompositionAbstract &comp,
@@ -268,6 +297,19 @@ namespace gridfire::engine {
return m_baseEngine.getMostRecentRHSCalculation(ctx);
}
std::unique_ptr<scratch::StateBlob> AdaptiveEngineView::constructStateBlob(const scratch::StateBlob *blob) const {
std::unique_ptr<scratch::StateBlob> i_blob;
if (blob) {
i_blob = blob->clone_structure();
} else {
i_blob = std::make_unique<scratch::StateBlob>();
}
i_blob->enroll<scratch::AdaptiveEngineViewScratchPad>();
auto* state = scratch::get_state<scratch::AdaptiveEngineViewScratchPad, false>(*i_blob);
state->initialize(*this);
return i_blob;
}
size_t AdaptiveEngineView::getSpeciesIndex(
scratch::StateBlob& ctx,
const Species &species

View File

@@ -267,6 +267,10 @@ namespace gridfire::engine {
return m_baseEngine.getMostRecentRHSCalculation(ctx);
}
std::unique_ptr<scratch::StateBlob> DefinedEngineView::constructStateBlob(const scratch::StateBlob *blob) const {
throw exceptions::ScratchPadError("DefinedEngineView does not support StateBlob construction. This will be implemented in a future version.");
}
std::vector<size_t> DefinedEngineView::constructSpeciesIndexMap(
scratch::StateBlob& ctx
) const {

View File

@@ -30,6 +30,7 @@
#include "sunlinsol/sunlinsol_dense.h"
#include "xxhash64.h"
#include "fourdst/composition/exceptions/exceptions_composition.h"
#include "fourdst/composition/utils/composition_hash.h"
namespace {
@@ -1123,6 +1124,19 @@ namespace gridfire::engine {
return m_baseEngine.getMostRecentRHSCalculation(ctx);
}
std::unique_ptr<scratch::StateBlob> MultiscalePartitioningEngineView::constructStateBlob(const scratch::StateBlob *blob) const {
std::unique_ptr<scratch::StateBlob> i_blob;
if (blob) {
i_blob = blob->clone_structure();
} else {
i_blob = std::make_unique<scratch::StateBlob>();
}
i_blob->enroll<scratch::MultiscalePartitioningEngineViewScratchPad>();
auto* state = scratch::get_state<scratch::MultiscalePartitioningEngineViewScratchPad, false>(*i_blob);
state->initialize();
return i_blob;
}
size_t MultiscalePartitioningEngineView::getSpeciesIndex(
scratch::StateBlob& ctx,
const Species &species
@@ -1323,7 +1337,8 @@ namespace gridfire::engine {
const double rho,
const QSEGroup &group
) const {
constexpr double FLUX_RATIO_THRESHOLD = 5;
auto* state = scratch::get_state<scratch::MultiscalePartitioningEngineViewScratchPad, true>(ctx);
double FLUX_RATIO_THRESHOLD = state->flux_coupling_threshold;
const std::unordered_set<Species> algebraic_group_members(
group.algebraic_species.begin(),
@@ -1470,8 +1485,8 @@ namespace gridfire::engine {
const double diff_total = std::abs(total_prod - total_dest);
bool total_balanced = (mean_total > 0) && ((diff_total / mean_total) < 0.05);
// Check 2: Charged-Particle Balance (The "Neutron-Exclusion" Check)
// Only valid if there IS charged flow (avoid 0/0 success)
// Check 2: Charged-Particle Balance
// Only valid if there IS charged flow
const double mean_charged = (charged_prod + charged_dest) / 2.0;
const double diff_charged = std::abs(charged_prod - charged_dest);
bool charged_balanced = (mean_charged > 0) && ((diff_charged / mean_charged) < 0.05);
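The two balance checks above share one criterion: production and destruction flows count as balanced when their relative difference, measured against the mean, stays under 5% and the mean is nonzero (avoiding 0/0). A minimal standalone sketch of that criterion, with an illustrative helper name not present in the source:

```cpp
#include <cassert>
#include <cmath>

// Sketch of the balance criterion used in the QSE checks above: flows are
// "balanced" when the relative difference against the mean is under the
// tolerance, and the mean is strictly positive (so 0/0 never passes).
bool flows_balanced(const double prod, const double dest,
                    const double tol = 0.05) {
    const double mean = (prod + dest) / 2.0;
    const double diff = std::abs(prod - dest);
    return (mean > 0) && ((diff / mean) < tol);
}
```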
@@ -1549,8 +1564,10 @@ namespace gridfire::engine {
m_logger->flush_log();
throw exceptions::EngineError("Non-finite abundance computed for species " + std::string(sp.name()) + " in QSE group solve.");
}
if (y < 0.0 && std::abs(y) < 1e-20) {
if (y < 0.0 && std::abs(y) < 1e-16) {
abundances.push_back(0.0);
} else if (y < 0 && std::abs(y) >= 1e-16) {
throw fourdst::composition::exceptions::InvalidCompositionError(std::format("Computed negative and non-trivial abundance {} for species {} in QSE group solve at T9 = {}, rho = {}. This likely indicates a failure of the QSE solver to converge to a physical solution.", y, sp.name(), T9, rho));
} else {
abundances.push_back(y);
}
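The clamping policy above treats tiny negative abundances (magnitude below 1e-16) as numerical noise and zeroes them, while larger negatives are rejected as unphysical. A self-contained sketch of that policy; the helper name is illustrative:

```cpp
#include <cassert>
#include <cmath>
#include <stdexcept>

// Sketch of the abundance sanitization above: negatives below the noise
// floor in magnitude are clamped to zero; larger negatives are rejected
// as a sign the QSE solve did not converge to a physical solution.
double sanitize_abundance(const double y) {
    constexpr double NOISE_FLOOR = 1e-16;
    if (y < 0.0 && std::abs(y) < NOISE_FLOOR) {
        return 0.0; // clamp numerical noise
    }
    if (y < 0.0) {
        throw std::invalid_argument("negative, non-trivial abundance");
    }
    return y;
}
```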

View File

@@ -0,0 +1,155 @@
#include "gridfire/io/generative/mesa.h"
#include "gridfire/engine/engine_abstract.h"
#include "gridfire/reaction/reaction.h"
#include "fourdst/atomic/atomicSpecies.h"
#include "gridfire/utils/config.h"
#include <sstream>
#include <string>
#include <vector>
#include <algorithm>
#include <chrono>
#include <cctype>
namespace gridfire::io::generative {
std::string get_mesa_iso_name(const fourdst::atomic::Species& species) {
auto name = std::string(species.name());
std::ranges::transform(name, name.begin(), ::tolower);
name.erase(std::ranges::remove(name, '-').begin(), name.end());
if (name == "p") return "h1";
if (name == "n" || name == "n1") return "neut";
if (name == "d") return "h2";
if (name == "t") return "h3";
if (name == "a") return "he4";
return name;
}
bool is_proton(const fourdst::atomic::Species& s) { return get_mesa_iso_name(s) == "h1"; }
bool is_alpha(const fourdst::atomic::Species& s) { return get_mesa_iso_name(s) == "he4"; }
bool is_neutron(const fourdst::atomic::Species& s) { return get_mesa_iso_name(s) == "neut"; }
std::string get_mesa_reaction_name(const reaction::Reaction& reaction) {
std::vector<fourdst::atomic::Species> react_sorted = reaction.reactants();
std::vector<fourdst::atomic::Species> prod_sorted = reaction.products();
auto sort_species = [](std::vector<fourdst::atomic::Species>& list) {
std::ranges::sort(list, [](const auto& a, const auto& b) {
if (a.z() != b.z()) return a.z() < b.z();
return a.a() < b.a();
});
};
sort_species(react_sorted);
sort_species(prod_sorted);
if (react_sorted.size() == 1 && prod_sorted.size() == 1) {
if (reaction.type() == reaction::ReactionType::WEAK ||
reaction.type() == reaction::ReactionType::REACLIB_WEAK ||
reaction.type() == reaction::ReactionType::LOGICAL_REACLIB_WEAK) {
return "r_" + get_mesa_iso_name(react_sorted[0]) + "_wk_" + get_mesa_iso_name(prod_sorted[0]);
}
}
if (react_sorted.size() == 2 && prod_sorted.size() == 1) {
std::string x, cap;
if (is_proton(react_sorted[0]) || is_proton(react_sorted[1])) {
cap = "pg";
x = is_proton(react_sorted[0]) ? get_mesa_iso_name(react_sorted[1]) : get_mesa_iso_name(react_sorted[0]);
}
else if (is_alpha(react_sorted[0]) || is_alpha(react_sorted[1])) {
cap = "ag";
x = is_alpha(react_sorted[0]) ? get_mesa_iso_name(react_sorted[1]) : get_mesa_iso_name(react_sorted[0]);
}
else if (is_neutron(react_sorted[0]) || is_neutron(react_sorted[1])) {
cap = "ng";
x = is_neutron(react_sorted[0]) ? get_mesa_iso_name(react_sorted[1]) : get_mesa_iso_name(react_sorted[0]);
}
if (!cap.empty()) return "r_" + x + "_" + cap + "_" + get_mesa_iso_name(prod_sorted[0]);
}
if (react_sorted.size() == 1 && prod_sorted.size() == 2) {
std::string x, em;
if (is_proton(prod_sorted[0]) || is_proton(prod_sorted[1])) {
em = "gp";
x = is_proton(prod_sorted[0]) ? get_mesa_iso_name(prod_sorted[1]) : get_mesa_iso_name(prod_sorted[0]);
}
else if (is_alpha(prod_sorted[0]) || is_alpha(prod_sorted[1])) {
em = "ga";
x = is_alpha(prod_sorted[0]) ? get_mesa_iso_name(prod_sorted[1]) : get_mesa_iso_name(prod_sorted[0]);
}
else if (is_neutron(prod_sorted[0]) || is_neutron(prod_sorted[1])) {
em = "gn";
x = is_neutron(prod_sorted[0]) ? get_mesa_iso_name(prod_sorted[1]) : get_mesa_iso_name(prod_sorted[0]);
}
if (!em.empty()) return "r_" + get_mesa_iso_name(react_sorted[0]) + "_" + em + "_" + x;
}
if (react_sorted.size() == 2 && prod_sorted.size() == 2) {
int r_p = -1, r_a = -1, r_n = -1;
int p_p = -1, p_a = -1, p_n = -1;
for(int i=0; i<2; ++i) {
if(is_proton(react_sorted[i])) r_p = i;
if(is_alpha(react_sorted[i])) r_a = i;
if(is_neutron(react_sorted[i])) r_n = i;
if(is_proton(prod_sorted[i])) p_p = i;
if(is_alpha(prod_sorted[i])) p_a = i;
if(is_neutron(prod_sorted[i])) p_n = i;
}
std::string x, y, exc;
if (r_a != -1 && p_p != -1) { exc = "ap"; x = get_mesa_iso_name(react_sorted[1-r_a]); y = get_mesa_iso_name(prod_sorted[1-p_p]); }
else if (r_p != -1 && p_a != -1) { exc = "pa"; x = get_mesa_iso_name(react_sorted[1-r_p]); y = get_mesa_iso_name(prod_sorted[1-p_a]); }
else if (r_n != -1 && p_p != -1) { exc = "np"; x = get_mesa_iso_name(react_sorted[1-r_n]); y = get_mesa_iso_name(prod_sorted[1-p_p]); }
else if (r_p != -1 && p_n != -1) { exc = "pn"; x = get_mesa_iso_name(react_sorted[1-r_p]); y = get_mesa_iso_name(prod_sorted[1-p_n]); }
else if (r_n != -1 && p_a != -1) { exc = "na"; x = get_mesa_iso_name(react_sorted[1-r_n]); y = get_mesa_iso_name(prod_sorted[1-p_a]); }
else if (r_a != -1 && p_n != -1) { exc = "an"; x = get_mesa_iso_name(react_sorted[1-r_a]); y = get_mesa_iso_name(prod_sorted[1-p_n]); }
if (!exc.empty()) return "r_" + x + "_" + exc + "_" + y;
}
std::string fallback = "r";
for (const auto& s : react_sorted) fallback += "_" + get_mesa_iso_name(s);
fallback += "_to";
for (const auto& s : prod_sorted) fallback += "_" + get_mesa_iso_name(s);
return fallback;
}
std::string export_engine_to_mesa_net(const engine::DynamicEngine& engine, engine::scratch::StateBlob& ctx, bool skip_weak) {
std::stringstream ss;
ss << "! Auto-generated MESA .net file from GridFire\n";
ss << "! Generated by GridFire version: " << version().toString() << "\n";
ss << "! Generated on " << std::chrono::system_clock::to_time_t(std::chrono::system_clock::now()) << "\n\n"; // Unix epoch seconds
ss << "add_isos(\n";
for (const auto& species : engine.getNetworkSpecies(ctx)) {
ss << " " << get_mesa_iso_name(species) << "\n";
}
ss << ")\n\n";
ss << "add_reactions(\n";
const auto& reactions = engine.getNetworkReactions(ctx);
for (const auto& reaction_ptr : reactions) {
if (skip_weak && (reaction_ptr->type() == reaction::ReactionType::WEAK ||
reaction_ptr->type() == reaction::ReactionType::REACLIB_WEAK ||
reaction_ptr->type() == reaction::ReactionType::LOGICAL_REACLIB_WEAK)) {
continue;
}
ss << " " << get_mesa_reaction_name(*reaction_ptr) << "\n";
}
ss << ")\n";
return ss.str();
}
}
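The isotope-name normalization at the top of this file (lowercase, strip hyphens, then map the common aliases p, n, d, t, a to their MESA names) can be sketched as a standalone function, shown here without the gridfire types:

```cpp
#include <algorithm>
#include <cassert>
#include <cctype>
#include <string>

// Sketch of get_mesa_iso_name above: lowercase the species name, remove
// hyphens, and translate common aliases to MESA's isotope naming.
std::string mesa_iso_name(std::string name) {
    std::transform(name.begin(), name.end(), name.begin(),
                   [](unsigned char c) { return std::tolower(c); });
    name.erase(std::remove(name.begin(), name.end(), '-'), name.end());
    if (name == "p") return "h1";
    if (name == "n" || name == "n1") return "neut";
    if (name == "d") return "h2";
    if (name == "t") return "h3";
    if (name == "a") return "he4";
    return name;
}
```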

View File

@@ -61,14 +61,7 @@ namespace gridfire::policy {
std::make_unique<engine::GraphEngine>(m_initializing_composition, *m_partition_function, engine::NetworkBuildDepth::ThirdOrder, engine::NetworkConstructionFlags::DEFAULT)
);
m_network_stack.emplace_back(
std::make_unique<engine::MultiscalePartitioningEngineView>(*m_network_stack.back().get())
);
m_network_stack.emplace_back(
std::make_unique<engine::AdaptiveEngineView>(*m_network_stack.back().get())
);
std::unique_ptr<engine::scratch::StateBlob> scratch_blob = get_stack_scratch_blob();
std::unique_ptr<engine::scratch::StateBlob> scratch_blob = m_network_stack.back()->constructStateBlob(nullptr);
m_status = NetworkPolicyStatus::INITIALIZED_UNVERIFIED;
m_status = check_status(*scratch_blob);
@@ -110,8 +103,6 @@ namespace gridfire::policy {
std::vector<engine::EngineTypes> MainSequencePolicy::get_engine_types_stack() const {
return {
engine::EngineTypes::GRAPH_ENGINE,
engine::EngineTypes::MULTISCALE_PARTITIONING_ENGINE_VIEW,
engine::EngineTypes::ADAPTIVE_ENGINE_VIEW
};
}
@@ -125,32 +116,14 @@ namespace gridfire::policy {
}
auto blob = std::make_unique<engine::scratch::StateBlob>();
blob->enroll<engine::scratch::GraphEngineScratchPad>();
blob->enroll<engine::scratch::AdaptiveEngineViewScratchPad>();
blob->enroll<engine::scratch::MultiscalePartitioningEngineViewScratchPad>();
const engine::GraphEngine* graph_engine = dynamic_cast<engine::GraphEngine*>(m_network_stack.front().get());
if (!graph_engine) {
throw exceptions::PolicyError("Cannot get stack scratch blob from MainSequencePolicy: The base engine is not a GraphEngine. This indicates a serious internal inconsistency and should be reported to the GridFire developers, thank you.");
}
const engine::MultiscalePartitioningEngineView* multiscale_engine = dynamic_cast<engine::MultiscalePartitioningEngineView*>(m_network_stack[1].get());
if (!multiscale_engine) {
throw exceptions::PolicyError("Cannot get stack scratch blob from MainSequencePolicy: The middle engine is not a MultiscalePartitioningEngineView. This indicates a serious internal inconsistency and should be reported to the GridFire developers, thank you.");
}
const engine::AdaptiveEngineView* adaptive_engine = dynamic_cast<engine::AdaptiveEngineView*>(m_network_stack.back().get());
if (!adaptive_engine) {
throw exceptions::PolicyError("Cannot get stack scratch blob from MainSequencePolicy: The top engine is not an AdaptiveEngineView. This indicates a serious internal inconsistency and should be reported to the GridFire developers, thank you.");
}
auto* graph_engine_state = engine::scratch::get_state<engine::scratch::GraphEngineScratchPad, false>(*blob);
graph_engine_state->initialize(*graph_engine);
auto* multiscale_engine_state = engine::scratch::get_state<engine::scratch::MultiscalePartitioningEngineViewScratchPad, false>(*blob);
multiscale_engine_state->initialize();
auto* adaptive_engine_state = engine::scratch::get_state<engine::scratch::AdaptiveEngineViewScratchPad, false>(*blob);
adaptive_engine_state->initialize(*adaptive_engine);
return blob;
}

View File

@@ -23,6 +23,7 @@
#include "gridfire/trigger/procedures/trigger_pprint.h"
#include "gridfire/exceptions/error_solver.h"
#include "gridfire/utils/sundials.h"
#include "gridfire/config/config.h"
namespace gridfire::solver {
@@ -74,6 +75,19 @@ namespace gridfire::solver {
return description;
}
fourdst::composition::Composition PointSolverTimestepContext::getPhysicalComposition() const {
sunrealtype* y_data = N_VGetArrayPointer(state);
std::vector<double> y_vec(y_data, y_data + networkSpecies.size());
for (size_t i = 0; i < y_vec.size(); i++) {
if (y_vec[i] < 0 && std::abs(y_vec[i]) <= 1e-16) {
y_vec[i] = 0.0; // clamp to 0 to avoid small numerical noise issues
}
}
const fourdst::composition::Composition base_comp(networkSpecies, y_vec);
return engine.collectComposition(state_ctx, base_comp, T9, rho);
}
void PointSolverContext::init() {
reset_all();
init_context();
@@ -165,6 +179,15 @@ namespace gridfire::solver {
const DynamicEngine &engine
): SingleZoneNetworkSolver(engine) {}
PointSolver::PointSolver(
const engine::DynamicEngine &engine,
const config::GridFireConfig &config
) : SingleZoneNetworkSolver(engine) {
m_config.mutate([&config](auto& cfg) {
cfg = config;
});
}
NetOut PointSolver::evaluate(
SolverContextBase& solver_ctx,
const NetIn& netIn
@@ -179,10 +202,13 @@ namespace gridfire::solver {
bool forceReinitialize
) const {
auto* sctx_p = dynamic_cast<PointSolverContext*>(&solver_ctx);
if (sctx_p == nullptr) {
throw exceptions::SolverError("Provided solver context is not of type PointSolverContext");
}
LOG_TRACE_L1(m_logger, "Starting solver evaluation with T9: {} and rho: {}", netIn.temperature/1e9, netIn.density);
LOG_TRACE_L1(m_logger, "Building engine update trigger....");
auto trigger = trigger::solver::CVODE::makeEnginePartitioningTrigger(1e12, 1e10, 0.5, 2);
auto trigger = trigger::solver::CVODE::makeEnginePartitioningTrigger(m_config->solver.pointSolver.trigger);
LOG_TRACE_L1(m_logger, "Engine update trigger built!");
@@ -194,10 +220,10 @@ namespace gridfire::solver {
// 3. If the user has not set tolerances in code and the config does not have them, use hardcoded defaults
if (!sctx_p->abs_tol.has_value()) {
sctx_p->abs_tol = m_config->solver.cvode.absTol;
sctx_p->abs_tol = m_config->solver.pointSolver.absTol;
}
if (!sctx_p->rel_tol.has_value()) {
sctx_p->rel_tol = m_config->solver.cvode.relTol;
sctx_p->rel_tol = m_config->solver.pointSolver.relTol;
}
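The tolerance fallback above follows a simple resolution order: a value the user set in code wins, otherwise the config value is used. A minimal sketch of that pattern with `std::optional`; the function name is illustrative:

```cpp
#include <cassert>
#include <optional>

// Sketch of the tolerance-resolution order above: an explicitly set
// value takes precedence; otherwise fall back to the configured value.
double resolve_tolerance(const std::optional<double>& user_value,
                         const double config_value) {
    return user_value.value_or(config_value);
}
```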
@@ -369,6 +395,8 @@ namespace gridfire::solver {
rcMap,
*sctx_p->engine_ctx
);
ctx.current_total_energy = current_energy;
ctx.current_neutrino_energy_loss_rate = accumulated_neutrino_energy_loss;
prev_nonlinear_iterations = nliters + total_nonlinear_iterations;
prev_convergence_failures = nlcfails + total_convergence_failures;
@@ -395,7 +423,7 @@ namespace gridfire::solver {
trigger::printWhy(trigger->why(ctx));
}
trigger->update(ctx);
accumulated_energy += current_energy; // Add the specific energy rate to the accumulated energy
accumulated_energy = current_energy; // current_energy is already the cumulative integrated energy, so assign rather than add
total_nonlinear_iterations += nliters;
total_convergence_failures += nlcfails;
total_steps += n_steps;
@@ -569,7 +597,7 @@ namespace gridfire::solver {
LOG_INFO(m_logger, "CVODE iteration complete");
sunrealtype* y_data = N_VGetArrayPointer(sctx_p->Y);
accumulated_energy += y_data[numSpecies];
accumulated_energy = y_data[numSpecies]; // the state vector carries the integrated energy, so assign rather than add
std::vector<double> y_vec(y_data, y_data + numSpecies);
for (double & i : y_vec) {
@@ -789,6 +817,16 @@ namespace gridfire::solver {
return 0;
}
void PointSolver::cvode_error_handler(int line, const char *func, const char *file, const char *msg, SUNErrCode err_code, void *err_user_data, SUNContext sunctx) {
auto* logger = static_cast<quill::Logger*>(err_user_data);
if (!logger) return;
if (err_code < 0) {
LOG_ERROR(logger, "[SUNDIALS ERROR] {} at {}:{}: {}", func, file, line, msg);
} else {
LOG_WARNING(logger, "[SUNDIALS WARNING] {} at {}:{}: {}", func, file, line, msg);
}
}
PointSolver::CVODERHSOutputData PointSolver::calculate_rhs(
const sunrealtype t,
N_Vector y,
@@ -863,9 +901,12 @@ namespace gridfire::solver {
sctx_p->cvode_mem = CVodeCreate(CV_BDF, sctx_p->sun_ctx);
utils::check_cvode_flag(sctx_p->cvode_mem == nullptr ? -1 : 0, "CVodeCreate");
sctx_p->Y = utils::init_sun_vector(N, sctx_p->sun_ctx);
sctx_p->YErr = N_VClone(sctx_p->Y);
SUNContext_PushErrHandler(sctx_p->sun_ctx, cvode_error_handler, m_logger);
sunrealtype *y_data = N_VGetArrayPointer(sctx_p->Y);
for (size_t i = 0; i < numSpecies; i++) {
const auto& species = m_engine.getNetworkSpecies(*sctx_p->engine_ctx)[i];
@@ -880,6 +921,7 @@ namespace gridfire::solver {
utils::check_cvode_flag(CVodeInit(sctx_p->cvode_mem, cvode_rhs_wrapper, current_time, sctx_p->Y), "CVodeInit");
utils::check_cvode_flag(CVodeSStolerances(sctx_p->cvode_mem, relTol, absTol), "CVodeSStolerances");
utils::check_cvode_flag(CVodeSetInitStep(sctx_p->cvode_mem, 1.0e-8), "CVodeSetInitStep");
// Constraints
// We constrain the solution vector using CVODE's built-in constraint flags, as outlined on page 53 of the CVODE manual
@@ -1003,10 +1045,10 @@ namespace gridfire::solver {
std::vector<double> E_full(y_err_data, y_err_data + num_components - 1);
if (!sctx_p->abs_tol.has_value()) {
sctx_p->abs_tol = m_config->solver.cvode.absTol;
sctx_p->abs_tol = m_config->solver.pointSolver.absTol;
}
if (!sctx_p->rel_tol.has_value()) {
sctx_p->rel_tol = m_config->solver.cvode.relTol;
sctx_p->rel_tol = m_config->solver.pointSolver.relTol;
}
auto result = diagnostics::report_limiting_species(ctx, *user_data.engine, Y_full, E_full, sctx_p->rel_tol.value(), sctx_p->abs_tol.value(), 10, to_file);

View File

@@ -4,12 +4,16 @@
#include "gridfire/trigger/trigger_logical.h"
#include "gridfire/trigger/trigger_abstract.h"
#include "sundials/sundials_nvector.h"
#include "quill/LogMacros.h"
#include <memory>
#include <deque>
#include <string>
#include "gridfire/utils/utils.h"
namespace {
template <typename T>
void push_to_fixed_deque(std::deque<T>& dq, T value, size_t max_size) {
@@ -369,23 +373,195 @@ namespace gridfire::trigger::solver::CVODE {
return false;
}
BoundaryFluxTrigger::BoundaryFluxTrigger(
const double relativeThreshold,
const double absoluteThreshold
) :
m_relativeThreshold(relativeThreshold),
m_absoluteThreshold(absoluteThreshold) {
if (m_relativeThreshold <= 0.0) {
throw exceptions::GridFireError(std::format("Relative threshold must be strictly positive; currently it is {}", m_relativeThreshold));
}
}
void BoundaryFluxTrigger::step(const gridfire::solver::PointSolverTimestepContext &ctx) {
// Does nothing; not a stateful trigger
}
bool BoundaryFluxTrigger::check(const gridfire::solver::PointSolverTimestepContext &ctx) const {
// First get the current total flow through all active reactions
sunrealtype* y_data = N_VGetArrayPointer(ctx.state);
std::vector<double> Y(y_data, y_data + ctx.networkSpecies.size());
// Adjust any tiny negative abundances to zero using std::ranges
std::ranges::transform(
Y,
Y.begin(),
[](const double y) {
if (y < 0 && y > -1e-16) {
return 0.0;
}
return y;
}
);
const fourdst::composition::Composition comp(ctx.networkSpecies, Y);
const double net_active_flow = get_reaction_set_flow(
ctx.engine.getNetworkReactions(ctx.state_ctx),
ctx,
comp,
ctx.T9,
ctx.rho,
ReactionSetType::ACTIVE
);
const reaction::ReactionSet inactiveReactions = ctx.engine.getInactiveNetworkReactions(ctx.state_ctx);
if (inactiveReactions.empty()) {
m_misses++;
return false; // No inactive reactions to consider
}
const double net_boundary_flow = get_reaction_set_flow(
inactiveReactions,
ctx,
comp,
ctx.T9,
ctx.rho,
ReactionSetType::INACTIVE
);
if (net_boundary_flow > m_absoluteThreshold) {
m_hits++;
return true;
}
const double relative_boundary_flow = net_boundary_flow / (net_active_flow + 1e-300); // Avoid division by zero
if (relative_boundary_flow >= m_relativeThreshold) {
m_hits++;
return true;
}
m_misses++;
return false;
}
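The decision at the heart of `check()` above combines two thresholds: fire if the boundary (inactive) flow exceeds the absolute threshold outright, or if its ratio to the active flow reaches the relative threshold. A self-contained sketch of that decision, with an illustrative function name:

```cpp
#include <cassert>

// Sketch of the two-threshold test in BoundaryFluxTrigger::check():
// absolute exceedance fires immediately; otherwise compare the ratio of
// boundary to active flow against the relative threshold.
bool boundary_flux_exceeded(const double boundary_flow,
                            const double active_flow,
                            const double abs_threshold,
                            const double rel_threshold) {
    if (boundary_flow > abs_threshold) return true;
    const double ratio = boundary_flow / (active_flow + 1e-300); // avoid 0/0
    return ratio >= rel_threshold;
}
```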
void BoundaryFluxTrigger::update(const gridfire::solver::PointSolverTimestepContext &ctx) {
// Stateless trigger; only the update counter is tracked
m_updates++;
}
void BoundaryFluxTrigger::reset() {
m_hits = 0;
m_misses = 0;
m_updates = 0;
m_resets++;
}
std::string BoundaryFluxTrigger::name() const {
return "BoundaryFluxTrigger";
}
std::string BoundaryFluxTrigger::describe() const {
return std::format("BoundaryFluxTrigger(rel={}, abs={})", m_relativeThreshold, m_absoluteThreshold);
}
TriggerResult BoundaryFluxTrigger::why(const gridfire::solver::PointSolverTimestepContext &ctx) const {
sunrealtype* y_data = N_VGetArrayPointer(ctx.state);
const std::vector<double> Y(y_data, y_data + ctx.networkSpecies.size());
const fourdst::composition::Composition comp(ctx.networkSpecies, Y);
const double net_active_flow = get_reaction_set_flow(
ctx.engine.getNetworkReactions(ctx.state_ctx),
ctx,
comp,
ctx.T9,
ctx.rho,
ReactionSetType::ACTIVE
);
const reaction::ReactionSet inactiveReactions = ctx.engine.getInactiveNetworkReactions(ctx.state_ctx);
const double net_boundary_flow = get_reaction_set_flow(
inactiveReactions,
ctx,
comp,
ctx.T9,
ctx.rho,
ReactionSetType::INACTIVE
);
TriggerResult result;
result.name = name();
if (check(ctx)) {
result.value = true;
result.description = std::format(
"Triggered because boundary flux ({} mol/g/s) exceeded thresholds: absolute threshold = {} mol/g/s, relative threshold = {} (boundary flow = {} mol/g/s, active flow = {} mol/g/s)",
net_boundary_flow,
m_absoluteThreshold,
m_relativeThreshold,
net_boundary_flow,
net_active_flow
);
} else {
result.value = false;
result.description = std::format(
"Not triggered because boundary flux ({} mol/g/s) did not exceed thresholds: absolute threshold = {} mol/g/s, relative threshold = {} (boundary flow = {} mol/g/s, active flow = {} mol/g/s)",
net_boundary_flow,
m_absoluteThreshold,
m_relativeThreshold,
net_boundary_flow,
net_active_flow
);
}
return result;
}
size_t BoundaryFluxTrigger::numMisses() const {
return m_misses;
}
double BoundaryFluxTrigger::get_reaction_set_flow(
const reaction::ReactionSet &reactions,
const gridfire::solver::PointSolverTimestepContext &ctx,
const fourdst::composition::Composition &comp,
const double T9,
const double rho,
const ReactionSetType type
) {
double flow = 0.0;
for (const auto& reaction: reactions) {
double rFlow = 0.0;
if (type == ReactionSetType::ACTIVE) {
rFlow = ctx.engine.calculateMolarReactionFlow(ctx.state_ctx, *reaction, comp, T9, rho);
} else {
rFlow = ctx.engine.getInactiveReactionMolarReactionFlow(ctx.state_ctx, *reaction, comp, T9, rho);
}
flow += std::abs(rFlow);
}
return flow;
}
size_t BoundaryFluxTrigger::numTriggers() const {
return m_hits;
}
std::unique_ptr<Trigger<gridfire::solver::PointSolverTimestepContext>> makeEnginePartitioningTrigger(
const double simulationTimeInterval,
const double offDiagonalThreshold,
const double timestepCollapseRatio,
const size_t maxConvergenceFailures
const config::TriggerConfig& cfg
) {
using ctx_t = gridfire::solver::PointSolverTimestepContext;
// 1. INSTABILITY TRIGGERS (High Priority)
// 1. INSTABILITY TRIGGERS
auto convergenceFailureTrigger = std::make_unique<ConvergenceFailureTrigger>(
maxConvergenceFailures,
cfg.maxConvergenceFailures,
1.0f,
10
);
auto timestepCollapseTrigger = std::make_unique<TimestepCollapseTrigger>(
timestepCollapseRatio,
cfg.timestepCollapseRatio,
true, // relative
5
);
@@ -396,12 +572,24 @@ namespace gridfire::trigger::solver::CVODE {
);
// 2. MAINTENANCE TRIGGERS
auto offDiagTrigger = std::make_unique<OffDiagonalTrigger>(offDiagonalThreshold);
auto offDiagTrigger = std::make_unique<OffDiagonalTrigger>(cfg.offDiagonalThreshold);
// 3. PREDICTIVE TRIGGERS
auto boundaryFluxTrigger = std::make_unique<BoundaryFluxTrigger>(
cfg.boundaryFlux.relativeThreshold,
cfg.boundaryFlux.absoluteThreshold
);
// Combine boundary flux into off-diagonal trigger
auto nonInstabilityGroup = std::make_unique<OrTrigger<ctx_t>>(
std::move(offDiagTrigger),
std::move(boundaryFluxTrigger)
);
// Combine: (Instability) OR (Structure Change)
return std::make_unique<OrTrigger<ctx_t>>(
std::move(instabilityGroup),
std::move(offDiagTrigger)
std::move(nonInstabilityGroup)
);
}
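The composition above nests OR-triggers: (instability group) OR (off-diagonal OR boundary-flux), so the partitioning trigger fires when any member fires. A generic, predicate-based sketch of that composition (the struct name is illustrative, not the gridfire `OrTrigger`):

```cpp
#include <cassert>
#include <functional>
#include <vector>

// Sketch of the OR-composition used above: the combined trigger fires
// when any member trigger fires for the given context.
template <typename Ctx>
struct OrTriggerSketch {
    std::vector<std::function<bool(const Ctx&)>> members;
    bool check(const Ctx& ctx) const {
        for (const auto& t : members) {
            if (t(ctx)) return true;
        }
        return false;
    }
};
```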

View File

@@ -17,6 +17,7 @@ gridfire_sources = files(
'lib/reaction/weak/weak_interpolator.cpp',
'lib/io/network_file.cpp',
'lib/io/generative/python.cpp',
'lib/io/generative/mesa.cpp',
'lib/solver/strategies/PointSolver.cpp',
'lib/solver/strategies/GridSolver.cpp',
'lib/solver/strategies/triggers/engine_partitioning_trigger.cpp',

View File

@@ -6,14 +6,28 @@
namespace py = pybind11;
void register_config_bindings(pybind11::module &m) {
py::class_<gridfire::config::CVODESolverConfig>(m, "CVODESolverConfig")
py::class_<gridfire::config::BoundaryFluxConfig>(m, "BoundaryFluxConfig")
.def(py::init<>())
.def_readwrite("absTol", &gridfire::config::CVODESolverConfig::absTol)
.def_readwrite("relTol", &gridfire::config::CVODESolverConfig::relTol);
.def_readwrite("relativeThreshold", &gridfire::config::BoundaryFluxConfig::relativeThreshold)
.def_readwrite("absoluteThreshold", &gridfire::config::BoundaryFluxConfig::absoluteThreshold);
py::class_<gridfire::config::TriggerConfig>(m, "TriggerConfig")
.def(py::init<>())
.def_readwrite("offDiagonalThreshold", &gridfire::config::TriggerConfig::offDiagonalThreshold)
.def_readwrite("timestepCollapseRatio", &gridfire::config::TriggerConfig::timestepCollapseRatio)
.def_readwrite("maxConvergenceFailures", &gridfire::config::TriggerConfig::maxConvergenceFailures)
.def_readwrite("boundaryFlux", &gridfire::config::TriggerConfig::boundaryFlux);
py::class_<gridfire::config::PointSolverConfig>(m, "PointSolverConfig")
.def(py::init<>())
.def_readwrite("absTol", &gridfire::config::PointSolverConfig::absTol)
.def_readwrite("relTol", &gridfire::config::PointSolverConfig::relTol)
.def_readwrite("trigger", &gridfire::config::PointSolverConfig::trigger);
py::class_<gridfire::config::SolverConfig>(m, "SolverConfig")
.def(py::init<>())
.def_readwrite("cvode", &gridfire::config::SolverConfig::cvode);
.def_readwrite("pointSolver", &gridfire::config::SolverConfig::pointSolver);
py::class_<gridfire::config::AdaptiveEngineViewConfig>(m, "AdaptiveEngineViewConfig")
.def(py::init<>())

View File

@@ -194,6 +194,25 @@ namespace {
py::arg("ctx"),
py::arg("species"),
"Get the status of a species in the network."
)
.def("constructStateBlob",
&T::constructStateBlob,
py::arg("blob") = std::nullopt,
"Construct the state blob for this engine. Base engines (e.g. GraphEngine) can generally call this with no arguments, whereas views should be passed an already constructed state blob, which is cloned and then extended."
)
.def(
"getMostRecentRHSCalculation",
[](const T& self, sp::StateBlob& ctx) -> std::optional<gridfire::engine::StepDerivatives<double>> {
auto result = self.getMostRecentRHSCalculation(ctx);
if (!result.has_value()) {
return std::nullopt;
} else {
return result.value();
}
},
py::arg("ctx"),
"Retrieve the most recent RHS calculation from the engine"
);
}
@@ -529,7 +548,18 @@ void con_stype_register_graph_engine_bindings(const pybind11::module &m) {
&gridfire::engine::GraphEngine::isUsingReverseReactions,
"Check if the engine is using reverse reactions."
);
py_graph_engine_bindings.def(
"addReaction",
py::overload_cast<const gridfire::reaction::Reaction&>(&gridfire::engine::GraphEngine::addReaction),
py::arg("reaction"),
"Add a reaction to the engine's network manually."
);
py_graph_engine_bindings.def(
"addReaction",
py::overload_cast<const std::string&>(&gridfire::engine::GraphEngine::addReaction),
py::arg("reaction_id"),
"Add a reaction to the engine's network manually using a reaction identifier string."
);
// Register the general dynamic engine bindings
registerDynamicEngineDefs<gridfire::engine::GraphEngine, gridfire::engine::DynamicEngine>(py_graph_engine_bindings);
}

View File

@@ -293,6 +293,16 @@ std::optional<gridfire::engine::StepDerivatives<double>> PyDynamicEngine::getMos
);
}
std::unique_ptr<gridfire::engine::scratch::StateBlob> PyDynamicEngine::constructStateBlob(
const gridfire::engine::scratch::StateBlob *blob) const {
PYBIND11_OVERRIDE_PURE(
std::unique_ptr<gridfire::engine::scratch::StateBlob>,
gridfire::engine::DynamicEngine,
constructStateBlob,
blob
);
}
const gridfire::engine::Engine& PyEngineView::getBaseEngine() const {
PYBIND11_OVERRIDE_PURE(
const gridfire::engine::Engine&,

View File

@@ -130,6 +130,10 @@ public:
gridfire::engine::scratch::StateBlob &ctx
) const override;
std::unique_ptr<gridfire::engine::scratch::StateBlob> constructStateBlob(
const gridfire::engine::scratch::StateBlob *blob
) const override;
private:
mutable std::vector<fourdst::atomic::Species> m_species_cache;
};

View File

@@ -51,6 +51,13 @@ void register_solver_bindings(const py::module &m) {
},
py::return_value_policy::reference_internal
);
py_cvode_timestep_context.def_property_readonly(
"composition",
[](const gridfire::solver::PointSolverTimestepContext& self) -> fourdst::composition::Composition {
return self.getPhysicalComposition();
}
);
auto py_solver_context_base = py::class_<gridfire::solver::SolverContextBase>(m, "SolverContextBase");
@@ -166,6 +173,20 @@ void register_solver_bindings(const py::module &m) {
"Initialize the PointSolver object."
);
py_point_solver.def(
py::init<gridfire::engine::DynamicEngine&, gridfire::config::GridFireConfig&>(),
py::arg("engine"),
py::arg("config"),
"Initialize the PointSolver object with a configuration set."
);
py_point_solver.def(
"getConfig",
&gridfire::solver::PointSolver::getConfig,
"Get a copy of the config object"
);
py_point_solver.def(
"evaluate",
py::overload_cast<gridfire::solver::SolverContextBase&, const gridfire::NetIn&, bool, bool>(&gridfire::solver::PointSolver::evaluate, py::const_),

File diff suppressed because one or more lines are too long

View File

@@ -3,7 +3,7 @@ GridFire configuration bindings
"""
from __future__ import annotations
import typing
__all__: list[str] = ['AdaptiveEngineViewConfig', 'CVODESolverConfig', 'EngineConfig', 'EngineViewConfig', 'GridFireConfig', 'SolverConfig']
__all__: list[str] = ['AdaptiveEngineViewConfig', 'BoundaryFluxConfig', 'EngineConfig', 'EngineViewConfig', 'GridFireConfig', 'PointSolverConfig', 'SolverConfig', 'TriggerConfig']
class AdaptiveEngineViewConfig:
def __init__(self) -> None:
...
@@ -13,20 +13,20 @@ class AdaptiveEngineViewConfig:
@relativeCullingThreshold.setter
def relativeCullingThreshold(self, arg0: typing.SupportsFloat) -> None:
...
class CVODESolverConfig:
class BoundaryFluxConfig:
def __init__(self) -> None:
...
@property
def absTol(self) -> float:
def absoluteThreshold(self) -> float:
...
@absTol.setter
def absTol(self, arg0: typing.SupportsFloat) -> None:
@absoluteThreshold.setter
def absoluteThreshold(self, arg0: typing.SupportsFloat) -> None:
...
@property
def relTol(self) -> float:
def relativeThreshold(self) -> float:
...
@relTol.setter
def relTol(self, arg0: typing.SupportsFloat) -> None:
@relativeThreshold.setter
def relativeThreshold(self, arg0: typing.SupportsFloat) -> None:
...
class EngineConfig:
views: EngineViewConfig
@@ -41,7 +41,45 @@ class GridFireConfig:
solver: SolverConfig
def __init__(self) -> None:
...
class SolverConfig:
cvode: CVODESolverConfig
class PointSolverConfig:
trigger: TriggerConfig
def __init__(self) -> None:
...
@property
def absTol(self) -> float:
...
@absTol.setter
def absTol(self, arg0: typing.SupportsFloat) -> None:
...
@property
def relTol(self) -> float:
...
@relTol.setter
def relTol(self, arg0: typing.SupportsFloat) -> None:
...
class SolverConfig:
pointSolver: PointSolverConfig
def __init__(self) -> None:
...
class TriggerConfig:
boundaryFlux: BoundaryFluxConfig
def __init__(self) -> None:
...
@property
def maxConvergenceFailures(self) -> float:
...
@maxConvergenceFailures.setter
def maxConvergenceFailures(self, arg0: typing.SupportsFloat) -> None:
...
@property
def offDiagonalThreshold(self) -> float:
...
@offDiagonalThreshold.setter
def offDiagonalThreshold(self, arg0: typing.SupportsFloat) -> None:
...
@property
def timestepCollapseRatio(self) -> float:
...
@timestepCollapseRatio.setter
def timestepCollapseRatio(self, arg0: typing.SupportsFloat) -> None:
...

View File

@@ -37,6 +37,10 @@ class AdaptiveEngineView(DynamicEngine):
"""
Recursively collect composition from current engine and any sub engines if they exist.
"""
def constructStateBlob(self, blob: scratchpads.StateBlob = None) -> scratchpads.StateBlob:
"""
Construct the state blob for this engine. Base engines (e.g. GraphEngine) can generally call this with no arguments, whereas views should be passed an already constructed state blob, which is cloned and then extended.
"""
@typing.overload
def generateJacobianMatrix(self: DynamicEngine, ctx: scratchpads.StateBlob, comp: fourdst._phys.composition.Composition, T9: typing.SupportsFloat, rho: typing.SupportsFloat) -> NetworkJacobian:
"""
@@ -56,6 +60,10 @@ class AdaptiveEngineView(DynamicEngine):
"""
Get the base engine associated with this adaptive engine view.
"""
def getMostRecentRHSCalculation(self, ctx: scratchpads.StateBlob) -> gridfire._gridfire.engine.StepDerivatives | None:
"""
Retrieve the most recent RHS calculation from the engine
"""
def getNetworkReactions(self, arg0: scratchpads.StateBlob) -> gridfire._gridfire.reaction.ReactionSet:
"""
Get the set of logical reactions in the network.
@@ -115,6 +123,10 @@ class DefinedEngineView(DynamicEngine):
"""
Recursively collect composition from current engine and any sub engines if they exist.
"""
def constructStateBlob(self, blob: scratchpads.StateBlob = None) -> scratchpads.StateBlob:
"""
Construct the state blob for this engine. Base engines (e.g. GraphEngine) can generally call this with no arguments, whereas views should be passed an already constructed state blob, which is cloned and then extended.
"""
@typing.overload
def generateJacobianMatrix(self: DynamicEngine, ctx: scratchpads.StateBlob, comp: fourdst._phys.composition.Composition, T9: typing.SupportsFloat, rho: typing.SupportsFloat) -> NetworkJacobian:
"""
@@ -134,6 +146,10 @@ class DefinedEngineView(DynamicEngine):
"""
Get the base engine associated with this defined engine view.
"""
def getMostRecentRHSCalculation(self, ctx: scratchpads.StateBlob) -> gridfire._gridfire.engine.StepDerivatives | None:
"""
Retrieve the most recent RHS calculation from the engine.
"""
def getNetworkReactions(self, arg0: scratchpads.StateBlob) -> gridfire._gridfire.reaction.ReactionSet:
"""
Get the set of logical reactions in the network.
@@ -250,6 +266,10 @@ class FileDefinedEngineView(DefinedEngineView):
"""
Recursively collect composition from current engine and any sub engines if they exist.
"""
def constructStateBlob(self, blob: scratchpads.StateBlob = None) -> scratchpads.StateBlob:
"""
Construct the state blob for this engine. Base engines (e.g. GraphEngine) can generally call this with no arguments, whereas views should be passed an already constructed state blob; that blob is cloned and the clone is then modified.
"""
@typing.overload
def generateJacobianMatrix(self: DynamicEngine, ctx: scratchpads.StateBlob, comp: fourdst._phys.composition.Composition, T9: typing.SupportsFloat, rho: typing.SupportsFloat) -> NetworkJacobian:
"""
@@ -269,6 +289,10 @@ class FileDefinedEngineView(DefinedEngineView):
"""
Get the base engine associated with this file defined engine view.
"""
def getMostRecentRHSCalculation(self, ctx: scratchpads.StateBlob) -> gridfire._gridfire.engine.StepDerivatives | None:
"""
Retrieve the most recent RHS calculation from the engine.
"""
def getNetworkFile(self) -> str:
"""
Get the network file associated with this defined engine view.
@@ -329,6 +353,16 @@ class GraphEngine(DynamicEngine):
"""
Initialize GraphEngine with a set of reactions.
"""
@typing.overload
def addReaction(self, reaction: ...) -> None:
"""
Add a reaction to the engine's network manually.
"""
@typing.overload
def addReaction(self, reaction_id: str) -> None:
"""
Add a reaction to the engine's network manually using a reaction identifier string.
"""
def calculateEpsDerivatives(self, ctx: scratchpads.StateBlob, comp: ..., T9: typing.SupportsFloat, rho: typing.SupportsFloat) -> ...:
"""
Calculate deps/dT and deps/drho.
@@ -357,6 +391,10 @@ class GraphEngine(DynamicEngine):
"""
Recursively collect composition from current engine and any sub engines if they exist.
"""
def constructStateBlob(self, blob: scratchpads.StateBlob = None) -> scratchpads.StateBlob:
"""
Construct the state blob for this engine. Base engines (e.g. GraphEngine) can generally call this with no arguments, whereas views should be passed an already constructed state blob; that blob is cloned and the clone is then modified.
"""
def exportToCSV(self, ctx: scratchpads.StateBlob, filename: str) -> None:
"""
Export the network to a CSV file for analysis.
@@ -380,6 +418,10 @@ class GraphEngine(DynamicEngine):
"""
Generate the Jacobian matrix for the given sparsity pattern.
"""
def getMostRecentRHSCalculation(self, ctx: scratchpads.StateBlob) -> gridfire._gridfire.engine.StepDerivatives | None:
"""
Retrieve the most recent RHS calculation from the engine.
"""
def getNetworkReactions(self, arg0: scratchpads.StateBlob) -> gridfire._gridfire.reaction.ReactionSet:
"""
Get the set of logical reactions in the network.
@@ -461,6 +503,10 @@ class MultiscalePartitioningEngineView(DynamicEngine):
"""
Recursively collect composition from current engine and any sub engines if they exist.
"""
def constructStateBlob(self, blob: scratchpads.StateBlob = None) -> scratchpads.StateBlob:
"""
Construct the state blob for this engine. Base engines (e.g. GraphEngine) can generally call this with no arguments, whereas views should be passed an already constructed state blob; that blob is cloned and the clone is then modified.
"""
def exportToDot(self, ctx: scratchpads.StateBlob, filename: str, comp: fourdst._phys.composition.Composition, T9: typing.SupportsFloat, rho: typing.SupportsFloat) -> None:
"""
Export the network to a DOT file for visualization.
@@ -492,6 +538,10 @@ class MultiscalePartitioningEngineView(DynamicEngine):
"""
Get the list of fast species in the network.
"""
def getMostRecentRHSCalculation(self, ctx: scratchpads.StateBlob) -> gridfire._gridfire.engine.StepDerivatives | None:
"""
Retrieve the most recent RHS calculation from the engine.
"""
def getNetworkReactions(self, arg0: scratchpads.StateBlob) -> gridfire._gridfire.reaction.ReactionSet:
"""
Get the set of logical reactions in the network.
@@ -750,6 +800,10 @@ class NetworkPrimingEngineView(DefinedEngineView):
"""
Recursively collect composition from current engine and any sub engines if they exist.
"""
def constructStateBlob(self, blob: scratchpads.StateBlob = None) -> scratchpads.StateBlob:
"""
Construct the state blob for this engine. Base engines (e.g. GraphEngine) can generally call this with no arguments, whereas views should be passed an already constructed state blob; that blob is cloned and the clone is then modified.
"""
@typing.overload
def generateJacobianMatrix(self: DynamicEngine, ctx: scratchpads.StateBlob, comp: fourdst._phys.composition.Composition, T9: typing.SupportsFloat, rho: typing.SupportsFloat) -> NetworkJacobian:
"""
@@ -769,6 +823,10 @@ class NetworkPrimingEngineView(DefinedEngineView):
"""
Get the base engine associated with this priming engine view.
"""
def getMostRecentRHSCalculation(self, ctx: scratchpads.StateBlob) -> gridfire._gridfire.engine.StepDerivatives | None:
"""
Retrieve the most recent RHS calculation from the engine.
"""
def getNetworkReactions(self, arg0: scratchpads.StateBlob) -> gridfire._gridfire.reaction.ReactionSet:
"""
Get the set of logical reactions in the network.


@@ -4,6 +4,8 @@ GridFire numerical solver bindings
from __future__ import annotations
import collections.abc
import fourdst._phys.atomic
import fourdst._phys.composition
import gridfire._gridfire.config
import gridfire._gridfire.engine
import gridfire._gridfire.engine.scratchpads
import gridfire._gridfire.type
@@ -47,15 +49,25 @@ class MultiZoneDynamicNetworkSolver:
Evaluate the dynamic engine using the dynamic engine class for multiple zones (using OpenMP if available).
"""
class PointSolver(SingleZoneDynamicNetworkSolver):
@typing.overload
def __init__(self, engine: gridfire._gridfire.engine.DynamicEngine) -> None:
"""
Initialize the PointSolver object.
"""
@typing.overload
def __init__(self, engine: gridfire._gridfire.engine.DynamicEngine, config: gridfire._gridfire.config.GridFireConfig) -> None:
"""
Initialize the PointSolver object with a configuration set.
"""
def evaluate(self, solver_ctx: SolverContextBase, netIn: gridfire._gridfire.type.NetIn, display_trigger: bool = False, force_reinitialization: bool = False) -> gridfire._gridfire.type.NetOut:
"""
Evaluate the dynamic engine using the dynamic engine class.
"""
class PointSolverContext:
def getConfig(self) -> gridfire._gridfire.config.GridFireConfig:
"""
Get a copy of the config object
"""
class PointSolverContext(SolverContextBase):
callback: collections.abc.Callable[[PointSolverTimestepContext], None] | None
detailed_logging: bool
stdout_logging: bool
@@ -116,6 +128,9 @@ class PointSolverTimestepContext:
def T9(self) -> float:
...
@property
def composition(self) -> fourdst._phys.composition.Composition:
...
@property
def currentConvergenceFailures(self) -> int:
...
@property

File diff suppressed because one or more lines are too long


@@ -1,16 +0,0 @@
"""
Python bindings for the fourdst utility modules which are a part of the 4D-STAR project.
"""
from __future__ import annotations
from . import config
from . import engine
from . import exceptions
from . import io
from . import partition
from . import policy
from . import reaction
from . import screening
from . import solver
from . import type
from . import utils
__all__: list[str] = ['config', 'engine', 'exceptions', 'io', 'partition', 'policy', 'reaction', 'screening', 'solver', 'type', 'utils']


@@ -1,47 +0,0 @@
"""
GridFire configuration bindings
"""
from __future__ import annotations
import typing
__all__: list[str] = ['AdaptiveEngineViewConfig', 'CVODESolverConfig', 'EngineConfig', 'EngineViewConfig', 'GridFireConfig', 'SolverConfig']
class AdaptiveEngineViewConfig:
def __init__(self) -> None:
...
@property
def relativeCullingThreshold(self) -> float:
...
@relativeCullingThreshold.setter
def relativeCullingThreshold(self, arg0: typing.SupportsFloat) -> None:
...
class CVODESolverConfig:
def __init__(self) -> None:
...
@property
def absTol(self) -> float:
...
@absTol.setter
def absTol(self, arg0: typing.SupportsFloat) -> None:
...
@property
def relTol(self) -> float:
...
@relTol.setter
def relTol(self, arg0: typing.SupportsFloat) -> None:
...
class EngineConfig:
views: EngineViewConfig
def __init__(self) -> None:
...
class EngineViewConfig:
adaptiveEngineView: AdaptiveEngineViewConfig
def __init__(self) -> None:
...
class GridFireConfig:
engine: EngineConfig
solver: SolverConfig
def __init__(self) -> None:
...
class SolverConfig:
cvode: CVODESolverConfig
def __init__(self) -> None:
...
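The classes above form a small configuration tree: `GridFireConfig` holds `engine` and `solver` branches, which in turn hold `views.adaptiveEngineView` and `cvode`. A runnable mock of that shape in plain Python (the default values are illustrative assumptions, not gridfire's):

```python
class AdaptiveEngineViewConfig:
    def __init__(self):
        self.relativeCullingThreshold = 0.0  # assumed default


class CVODESolverConfig:
    def __init__(self):
        self.absTol = 1e-8  # assumed default
        self.relTol = 1e-6  # assumed default


class EngineViewConfig:
    def __init__(self):
        self.adaptiveEngineView = AdaptiveEngineViewConfig()


class EngineConfig:
    def __init__(self):
        self.views = EngineViewConfig()


class SolverConfig:
    def __init__(self):
        self.cvode = CVODESolverConfig()


class GridFireConfig:
    def __init__(self):
        self.engine = EngineConfig()
        self.solver = SolverConfig()


# Settings are reached by walking the tree from the root config object.
cfg = GridFireConfig()
cfg.solver.cvode.relTol = 1e-9
cfg.engine.views.adaptiveEngineView.relativeCullingThreshold = 1e-12
```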


@@ -1,972 +0,0 @@
"""
Engine and Engine View bindings
"""
from __future__ import annotations
import collections.abc
import fourdst._phys.atomic
import fourdst._phys.composition
import gridfire._gridfire.io
import gridfire._gridfire.partition
import gridfire._gridfire.reaction
import gridfire._gridfire.screening
import gridfire._gridfire.type
import numpy
import numpy.typing
import typing
from . import diagnostics
from . import scratchpads
__all__: list[str] = ['ACTIVE', 'ADAPTIVE_ENGINE_VIEW', 'AdaptiveEngineView', 'BuildDepthType', 'DEFAULT', 'DEFINED_ENGINE_VIEW', 'DefinedEngineView', 'DynamicEngine', 'EQUILIBRIUM', 'Engine', 'EngineTypes', 'FILE_DEFINED_ENGINE_VIEW', 'FULL_SUCCESS', 'FifthOrder', 'FileDefinedEngineView', 'FourthOrder', 'Full', 'GRAPH_ENGINE', 'GraphEngine', 'INACTIVE_FLOW', 'MAX_ITERATIONS_REACHED', 'MULTISCALE_PARTITIONING_ENGINE_VIEW', 'MultiscalePartitioningEngineView', 'NONE', 'NOT_PRESENT', 'NO_SPECIES_TO_PRIME', 'NetworkBuildDepth', 'NetworkConstructionFlags', 'NetworkJacobian', 'NetworkPrimingEngineView', 'PRIMING_ENGINE_VIEW', 'PrimingReport', 'PrimingReportStatus', 'REACLIB', 'REACLIB_STRONG', 'REACLIB_WEAK', 'SecondOrder', 'Shallow', 'SparsityPattern', 'SpeciesStatus', 'StepDerivatives', 'ThirdOrder', 'WRL_BETA_MINUS', 'WRL_BETA_PLUS', 'WRL_ELECTRON_CAPTURE', 'WRL_POSITRON_CAPTURE', 'WRL_WEAK', 'build_nuclear_network', 'diagnostics', 'primeNetwork', 'regularize_jacobian', 'scratchpads']
class AdaptiveEngineView(DynamicEngine):
def __init__(self, baseEngine: DynamicEngine) -> None:
"""
Construct an adaptive engine view with a base engine.
"""
def calculateEpsDerivatives(self, ctx: scratchpads.StateBlob, comp: ..., T9: typing.SupportsFloat, rho: typing.SupportsFloat) -> ...:
"""
Calculate deps/dT and deps/drho.
"""
def calculateMolarReactionFlow(self: DynamicEngine, ctx: scratchpads.StateBlob, reaction: ..., comp: fourdst._phys.composition.Composition, T9: typing.SupportsFloat, rho: typing.SupportsFloat) -> float:
"""
Calculate the molar reaction flow for a given reaction.
"""
def calculateRHSAndEnergy(self: DynamicEngine, ctx: scratchpads.StateBlob, comp: fourdst._phys.composition.Composition, T9: typing.SupportsFloat, rho: typing.SupportsFloat) -> StepDerivatives:
"""
Calculate the right-hand side (dY/dt) and energy generation rate.
"""
def collectComposition(self, ctx: scratchpads.StateBlob, composition: ..., T9: typing.SupportsFloat, rho: typing.SupportsFloat) -> fourdst._phys.composition.Composition:
"""
Recursively collect composition from current engine and any sub engines if they exist.
"""
@typing.overload
def generateJacobianMatrix(self: DynamicEngine, ctx: scratchpads.StateBlob, comp: fourdst._phys.composition.Composition, T9: typing.SupportsFloat, rho: typing.SupportsFloat) -> NetworkJacobian:
"""
Generate the Jacobian matrix for the current state.
"""
@typing.overload
def generateJacobianMatrix(self: DynamicEngine, ctx: scratchpads.StateBlob, comp: fourdst._phys.composition.Composition, T9: typing.SupportsFloat, rho: typing.SupportsFloat, activeSpecies: collections.abc.Sequence[fourdst._phys.atomic.Species]) -> NetworkJacobian:
"""
Generate the Jacobian matrix only for the subset of the matrix representing the active species.
"""
@typing.overload
def generateJacobianMatrix(self: DynamicEngine, ctx: scratchpads.StateBlob, comp: fourdst._phys.composition.Composition, T9: typing.SupportsFloat, rho: typing.SupportsFloat, sparsityPattern: collections.abc.Sequence[tuple[typing.SupportsInt, typing.SupportsInt]]) -> NetworkJacobian:
"""
Generate the Jacobian matrix for the given sparsity pattern.
"""
def getBaseEngine(self) -> DynamicEngine:
"""
Get the base engine associated with this adaptive engine view.
"""
def getNetworkReactions(self, arg0: scratchpads.StateBlob) -> gridfire._gridfire.reaction.ReactionSet:
"""
Get the set of logical reactions in the network.
"""
def getNetworkSpecies(self, arg0: scratchpads.StateBlob) -> list[fourdst._phys.atomic.Species]:
"""
Get the list of species in the network.
"""
def getScreeningModel(self, arg0: scratchpads.StateBlob) -> gridfire._gridfire.screening.ScreeningType:
"""
Get the current screening model of the engine.
"""
def getSpeciesDestructionTimescales(self: DynamicEngine, ctx: scratchpads.StateBlob, comp: fourdst._phys.composition.Composition, T9: typing.SupportsFloat, rho: typing.SupportsFloat) -> dict[fourdst._phys.atomic.Species, float]:
"""
Get the destruction timescales for each species in the network.
"""
def getSpeciesIndex(self, ctx: scratchpads.StateBlob, species: fourdst._phys.atomic.Species) -> int:
"""
Get the index of a species in the network.
"""
def getSpeciesStatus(self, ctx: scratchpads.StateBlob, species: fourdst._phys.atomic.Species) -> SpeciesStatus:
"""
Get the status of a species in the network.
"""
def getSpeciesTimescales(self: DynamicEngine, ctx: scratchpads.StateBlob, comp: fourdst._phys.composition.Composition, T9: typing.SupportsFloat, rho: typing.SupportsFloat) -> dict[fourdst._phys.atomic.Species, float]:
"""
Get the timescales for each species in the network.
"""
def primeEngine(self, ctx: scratchpads.StateBlob, netIn: gridfire._gridfire.type.NetIn) -> PrimingReport:
"""
Prime the engine with a NetIn object to prepare for calculations.
"""
def project(self, ctx: scratchpads.StateBlob, netIn: gridfire._gridfire.type.NetIn) -> fourdst._phys.composition.Composition:
"""
Update the engine state based on the provided NetIn object.
"""
class BuildDepthType:
pass
class DefinedEngineView(DynamicEngine):
def __init__(self, peNames: collections.abc.Sequence[str], baseEngine: GraphEngine) -> None:
"""
Construct a defined engine view with a list of tracked reactions and a base engine.
"""
def calculateEpsDerivatives(self, ctx: scratchpads.StateBlob, comp: ..., T9: typing.SupportsFloat, rho: typing.SupportsFloat) -> ...:
"""
Calculate deps/dT and deps/drho.
"""
def calculateMolarReactionFlow(self: DynamicEngine, ctx: scratchpads.StateBlob, reaction: ..., comp: fourdst._phys.composition.Composition, T9: typing.SupportsFloat, rho: typing.SupportsFloat) -> float:
"""
Calculate the molar reaction flow for a given reaction.
"""
def calculateRHSAndEnergy(self: DynamicEngine, ctx: scratchpads.StateBlob, comp: fourdst._phys.composition.Composition, T9: typing.SupportsFloat, rho: typing.SupportsFloat) -> StepDerivatives:
"""
Calculate the right-hand side (dY/dt) and energy generation rate.
"""
def collectComposition(self, ctx: scratchpads.StateBlob, composition: ..., T9: typing.SupportsFloat, rho: typing.SupportsFloat) -> fourdst._phys.composition.Composition:
"""
Recursively collect composition from current engine and any sub engines if they exist.
"""
@typing.overload
def generateJacobianMatrix(self: DynamicEngine, ctx: scratchpads.StateBlob, comp: fourdst._phys.composition.Composition, T9: typing.SupportsFloat, rho: typing.SupportsFloat) -> NetworkJacobian:
"""
Generate the Jacobian matrix for the current state.
"""
@typing.overload
def generateJacobianMatrix(self: DynamicEngine, ctx: scratchpads.StateBlob, comp: fourdst._phys.composition.Composition, T9: typing.SupportsFloat, rho: typing.SupportsFloat, activeSpecies: collections.abc.Sequence[fourdst._phys.atomic.Species]) -> NetworkJacobian:
"""
Generate the Jacobian matrix only for the subset of the matrix representing the active species.
"""
@typing.overload
def generateJacobianMatrix(self: DynamicEngine, ctx: scratchpads.StateBlob, comp: fourdst._phys.composition.Composition, T9: typing.SupportsFloat, rho: typing.SupportsFloat, sparsityPattern: collections.abc.Sequence[tuple[typing.SupportsInt, typing.SupportsInt]]) -> NetworkJacobian:
"""
Generate the Jacobian matrix for the given sparsity pattern.
"""
def getBaseEngine(self) -> DynamicEngine:
"""
Get the base engine associated with this defined engine view.
"""
def getNetworkReactions(self, arg0: scratchpads.StateBlob) -> gridfire._gridfire.reaction.ReactionSet:
"""
Get the set of logical reactions in the network.
"""
def getNetworkSpecies(self, arg0: scratchpads.StateBlob) -> list[fourdst._phys.atomic.Species]:
"""
Get the list of species in the network.
"""
def getScreeningModel(self, arg0: scratchpads.StateBlob) -> gridfire._gridfire.screening.ScreeningType:
"""
Get the current screening model of the engine.
"""
def getSpeciesDestructionTimescales(self: DynamicEngine, ctx: scratchpads.StateBlob, comp: fourdst._phys.composition.Composition, T9: typing.SupportsFloat, rho: typing.SupportsFloat) -> dict[fourdst._phys.atomic.Species, float]:
"""
Get the destruction timescales for each species in the network.
"""
def getSpeciesIndex(self, ctx: scratchpads.StateBlob, species: fourdst._phys.atomic.Species) -> int:
"""
Get the index of a species in the network.
"""
def getSpeciesStatus(self, ctx: scratchpads.StateBlob, species: fourdst._phys.atomic.Species) -> SpeciesStatus:
"""
Get the status of a species in the network.
"""
def getSpeciesTimescales(self: DynamicEngine, ctx: scratchpads.StateBlob, comp: fourdst._phys.composition.Composition, T9: typing.SupportsFloat, rho: typing.SupportsFloat) -> dict[fourdst._phys.atomic.Species, float]:
"""
Get the timescales for each species in the network.
"""
def primeEngine(self, ctx: scratchpads.StateBlob, netIn: gridfire._gridfire.type.NetIn) -> PrimingReport:
"""
Prime the engine with a NetIn object to prepare for calculations.
"""
def project(self, ctx: scratchpads.StateBlob, netIn: gridfire._gridfire.type.NetIn) -> fourdst._phys.composition.Composition:
"""
Update the engine state based on the provided NetIn object.
"""
class DynamicEngine:
pass
class Engine:
pass
class EngineTypes:
"""
Members:
GRAPH_ENGINE : The standard graph-based engine.
ADAPTIVE_ENGINE_VIEW : An engine that adapts based on certain criteria.
MULTISCALE_PARTITIONING_ENGINE_VIEW : An engine that partitions the system at multiple scales.
PRIMING_ENGINE_VIEW : An engine that uses a priming strategy for simulations.
DEFINED_ENGINE_VIEW : An engine defined by user specifications.
FILE_DEFINED_ENGINE_VIEW : An engine defined through external files.
"""
ADAPTIVE_ENGINE_VIEW: typing.ClassVar[EngineTypes] # value = <EngineTypes.ADAPTIVE_ENGINE_VIEW: 1>
DEFINED_ENGINE_VIEW: typing.ClassVar[EngineTypes] # value = <EngineTypes.DEFINED_ENGINE_VIEW: 4>
FILE_DEFINED_ENGINE_VIEW: typing.ClassVar[EngineTypes] # value = <EngineTypes.FILE_DEFINED_ENGINE_VIEW: 5>
GRAPH_ENGINE: typing.ClassVar[EngineTypes] # value = <EngineTypes.GRAPH_ENGINE: 0>
MULTISCALE_PARTITIONING_ENGINE_VIEW: typing.ClassVar[EngineTypes] # value = <EngineTypes.MULTISCALE_PARTITIONING_ENGINE_VIEW: 2>
PRIMING_ENGINE_VIEW: typing.ClassVar[EngineTypes] # value = <EngineTypes.PRIMING_ENGINE_VIEW: 3>
__members__: typing.ClassVar[dict[str, EngineTypes]] # value = {'GRAPH_ENGINE': <EngineTypes.GRAPH_ENGINE: 0>, 'ADAPTIVE_ENGINE_VIEW': <EngineTypes.ADAPTIVE_ENGINE_VIEW: 1>, 'MULTISCALE_PARTITIONING_ENGINE_VIEW': <EngineTypes.MULTISCALE_PARTITIONING_ENGINE_VIEW: 2>, 'PRIMING_ENGINE_VIEW': <EngineTypes.PRIMING_ENGINE_VIEW: 3>, 'DEFINED_ENGINE_VIEW': <EngineTypes.DEFINED_ENGINE_VIEW: 4>, 'FILE_DEFINED_ENGINE_VIEW': <EngineTypes.FILE_DEFINED_ENGINE_VIEW: 5>}
def __eq__(self, other: typing.Any) -> bool:
...
def __getstate__(self) -> int:
...
def __hash__(self) -> int:
...
def __index__(self) -> int:
...
def __init__(self, value: typing.SupportsInt) -> None:
...
def __int__(self) -> int:
...
def __ne__(self, other: typing.Any) -> bool:
...
@typing.overload
def __repr__(self) -> str:
...
@typing.overload
def __repr__(self) -> str:
"""
String representation of the EngineTypes.
"""
def __setstate__(self, state: typing.SupportsInt) -> None:
...
def __str__(self) -> str:
...
@property
def name(self) -> str:
...
@property
def value(self) -> int:
...
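The member/value mapping documented in `__members__` above is a plain integer enumeration; it can be reproduced with Python's `enum.IntEnum` (values copied from the stub, class name reused only for illustration):

```python
import enum


class EngineTypes(enum.IntEnum):
    # Values taken from the __members__ mapping in the stub above.
    GRAPH_ENGINE = 0
    ADAPTIVE_ENGINE_VIEW = 1
    MULTISCALE_PARTITIONING_ENGINE_VIEW = 2
    PRIMING_ENGINE_VIEW = 3
    DEFINED_ENGINE_VIEW = 4
    FILE_DEFINED_ENGINE_VIEW = 5


# Round-trip an integer value back to its member, as the bound
# __init__(value) / value property pair allows.
selected = EngineTypes(4)
```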
class FileDefinedEngineView(DefinedEngineView):
def __init__(self, baseEngine: GraphEngine, fileName: str, parser: gridfire._gridfire.io.NetworkFileParser) -> None:
"""
Construct a defined engine view from a file and a base engine.
"""
def calculateEpsDerivatives(self, ctx: scratchpads.StateBlob, comp: ..., T9: typing.SupportsFloat, rho: typing.SupportsFloat) -> ...:
"""
Calculate deps/dT and deps/drho.
"""
def calculateMolarReactionFlow(self: DynamicEngine, ctx: scratchpads.StateBlob, reaction: ..., comp: fourdst._phys.composition.Composition, T9: typing.SupportsFloat, rho: typing.SupportsFloat) -> float:
"""
Calculate the molar reaction flow for a given reaction.
"""
def calculateRHSAndEnergy(self: DynamicEngine, ctx: scratchpads.StateBlob, comp: fourdst._phys.composition.Composition, T9: typing.SupportsFloat, rho: typing.SupportsFloat) -> StepDerivatives:
"""
Calculate the right-hand side (dY/dt) and energy generation rate.
"""
def collectComposition(self, ctx: scratchpads.StateBlob, composition: ..., T9: typing.SupportsFloat, rho: typing.SupportsFloat) -> fourdst._phys.composition.Composition:
"""
Recursively collect composition from current engine and any sub engines if they exist.
"""
@typing.overload
def generateJacobianMatrix(self: DynamicEngine, ctx: scratchpads.StateBlob, comp: fourdst._phys.composition.Composition, T9: typing.SupportsFloat, rho: typing.SupportsFloat) -> NetworkJacobian:
"""
Generate the Jacobian matrix for the current state.
"""
@typing.overload
def generateJacobianMatrix(self: DynamicEngine, ctx: scratchpads.StateBlob, comp: fourdst._phys.composition.Composition, T9: typing.SupportsFloat, rho: typing.SupportsFloat, activeSpecies: collections.abc.Sequence[fourdst._phys.atomic.Species]) -> NetworkJacobian:
"""
Generate the Jacobian matrix only for the subset of the matrix representing the active species.
"""
@typing.overload
def generateJacobianMatrix(self: DynamicEngine, ctx: scratchpads.StateBlob, comp: fourdst._phys.composition.Composition, T9: typing.SupportsFloat, rho: typing.SupportsFloat, sparsityPattern: collections.abc.Sequence[tuple[typing.SupportsInt, typing.SupportsInt]]) -> NetworkJacobian:
"""
Generate the Jacobian matrix for the given sparsity pattern.
"""
def getBaseEngine(self) -> DynamicEngine:
"""
Get the base engine associated with this file defined engine view.
"""
def getNetworkFile(self) -> str:
"""
Get the network file associated with this defined engine view.
"""
def getNetworkReactions(self, arg0: scratchpads.StateBlob) -> gridfire._gridfire.reaction.ReactionSet:
"""
Get the set of logical reactions in the network.
"""
def getNetworkSpecies(self, arg0: scratchpads.StateBlob) -> list[fourdst._phys.atomic.Species]:
"""
Get the list of species in the network.
"""
def getParser(self) -> gridfire._gridfire.io.NetworkFileParser:
"""
Get the parser used for this defined engine view.
"""
def getScreeningModel(self, arg0: scratchpads.StateBlob) -> gridfire._gridfire.screening.ScreeningType:
"""
Get the current screening model of the engine.
"""
def getSpeciesDestructionTimescales(self: DynamicEngine, ctx: scratchpads.StateBlob, comp: fourdst._phys.composition.Composition, T9: typing.SupportsFloat, rho: typing.SupportsFloat) -> dict[fourdst._phys.atomic.Species, float]:
"""
Get the destruction timescales for each species in the network.
"""
def getSpeciesIndex(self, ctx: scratchpads.StateBlob, species: fourdst._phys.atomic.Species) -> int:
"""
Get the index of a species in the network.
"""
def getSpeciesStatus(self, ctx: scratchpads.StateBlob, species: fourdst._phys.atomic.Species) -> SpeciesStatus:
"""
Get the status of a species in the network.
"""
def getSpeciesTimescales(self: DynamicEngine, ctx: scratchpads.StateBlob, comp: fourdst._phys.composition.Composition, T9: typing.SupportsFloat, rho: typing.SupportsFloat) -> dict[fourdst._phys.atomic.Species, float]:
"""
Get the timescales for each species in the network.
"""
def primeEngine(self, ctx: scratchpads.StateBlob, netIn: gridfire._gridfire.type.NetIn) -> PrimingReport:
"""
Prime the engine with a NetIn object to prepare for calculations.
"""
def project(self, ctx: scratchpads.StateBlob, netIn: gridfire._gridfire.type.NetIn) -> fourdst._phys.composition.Composition:
"""
Update the engine state based on the provided NetIn object.
"""
class GraphEngine(DynamicEngine):
@typing.overload
def __init__(self, composition: fourdst._phys.composition.Composition, depth: gridfire._gridfire.engine.NetworkBuildDepth | typing.SupportsInt = ...) -> None:
"""
Initialize GraphEngine with a composition and build depth.
"""
@typing.overload
def __init__(self, composition: fourdst._phys.composition.Composition, partitionFunction: gridfire._gridfire.partition.PartitionFunction, depth: gridfire._gridfire.engine.NetworkBuildDepth | typing.SupportsInt = ...) -> None:
"""
Initialize GraphEngine with a composition, partition function and build depth.
"""
@typing.overload
def __init__(self, reactions: gridfire._gridfire.reaction.ReactionSet) -> None:
"""
Initialize GraphEngine with a set of reactions.
"""
def calculateEpsDerivatives(self, ctx: scratchpads.StateBlob, comp: ..., T9: typing.SupportsFloat, rho: typing.SupportsFloat) -> ...:
"""
Calculate deps/dT and deps/drho.
"""
def calculateMolarReactionFlow(self: DynamicEngine, ctx: scratchpads.StateBlob, reaction: ..., comp: fourdst._phys.composition.Composition, T9: typing.SupportsFloat, rho: typing.SupportsFloat) -> float:
"""
Calculate the molar reaction flow for a given reaction.
"""
def calculateRHSAndEnergy(self: DynamicEngine, ctx: scratchpads.StateBlob, comp: fourdst._phys.composition.Composition, T9: typing.SupportsFloat, rho: typing.SupportsFloat) -> StepDerivatives:
"""
Calculate the right-hand side (dY/dt) and energy generation rate.
"""
def calculateReverseRate(self, reaction: ..., T9: typing.SupportsFloat, rho: typing.SupportsFloat, composition: ...) -> float:
"""
Calculate the reverse rate for a given reaction at a specific temperature, density, and composition.
"""
def calculateReverseRateTwoBody(self, reaction: ..., T9: typing.SupportsFloat, forwardRate: typing.SupportsFloat, expFactor: typing.SupportsFloat) -> float:
"""
Calculate the reverse rate for a two-body reaction at a specific temperature.
"""
def calculateReverseRateTwoBodyDerivative(self, reaction: ..., T9: typing.SupportsFloat, rho: typing.SupportsFloat, composition: fourdst._phys.composition.Composition, reverseRate: typing.SupportsFloat) -> float:
"""
Calculate the derivative of the reverse rate for a two-body reaction at a specific temperature.
"""
def collectComposition(self, ctx: scratchpads.StateBlob, composition: ..., T9: typing.SupportsFloat, rho: typing.SupportsFloat) -> fourdst._phys.composition.Composition:
"""
Recursively collect composition from current engine and any sub engines if they exist.
"""
def exportToCSV(self, ctx: scratchpads.StateBlob, filename: str) -> None:
"""
Export the network to a CSV file for analysis.
"""
def exportToDot(self, ctx: scratchpads.StateBlob, filename: str) -> None:
"""
Export the network to a DOT file for visualization.
"""
@typing.overload
def generateJacobianMatrix(self: DynamicEngine, ctx: scratchpads.StateBlob, comp: fourdst._phys.composition.Composition, T9: typing.SupportsFloat, rho: typing.SupportsFloat) -> NetworkJacobian:
"""
Generate the Jacobian matrix for the current state.
"""
@typing.overload
def generateJacobianMatrix(self: DynamicEngine, ctx: scratchpads.StateBlob, comp: fourdst._phys.composition.Composition, T9: typing.SupportsFloat, rho: typing.SupportsFloat, activeSpecies: collections.abc.Sequence[fourdst._phys.atomic.Species]) -> NetworkJacobian:
"""
Generate the Jacobian matrix only for the subset of the matrix representing the active species.
"""
@typing.overload
def generateJacobianMatrix(self: DynamicEngine, ctx: scratchpads.StateBlob, comp: fourdst._phys.composition.Composition, T9: typing.SupportsFloat, rho: typing.SupportsFloat, sparsityPattern: collections.abc.Sequence[tuple[typing.SupportsInt, typing.SupportsInt]]) -> NetworkJacobian:
"""
Generate the Jacobian matrix for the given sparsity pattern.
"""
def getNetworkReactions(self, arg0: scratchpads.StateBlob) -> gridfire._gridfire.reaction.ReactionSet:
"""
Get the set of logical reactions in the network.
"""
def getNetworkSpecies(self, arg0: scratchpads.StateBlob) -> list[fourdst._phys.atomic.Species]:
"""
Get the list of species in the network.
"""
def getPartitionFunction(self, arg0: scratchpads.StateBlob) -> gridfire._gridfire.partition.PartitionFunction:
"""
Get the partition function used by the engine.
"""
def getScreeningModel(self, arg0: scratchpads.StateBlob) -> gridfire._gridfire.screening.ScreeningType:
"""
Get the current screening model of the engine.
"""
@typing.overload
def getSpeciesDestructionTimescales(self, ctx: scratchpads.StateBlob, composition: fourdst._phys.composition.Composition, T9: typing.SupportsFloat, rho: typing.SupportsFloat, activeReactions: gridfire._gridfire.reaction.ReactionSet) -> ...:
...
@typing.overload
def getSpeciesDestructionTimescales(self: DynamicEngine, ctx: scratchpads.StateBlob, comp: fourdst._phys.composition.Composition, T9: typing.SupportsFloat, rho: typing.SupportsFloat) -> dict[fourdst._phys.atomic.Species, float]:
"""
Get the destruction timescales for each species in the network.
"""
def getSpeciesIndex(self, ctx: scratchpads.StateBlob, species: fourdst._phys.atomic.Species) -> int:
"""
Get the index of a species in the network.
"""
def getSpeciesStatus(self, ctx: scratchpads.StateBlob, species: fourdst._phys.atomic.Species) -> SpeciesStatus:
"""
Get the status of a species in the network.
"""
@typing.overload
def getSpeciesTimescales(self, ctx: scratchpads.StateBlob, composition: fourdst._phys.composition.Composition, T9: typing.SupportsFloat, rho: typing.SupportsFloat, activeReactions: gridfire._gridfire.reaction.ReactionSet) -> ...:
...
@typing.overload
def getSpeciesTimescales(self: DynamicEngine, ctx: scratchpads.StateBlob, comp: fourdst._phys.composition.Composition, T9: typing.SupportsFloat, rho: typing.SupportsFloat) -> dict[fourdst._phys.atomic.Species, float]:
"""
Get the timescales for each species in the network.
"""
def involvesSpecies(self, ctx: scratchpads.StateBlob, species: fourdst._phys.atomic.Species) -> bool:
"""
Check if a given species is involved in the network.
"""
def isPrecomputationEnabled(self, arg0: scratchpads.StateBlob) -> bool:
"""
Check if precomputation is enabled for the engine.
"""
def isUsingReverseReactions(self, arg0: scratchpads.StateBlob) -> bool:
"""
Check if the engine is using reverse reactions.
"""
def primeEngine(self, ctx: scratchpads.StateBlob, netIn: gridfire._gridfire.type.NetIn) -> PrimingReport:
"""
Prime the engine with a NetIn object to prepare for calculations.
"""
def project(self, ctx: scratchpads.StateBlob, netIn: gridfire._gridfire.type.NetIn) -> fourdst._phys.composition.Composition:
"""
Update the engine state based on the provided NetIn object.
"""
class MultiscalePartitioningEngineView(DynamicEngine):
def __init__(self, baseEngine: GraphEngine) -> None:
"""
Construct a multiscale partitioning engine view with a base engine.
"""
def calculateEpsDerivatives(self, ctx: scratchpads.StateBlob, comp: ..., T9: typing.SupportsFloat, rho: typing.SupportsFloat) -> ...:
"""
Calculate deps/dT and deps/drho.
"""
def calculateMolarReactionFlow(self: DynamicEngine, ctx: scratchpads.StateBlob, reaction: ..., comp: fourdst._phys.composition.Composition, T9: typing.SupportsFloat, rho: typing.SupportsFloat) -> float:
"""
Calculate the molar reaction flow for a given reaction.
"""
def calculateRHSAndEnergy(self: DynamicEngine, ctx: scratchpads.StateBlob, comp: fourdst._phys.composition.Composition, T9: typing.SupportsFloat, rho: typing.SupportsFloat) -> StepDerivatives:
"""
Calculate the right-hand side (dY/dt) and energy generation rate.
"""
def collectComposition(self, ctx: scratchpads.StateBlob, composition: ..., T9: typing.SupportsFloat, rho: typing.SupportsFloat) -> fourdst._phys.composition.Composition:
"""
Recursively collect the composition from the current engine and any sub-engines, if they exist.
"""
def exportToDot(self, ctx: scratchpads.StateBlob, filename: str, comp: fourdst._phys.composition.Composition, T9: typing.SupportsFloat, rho: typing.SupportsFloat) -> None:
"""
Export the network to a DOT file for visualization.
"""
@typing.overload
def generateJacobianMatrix(self: DynamicEngine, ctx: scratchpads.StateBlob, comp: fourdst._phys.composition.Composition, T9: typing.SupportsFloat, rho: typing.SupportsFloat) -> NetworkJacobian:
"""
Generate the Jacobian matrix for the current state.
"""
@typing.overload
def generateJacobianMatrix(self: DynamicEngine, ctx: scratchpads.StateBlob, comp: fourdst._phys.composition.Composition, T9: typing.SupportsFloat, rho: typing.SupportsFloat, activeSpecies: collections.abc.Sequence[fourdst._phys.atomic.Species]) -> NetworkJacobian:
"""
Generate the Jacobian matrix only for the subset of the matrix representing the active species.
"""
@typing.overload
def generateJacobianMatrix(self: DynamicEngine, ctx: scratchpads.StateBlob, comp: fourdst._phys.composition.Composition, T9: typing.SupportsFloat, rho: typing.SupportsFloat, sparsityPattern: collections.abc.Sequence[tuple[typing.SupportsInt, typing.SupportsInt]]) -> NetworkJacobian:
"""
Generate the Jacobian matrix for the given sparsity pattern.
"""
def getBaseEngine(self) -> DynamicEngine:
"""
Get the base engine associated with this multiscale partitioning engine view.
"""
def getDynamicSpecies(self, arg0: scratchpads.StateBlob) -> list[fourdst._phys.atomic.Species]:
"""
Get the list of dynamic species in the network.
"""
def getFastSpecies(self, arg0: scratchpads.StateBlob) -> list[fourdst._phys.atomic.Species]:
"""
Get the list of fast species in the network.
"""
def getNetworkReactions(self, arg0: scratchpads.StateBlob) -> gridfire._gridfire.reaction.ReactionSet:
"""
Get the set of logical reactions in the network.
"""
def getNetworkSpecies(self, arg0: scratchpads.StateBlob) -> list[fourdst._phys.atomic.Species]:
"""
Get the list of species in the network.
"""
def getNormalizedEquilibratedComposition(self, ctx: scratchpads.StateBlob, comp: fourdst._phys.composition.Composition, T9: typing.SupportsFloat, rho: typing.SupportsFloat) -> fourdst._phys.composition.Composition:
"""
Get the normalized equilibrated composition for the algebraic species.
"""
def getScreeningModel(self, arg0: scratchpads.StateBlob) -> gridfire._gridfire.screening.ScreeningType:
"""
Get the current screening model of the engine.
"""
def getSpeciesDestructionTimescales(self: DynamicEngine, ctx: scratchpads.StateBlob, comp: fourdst._phys.composition.Composition, T9: typing.SupportsFloat, rho: typing.SupportsFloat) -> dict[fourdst._phys.atomic.Species, float]:
"""
Get the destruction timescales for each species in the network.
"""
def getSpeciesIndex(self, ctx: scratchpads.StateBlob, species: fourdst._phys.atomic.Species) -> int:
"""
Get the index of a species in the network.
"""
def getSpeciesStatus(self, ctx: scratchpads.StateBlob, species: fourdst._phys.atomic.Species) -> SpeciesStatus:
"""
Get the status of a species in the network.
"""
def getSpeciesTimescales(self: DynamicEngine, ctx: scratchpads.StateBlob, comp: fourdst._phys.composition.Composition, T9: typing.SupportsFloat, rho: typing.SupportsFloat) -> dict[fourdst._phys.atomic.Species, float]:
"""
Get the timescales for each species in the network.
"""
def involvesSpecies(self, ctx: scratchpads.StateBlob, species: fourdst._phys.atomic.Species) -> bool:
"""
Check if a given species is involved in the network (in either the algebraic or dynamic set).
"""
def involvesSpeciesInDynamic(self, ctx: scratchpads.StateBlob, species: fourdst._phys.atomic.Species) -> bool:
"""
Check if a given species is involved in the network's dynamic set.
"""
def involvesSpeciesInQSE(self, ctx: scratchpads.StateBlob, species: fourdst._phys.atomic.Species) -> bool:
"""
Check if a given species is involved in the network's algebraic set.
"""
def partitionNetwork(self, ctx: scratchpads.StateBlob, netIn: gridfire._gridfire.type.NetIn) -> fourdst._phys.composition.Composition:
"""
Partition the network based on species timescales and connectivity.
"""
def primeEngine(self, ctx: scratchpads.StateBlob, netIn: gridfire._gridfire.type.NetIn) -> PrimingReport:
"""
Prime the engine with a NetIn object to prepare for calculations.
"""
def project(self, ctx: scratchpads.StateBlob, netIn: gridfire._gridfire.type.NetIn) -> fourdst._phys.composition.Composition:
"""
Update the engine state based on the provided NetIn object.
"""
class NetworkBuildDepth:
"""
Members:
Full : Full network build depth
Shallow : Shallow network build depth
SecondOrder : Second order network build depth
ThirdOrder : Third order network build depth
FourthOrder : Fourth order network build depth
FifthOrder : Fifth order network build depth
"""
FifthOrder: typing.ClassVar[NetworkBuildDepth] # value = <NetworkBuildDepth.FifthOrder: 5>
FourthOrder: typing.ClassVar[NetworkBuildDepth] # value = <NetworkBuildDepth.FourthOrder: 4>
Full: typing.ClassVar[NetworkBuildDepth] # value = <NetworkBuildDepth.Full: -1>
SecondOrder: typing.ClassVar[NetworkBuildDepth] # value = <NetworkBuildDepth.SecondOrder: 2>
Shallow: typing.ClassVar[NetworkBuildDepth] # value = <NetworkBuildDepth.Shallow: 1>
ThirdOrder: typing.ClassVar[NetworkBuildDepth] # value = <NetworkBuildDepth.ThirdOrder: 3>
__members__: typing.ClassVar[dict[str, NetworkBuildDepth]] # value = {'Full': <NetworkBuildDepth.Full: -1>, 'Shallow': <NetworkBuildDepth.Shallow: 1>, 'SecondOrder': <NetworkBuildDepth.SecondOrder: 2>, 'ThirdOrder': <NetworkBuildDepth.ThirdOrder: 3>, 'FourthOrder': <NetworkBuildDepth.FourthOrder: 4>, 'FifthOrder': <NetworkBuildDepth.FifthOrder: 5>}
def __eq__(self, other: typing.Any) -> bool:
...
def __getstate__(self) -> int:
...
def __hash__(self) -> int:
...
def __index__(self) -> int:
...
def __init__(self, value: typing.SupportsInt) -> None:
...
def __int__(self) -> int:
...
def __ne__(self, other: typing.Any) -> bool:
...
def __repr__(self) -> str:
...
def __setstate__(self, state: typing.SupportsInt) -> None:
...
def __str__(self) -> str:
...
@property
def name(self) -> str:
...
@property
def value(self) -> int:
...
class NetworkConstructionFlags:
"""
Members:
NONE : No special construction flags.
REACLIB_STRONG : Include strong reactions from reaclib.
WRL_BETA_MINUS : Include beta-minus decay reactions from weak rate library.
WRL_BETA_PLUS : Include beta-plus decay reactions from weak rate library.
WRL_ELECTRON_CAPTURE : Include electron capture reactions from weak rate library.
WRL_POSITRON_CAPTURE : Include positron capture reactions from weak rate library.
REACLIB_WEAK : Include weak reactions from reaclib.
WRL_WEAK : Include all weak reactions from weak rate library.
REACLIB : Include all reactions from reaclib.
DEFAULT : Default construction flags (Reaclib strong and weak).
"""
DEFAULT: typing.ClassVar[NetworkConstructionFlags] # value = <NetworkConstructionFlags.REACLIB: 33>
NONE: typing.ClassVar[NetworkConstructionFlags] # value = <NetworkConstructionFlags.NONE: 0>
REACLIB: typing.ClassVar[NetworkConstructionFlags] # value = <NetworkConstructionFlags.REACLIB: 33>
REACLIB_STRONG: typing.ClassVar[NetworkConstructionFlags] # value = <NetworkConstructionFlags.REACLIB_STRONG: 1>
REACLIB_WEAK: typing.ClassVar[NetworkConstructionFlags] # value = <NetworkConstructionFlags.REACLIB_WEAK: 32>
WRL_BETA_MINUS: typing.ClassVar[NetworkConstructionFlags] # value = <NetworkConstructionFlags.WRL_BETA_MINUS: 2>
WRL_BETA_PLUS: typing.ClassVar[NetworkConstructionFlags] # value = <NetworkConstructionFlags.WRL_BETA_PLUS: 4>
WRL_ELECTRON_CAPTURE: typing.ClassVar[NetworkConstructionFlags] # value = <NetworkConstructionFlags.WRL_ELECTRON_CAPTURE: 8>
WRL_POSITRON_CAPTURE: typing.ClassVar[NetworkConstructionFlags] # value = <NetworkConstructionFlags.WRL_POSITRON_CAPTURE: 16>
WRL_WEAK: typing.ClassVar[NetworkConstructionFlags] # value = <NetworkConstructionFlags.WRL_WEAK: 30>
__members__: typing.ClassVar[dict[str, NetworkConstructionFlags]] # value = {'NONE': <NetworkConstructionFlags.NONE: 0>, 'REACLIB_STRONG': <NetworkConstructionFlags.REACLIB_STRONG: 1>, 'WRL_BETA_MINUS': <NetworkConstructionFlags.WRL_BETA_MINUS: 2>, 'WRL_BETA_PLUS': <NetworkConstructionFlags.WRL_BETA_PLUS: 4>, 'WRL_ELECTRON_CAPTURE': <NetworkConstructionFlags.WRL_ELECTRON_CAPTURE: 8>, 'WRL_POSITRON_CAPTURE': <NetworkConstructionFlags.WRL_POSITRON_CAPTURE: 16>, 'REACLIB_WEAK': <NetworkConstructionFlags.REACLIB_WEAK: 32>, 'WRL_WEAK': <NetworkConstructionFlags.WRL_WEAK: 30>, 'REACLIB': <NetworkConstructionFlags.REACLIB: 33>, 'DEFAULT': <NetworkConstructionFlags.REACLIB: 33>}
def __eq__(self, other: typing.Any) -> bool:
...
def __getstate__(self) -> int:
...
def __hash__(self) -> int:
...
def __index__(self) -> int:
...
def __init__(self, value: typing.SupportsInt) -> None:
...
def __int__(self) -> int:
...
def __ne__(self, other: typing.Any) -> bool:
...
def __repr__(self) -> str:
...
def __setstate__(self, state: typing.SupportsInt) -> None:
...
def __str__(self) -> str:
...
@property
def name(self) -> str:
...
@property
def value(self) -> int:
...
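The flag values above are bit masks that compose with bitwise OR (e.g. WRL_WEAK = 2 | 4 | 8 | 16 = 30, REACLIB = 1 | 32 = 33). A minimal pure-Python sketch of that arithmetic, using a hypothetical `enum.IntFlag` mirror rather than the real C++-backed enum:

```python
# Hypothetical stand-in for NetworkConstructionFlags; the real class is a
# pybind11-bound C++ enum. Values copied from the stub above.
import enum

class ConstructionFlagsSketch(enum.IntFlag):
    NONE = 0
    REACLIB_STRONG = 1
    WRL_BETA_MINUS = 2
    WRL_BETA_PLUS = 4
    WRL_ELECTRON_CAPTURE = 8
    WRL_POSITRON_CAPTURE = 16
    REACLIB_WEAK = 32
    # Composite members are bitwise unions of the basic flags:
    WRL_WEAK = WRL_BETA_MINUS | WRL_BETA_PLUS | WRL_ELECTRON_CAPTURE | WRL_POSITRON_CAPTURE
    REACLIB = REACLIB_STRONG | REACLIB_WEAK
    DEFAULT = REACLIB  # alias, matching DEFAULT == REACLIB: 33 in the stub

assert ConstructionFlagsSketch.WRL_WEAK == 30
assert ConstructionFlagsSketch.REACLIB == 33
assert ConstructionFlagsSketch.WRL_BETA_PLUS in ConstructionFlagsSketch.WRL_WEAK
```

The same composition logic explains why DEFAULT and REACLIB share the value 33 in the class variables above.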
class NetworkJacobian:
@typing.overload
def __getitem__(self, key: tuple[fourdst._phys.atomic.Species, fourdst._phys.atomic.Species]) -> float:
"""
Get an entry from the Jacobian matrix using species identifiers.
"""
@typing.overload
def __getitem__(self, key: tuple[typing.SupportsInt, typing.SupportsInt]) -> float:
"""
Get an entry from the Jacobian matrix using indices.
"""
@typing.overload
def __setitem__(self, key: tuple[fourdst._phys.atomic.Species, fourdst._phys.atomic.Species], value: typing.SupportsFloat) -> None:
"""
Set an entry in the Jacobian matrix using species identifiers.
"""
@typing.overload
def __setitem__(self, key: tuple[typing.SupportsInt, typing.SupportsInt], value: typing.SupportsFloat) -> None:
"""
Set an entry in the Jacobian matrix using indices.
"""
def data(self) -> ...:
"""
Get the underlying sparse matrix data.
"""
def infs(self) -> list[tuple[tuple[fourdst._phys.atomic.Species, fourdst._phys.atomic.Species], float]]:
"""
Get all infinite entries in the Jacobian matrix.
"""
def mapping(self) -> dict[fourdst._phys.atomic.Species, int]:
"""
Get the species-to-index mapping.
"""
def nans(self) -> list[tuple[tuple[fourdst._phys.atomic.Species, fourdst._phys.atomic.Species], float]]:
"""
Get all NaN entries in the Jacobian matrix.
"""
def nnz(self) -> int:
"""
Get the number of non-zero entries in the Jacobian matrix.
"""
def rank(self) -> int:
"""
Get the rank of the Jacobian matrix.
"""
def shape(self) -> tuple[int, int]:
"""
Get the shape of the Jacobian matrix as (rows, columns).
"""
def singular(self) -> bool:
"""
Check if the Jacobian matrix is singular.
"""
def to_csv(self, filename: str) -> None:
"""
Export the Jacobian matrix to a CSV file.
"""
def to_numpy(self) -> numpy.typing.NDArray[numpy.float64]:
"""
Convert the Jacobian matrix to a NumPy array.
"""
class NetworkPrimingEngineView(DefinedEngineView):
@typing.overload
def __init__(self, ctx: scratchpads.StateBlob, primingSymbol: str, baseEngine: GraphEngine) -> None:
"""
Construct a priming engine view with a priming symbol and a base engine.
"""
@typing.overload
def __init__(self, ctx: scratchpads.StateBlob, primingSpecies: fourdst._phys.atomic.Species, baseEngine: GraphEngine) -> None:
"""
Construct a priming engine view with a priming species and a base engine.
"""
def calculateEpsDerivatives(self, ctx: scratchpads.StateBlob, comp: ..., T9: typing.SupportsFloat, rho: typing.SupportsFloat) -> ...:
"""
Calculate the energy generation rate derivatives deps/dT and deps/drho.
"""
def calculateMolarReactionFlow(self: DynamicEngine, ctx: scratchpads.StateBlob, reaction: ..., comp: fourdst._phys.composition.Composition, T9: typing.SupportsFloat, rho: typing.SupportsFloat) -> float:
"""
Calculate the molar reaction flow for a given reaction.
"""
def calculateRHSAndEnergy(self: DynamicEngine, ctx: scratchpads.StateBlob, comp: fourdst._phys.composition.Composition, T9: typing.SupportsFloat, rho: typing.SupportsFloat) -> StepDerivatives:
"""
Calculate the right-hand side (dY/dt) and energy generation rate.
"""
def collectComposition(self, ctx: scratchpads.StateBlob, composition: ..., T9: typing.SupportsFloat, rho: typing.SupportsFloat) -> fourdst._phys.composition.Composition:
"""
Recursively collect the composition from the current engine and any sub-engines, if they exist.
"""
@typing.overload
def generateJacobianMatrix(self: DynamicEngine, ctx: scratchpads.StateBlob, comp: fourdst._phys.composition.Composition, T9: typing.SupportsFloat, rho: typing.SupportsFloat) -> NetworkJacobian:
"""
Generate the Jacobian matrix for the current state.
"""
@typing.overload
def generateJacobianMatrix(self: DynamicEngine, ctx: scratchpads.StateBlob, comp: fourdst._phys.composition.Composition, T9: typing.SupportsFloat, rho: typing.SupportsFloat, activeSpecies: collections.abc.Sequence[fourdst._phys.atomic.Species]) -> NetworkJacobian:
"""
Generate the Jacobian matrix only for the subset of the matrix representing the active species.
"""
@typing.overload
def generateJacobianMatrix(self: DynamicEngine, ctx: scratchpads.StateBlob, comp: fourdst._phys.composition.Composition, T9: typing.SupportsFloat, rho: typing.SupportsFloat, sparsityPattern: collections.abc.Sequence[tuple[typing.SupportsInt, typing.SupportsInt]]) -> NetworkJacobian:
"""
Generate the Jacobian matrix for the given sparsity pattern.
"""
def getBaseEngine(self) -> DynamicEngine:
"""
Get the base engine associated with this priming engine view.
"""
def getNetworkReactions(self, arg0: scratchpads.StateBlob) -> gridfire._gridfire.reaction.ReactionSet:
"""
Get the set of logical reactions in the network.
"""
def getNetworkSpecies(self, arg0: scratchpads.StateBlob) -> list[fourdst._phys.atomic.Species]:
"""
Get the list of species in the network.
"""
def getScreeningModel(self, arg0: scratchpads.StateBlob) -> gridfire._gridfire.screening.ScreeningType:
"""
Get the current screening model of the engine.
"""
def getSpeciesDestructionTimescales(self: DynamicEngine, ctx: scratchpads.StateBlob, comp: fourdst._phys.composition.Composition, T9: typing.SupportsFloat, rho: typing.SupportsFloat) -> dict[fourdst._phys.atomic.Species, float]:
"""
Get the destruction timescales for each species in the network.
"""
def getSpeciesIndex(self, ctx: scratchpads.StateBlob, species: fourdst._phys.atomic.Species) -> int:
"""
Get the index of a species in the network.
"""
def getSpeciesStatus(self, ctx: scratchpads.StateBlob, species: fourdst._phys.atomic.Species) -> SpeciesStatus:
"""
Get the status of a species in the network.
"""
def getSpeciesTimescales(self: DynamicEngine, ctx: scratchpads.StateBlob, comp: fourdst._phys.composition.Composition, T9: typing.SupportsFloat, rho: typing.SupportsFloat) -> dict[fourdst._phys.atomic.Species, float]:
"""
Get the timescales for each species in the network.
"""
def primeEngine(self, ctx: scratchpads.StateBlob, netIn: gridfire._gridfire.type.NetIn) -> PrimingReport:
"""
Prime the engine with a NetIn object to prepare for calculations.
"""
def project(self, ctx: scratchpads.StateBlob, netIn: gridfire._gridfire.type.NetIn) -> fourdst._phys.composition.Composition:
"""
Update the engine state based on the provided NetIn object.
"""
class PrimingReport:
def __repr__(self) -> str:
...
@property
def primedComposition(self) -> fourdst._phys.composition.Composition:
"""
The composition after priming.
"""
@property
def status(self) -> PrimingReportStatus:
"""
Status message from the priming process.
"""
@property
def success(self) -> bool:
"""
Indicates if the priming was successful.
"""
class PrimingReportStatus:
"""
Members:
FULL_SUCCESS : Priming was fully successful.
NO_SPECIES_TO_PRIME : No species required priming.
MAX_ITERATIONS_REACHED : The maximum number of priming iterations was reached.
"""
FULL_SUCCESS: typing.ClassVar[PrimingReportStatus] # value = <PrimingReportStatus.FULL_SUCCESS: 0>
MAX_ITERATIONS_REACHED: typing.ClassVar[PrimingReportStatus] # value = <PrimingReportStatus.MAX_ITERATIONS_REACHED: 1>
NO_SPECIES_TO_PRIME: typing.ClassVar[PrimingReportStatus] # value = <PrimingReportStatus.NO_SPECIES_TO_PRIME: 2>
__members__: typing.ClassVar[dict[str, PrimingReportStatus]] # value = {'FULL_SUCCESS': <PrimingReportStatus.FULL_SUCCESS: 0>, 'NO_SPECIES_TO_PRIME': <PrimingReportStatus.NO_SPECIES_TO_PRIME: 2>, 'MAX_ITERATIONS_REACHED': <PrimingReportStatus.MAX_ITERATIONS_REACHED: 1>}
def __eq__(self, other: typing.Any) -> bool:
...
def __getstate__(self) -> int:
...
def __hash__(self) -> int:
...
def __index__(self) -> int:
...
def __init__(self, value: typing.SupportsInt) -> None:
...
def __int__(self) -> int:
...
def __ne__(self, other: typing.Any) -> bool:
...
def __repr__(self) -> str:
"""
String representation of the PrimingReportStatus.
"""
def __setstate__(self, state: typing.SupportsInt) -> None:
...
def __str__(self) -> str:
...
@property
def name(self) -> str:
...
@property
def value(self) -> int:
...
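A typical consumer checks PrimingReport.success first and falls back to the status member for diagnostics. A hedged sketch of that pattern with stand-in classes (ReportSketch and describe are hypothetical names; the enum values mirror PrimingReportStatus above):

```python
# Stand-ins mirroring the PrimingReport / PrimingReportStatus bindings.
import enum
from dataclasses import dataclass

class StatusSketch(enum.Enum):
    FULL_SUCCESS = 0
    MAX_ITERATIONS_REACHED = 1
    NO_SPECIES_TO_PRIME = 2

@dataclass
class ReportSketch:
    success: bool
    status: StatusSketch

def describe(report):
    # Successful priming needs no further inspection; otherwise surface
    # the status name for diagnostics.
    if report.success:
        return "primed"
    return f"priming incomplete: {report.status.name}"

assert describe(ReportSketch(True, StatusSketch.FULL_SUCCESS)) == "primed"
assert "MAX_ITERATIONS" in describe(ReportSketch(False, StatusSketch.MAX_ITERATIONS_REACHED))
```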
class SparsityPattern:
pass
class SpeciesStatus:
"""
Members:
ACTIVE : Species is active in the network.
EQUILIBRIUM : Species is in equilibrium.
INACTIVE_FLOW : Species is inactive due to flow.
NOT_PRESENT : Species is not present in the network.
"""
ACTIVE: typing.ClassVar[SpeciesStatus] # value = <SpeciesStatus.ACTIVE: 0>
EQUILIBRIUM: typing.ClassVar[SpeciesStatus] # value = <SpeciesStatus.EQUILIBRIUM: 1>
INACTIVE_FLOW: typing.ClassVar[SpeciesStatus] # value = <SpeciesStatus.INACTIVE_FLOW: 2>
NOT_PRESENT: typing.ClassVar[SpeciesStatus] # value = <SpeciesStatus.NOT_PRESENT: 3>
__members__: typing.ClassVar[dict[str, SpeciesStatus]] # value = {'ACTIVE': <SpeciesStatus.ACTIVE: 0>, 'EQUILIBRIUM': <SpeciesStatus.EQUILIBRIUM: 1>, 'INACTIVE_FLOW': <SpeciesStatus.INACTIVE_FLOW: 2>, 'NOT_PRESENT': <SpeciesStatus.NOT_PRESENT: 3>}
def __eq__(self, other: typing.Any) -> bool:
...
def __getstate__(self) -> int:
...
def __hash__(self) -> int:
...
def __index__(self) -> int:
...
def __init__(self, value: typing.SupportsInt) -> None:
...
def __int__(self) -> int:
...
def __ne__(self, other: typing.Any) -> bool:
...
def __repr__(self) -> str:
...
def __setstate__(self, state: typing.SupportsInt) -> None:
...
def __str__(self) -> str:
...
@property
def name(self) -> str:
...
@property
def value(self) -> int:
...
class StepDerivatives:
@property
def dYdt(self) -> dict[fourdst._phys.atomic.Species, float]:
"""
The right-hand side (dY/dt) of the ODE system.
"""
@property
def energy(self) -> float:
"""
The energy generation rate.
"""
def build_nuclear_network(composition: ..., weakInterpolator: ..., maxLayers: gridfire._gridfire.engine.NetworkBuildDepth | typing.SupportsInt = ..., ReactionTypes: NetworkConstructionFlags = ...) -> gridfire._gridfire.reaction.ReactionSet:
"""
Build a nuclear network from a composition using all archived reaction data.
"""
def primeNetwork(ctx: scratchpads.StateBlob, netIn: gridfire._gridfire.type.NetIn, engine: ..., ignoredReactionTypes: collections.abc.Sequence[...] | None = None) -> PrimingReport:
"""
Prime a network with a short-timescale ignition.
"""
def regularize_jacobian(jacobian: NetworkJacobian, composition: fourdst._phys.composition.Composition) -> NetworkJacobian:
"""
Regularize the Jacobian matrix for the given composition.
"""
ACTIVE: SpeciesStatus # value = <SpeciesStatus.ACTIVE: 0>
ADAPTIVE_ENGINE_VIEW: EngineTypes # value = <EngineTypes.ADAPTIVE_ENGINE_VIEW: 1>
DEFAULT: NetworkConstructionFlags # value = <NetworkConstructionFlags.REACLIB: 33>
DEFINED_ENGINE_VIEW: EngineTypes # value = <EngineTypes.DEFINED_ENGINE_VIEW: 4>
EQUILIBRIUM: SpeciesStatus # value = <SpeciesStatus.EQUILIBRIUM: 1>
FILE_DEFINED_ENGINE_VIEW: EngineTypes # value = <EngineTypes.FILE_DEFINED_ENGINE_VIEW: 5>
FULL_SUCCESS: PrimingReportStatus # value = <PrimingReportStatus.FULL_SUCCESS: 0>
FifthOrder: NetworkBuildDepth # value = <NetworkBuildDepth.FifthOrder: 5>
FourthOrder: NetworkBuildDepth # value = <NetworkBuildDepth.FourthOrder: 4>
Full: NetworkBuildDepth # value = <NetworkBuildDepth.Full: -1>
GRAPH_ENGINE: EngineTypes # value = <EngineTypes.GRAPH_ENGINE: 0>
INACTIVE_FLOW: SpeciesStatus # value = <SpeciesStatus.INACTIVE_FLOW: 2>
MAX_ITERATIONS_REACHED: PrimingReportStatus # value = <PrimingReportStatus.MAX_ITERATIONS_REACHED: 1>
MULTISCALE_PARTITIONING_ENGINE_VIEW: EngineTypes # value = <EngineTypes.MULTISCALE_PARTITIONING_ENGINE_VIEW: 2>
NONE: NetworkConstructionFlags # value = <NetworkConstructionFlags.NONE: 0>
NOT_PRESENT: SpeciesStatus # value = <SpeciesStatus.NOT_PRESENT: 3>
NO_SPECIES_TO_PRIME: PrimingReportStatus # value = <PrimingReportStatus.NO_SPECIES_TO_PRIME: 2>
PRIMING_ENGINE_VIEW: EngineTypes # value = <EngineTypes.PRIMING_ENGINE_VIEW: 3>
REACLIB: NetworkConstructionFlags # value = <NetworkConstructionFlags.REACLIB: 33>
REACLIB_STRONG: NetworkConstructionFlags # value = <NetworkConstructionFlags.REACLIB_STRONG: 1>
REACLIB_WEAK: NetworkConstructionFlags # value = <NetworkConstructionFlags.REACLIB_WEAK: 32>
SecondOrder: NetworkBuildDepth # value = <NetworkBuildDepth.SecondOrder: 2>
Shallow: NetworkBuildDepth # value = <NetworkBuildDepth.Shallow: 1>
ThirdOrder: NetworkBuildDepth # value = <NetworkBuildDepth.ThirdOrder: 3>
WRL_BETA_MINUS: NetworkConstructionFlags # value = <NetworkConstructionFlags.WRL_BETA_MINUS: 2>
WRL_BETA_PLUS: NetworkConstructionFlags # value = <NetworkConstructionFlags.WRL_BETA_PLUS: 4>
WRL_ELECTRON_CAPTURE: NetworkConstructionFlags # value = <NetworkConstructionFlags.WRL_ELECTRON_CAPTURE: 8>
WRL_POSITRON_CAPTURE: NetworkConstructionFlags # value = <NetworkConstructionFlags.WRL_POSITRON_CAPTURE: 16>
WRL_WEAK: NetworkConstructionFlags # value = <NetworkConstructionFlags.WRL_WEAK: 30>

@@ -1,16 +0,0 @@
"""
A submodule for engine diagnostics
"""
from __future__ import annotations
import collections.abc
import fourdst._phys.composition
import gridfire._gridfire.engine
import gridfire._gridfire.engine.scratchpads
import typing
__all__: list[str] = ['inspect_jacobian_stiffness', 'inspect_species_balance', 'report_limiting_species']
def inspect_jacobian_stiffness(ctx: gridfire._gridfire.engine.scratchpads.StateBlob, engine: gridfire._gridfire.engine.DynamicEngine, comp: fourdst._phys.composition.Composition, T9: typing.SupportsFloat, rho: typing.SupportsFloat, json: bool) -> ... | None:
...
def inspect_species_balance(ctx: gridfire._gridfire.engine.scratchpads.StateBlob, engine: gridfire._gridfire.engine.DynamicEngine, species_name: str, comp: fourdst._phys.composition.Composition, T9: typing.SupportsFloat, rho: typing.SupportsFloat, json: bool) -> ... | None:
...
def report_limiting_species(ctx: gridfire._gridfire.engine.scratchpads.StateBlob, engine: gridfire._gridfire.engine.DynamicEngine, Y_full: collections.abc.Sequence[typing.SupportsFloat], E_full: collections.abc.Sequence[typing.SupportsFloat], relTol: typing.SupportsFloat, absTol: typing.SupportsFloat, top_n: typing.SupportsInt, json: bool) -> ... | None:
...

@@ -1,267 +0,0 @@
"""
Engine ScratchPad bindings
"""
from __future__ import annotations
import fourdst._phys.atomic
import fourdst._phys.composition
import gridfire._gridfire.reaction
import typing
__all__: list[str] = ['ADAPTIVE_ENGINE_VIEW_SCRATCHPAD', 'ADFunRegistrationResult', 'ALREADY_REGISTERED', 'AdaptiveEngineViewScratchPad', 'DEFINED_ENGINE_VIEW_SCRATCHPAD', 'DefinedEngineViewScratchPad', 'GRAPH_ENGINE_SCRATCHPAD', 'GraphEngineScratchPad', 'MULTISCALE_PARTITIONING_ENGINE_VIEW_SCRATCHPAD', 'MultiscalePartitioningEngineViewScratchPad', 'SCRATCHPAD_BAD_CAST', 'SCRATCHPAD_NOT_FOUND', 'SCRATCHPAD_NOT_INITIALIZED', 'SCRATCHPAD_OUT_OF_BOUNDS', 'SCRATCHPAD_TYPE_COLLISION', 'SCRATCHPAD_UNKNOWN_ERROR', 'SUCCESS', 'ScratchPadType', 'StateBlob', 'StateBlobError']
class ADFunRegistrationResult:
"""
Members:
SUCCESS
ALREADY_REGISTERED
"""
ALREADY_REGISTERED: typing.ClassVar[ADFunRegistrationResult] # value = <ADFunRegistrationResult.ALREADY_REGISTERED: 1>
SUCCESS: typing.ClassVar[ADFunRegistrationResult] # value = <ADFunRegistrationResult.SUCCESS: 0>
__members__: typing.ClassVar[dict[str, ADFunRegistrationResult]] # value = {'SUCCESS': <ADFunRegistrationResult.SUCCESS: 0>, 'ALREADY_REGISTERED': <ADFunRegistrationResult.ALREADY_REGISTERED: 1>}
def __eq__(self, other: typing.Any) -> bool:
...
def __getstate__(self) -> int:
...
def __hash__(self) -> int:
...
def __index__(self) -> int:
...
def __init__(self, value: typing.SupportsInt) -> None:
...
def __int__(self) -> int:
...
def __ne__(self, other: typing.Any) -> bool:
...
def __repr__(self) -> str:
...
def __setstate__(self, state: typing.SupportsInt) -> None:
...
def __str__(self) -> str:
...
@property
def name(self) -> str:
...
@property
def value(self) -> int:
...
class AdaptiveEngineViewScratchPad:
ID: typing.ClassVar[ScratchPadType] # value = <ScratchPadType.ADAPTIVE_ENGINE_VIEW_SCRATCHPAD: 2>
def __init__(self) -> None:
...
def __repr__(self) -> str:
...
def clone(self) -> ...:
...
def initialize(self, arg0: ...) -> None:
...
def is_initialized(self) -> bool:
...
@property
def active_reactions(self) -> gridfire._gridfire.reaction.ReactionSet:
...
@property
def active_species(self) -> list[fourdst._phys.atomic.Species]:
...
@property
def has_initialized(self) -> bool:
...
class DefinedEngineViewScratchPad:
ID: typing.ClassVar[ScratchPadType] # value = <ScratchPadType.DEFINED_ENGINE_VIEW_SCRATCHPAD: 3>
def __init__(self) -> None:
...
def __repr__(self) -> str:
...
def clone(self) -> ...:
...
def is_initialized(self) -> bool:
...
@property
def active_reactions(self) -> gridfire._gridfire.reaction.ReactionSet:
...
@property
def active_species(self) -> set[fourdst._phys.atomic.Species]:
...
@property
def has_initialized(self) -> bool:
...
@property
def reaction_index_map(self) -> list[int]:
...
@property
def species_index_map(self) -> list[int]:
...
class GraphEngineScratchPad:
ID: typing.ClassVar[ScratchPadType] # value = <ScratchPadType.GRAPH_ENGINE_SCRATCHPAD: 0>
def __init__(self) -> None:
...
def __repr__(self) -> str:
...
def clone(self) -> ...:
...
def initialize(self, engine: ...) -> None:
...
def is_initialized(self) -> bool:
...
@property
def has_initialized(self) -> bool:
...
@property
def local_abundance_cache(self) -> list[float]:
...
@property
def most_recent_rhs_calculation(self) -> ... | None:
...
@property
def stepDerivativesCache(self) -> dict[int, ...]:
...
class MultiscalePartitioningEngineViewScratchPad:
ID: typing.ClassVar[ScratchPadType] # value = <ScratchPadType.MULTISCALE_PARTITIONING_ENGINE_VIEW_SCRATCHPAD: 1>
def __init__(self) -> None:
...
def __repr__(self) -> str:
...
def clone(self) -> ...:
...
def initialize(self) -> None:
...
def is_initialized(self) -> bool:
...
@property
def algebraic_species(self) -> list[fourdst._phys.atomic.Species]:
...
@property
def composition_cache(self) -> dict[int, fourdst._phys.composition.Composition]:
...
@property
def dynamic_species(self) -> list[fourdst._phys.atomic.Species]:
...
@property
def has_initialized(self) -> bool:
...
@property
def qse_groups(self) -> list[...]:
...
class ScratchPadType:
"""
Members:
GRAPH_ENGINE_SCRATCHPAD
MULTISCALE_PARTITIONING_ENGINE_VIEW_SCRATCHPAD
ADAPTIVE_ENGINE_VIEW_SCRATCHPAD
DEFINED_ENGINE_VIEW_SCRATCHPAD
"""
ADAPTIVE_ENGINE_VIEW_SCRATCHPAD: typing.ClassVar[ScratchPadType] # value = <ScratchPadType.ADAPTIVE_ENGINE_VIEW_SCRATCHPAD: 2>
DEFINED_ENGINE_VIEW_SCRATCHPAD: typing.ClassVar[ScratchPadType] # value = <ScratchPadType.DEFINED_ENGINE_VIEW_SCRATCHPAD: 3>
GRAPH_ENGINE_SCRATCHPAD: typing.ClassVar[ScratchPadType] # value = <ScratchPadType.GRAPH_ENGINE_SCRATCHPAD: 0>
MULTISCALE_PARTITIONING_ENGINE_VIEW_SCRATCHPAD: typing.ClassVar[ScratchPadType] # value = <ScratchPadType.MULTISCALE_PARTITIONING_ENGINE_VIEW_SCRATCHPAD: 1>
__members__: typing.ClassVar[dict[str, ScratchPadType]] # value = {'GRAPH_ENGINE_SCRATCHPAD': <ScratchPadType.GRAPH_ENGINE_SCRATCHPAD: 0>, 'MULTISCALE_PARTITIONING_ENGINE_VIEW_SCRATCHPAD': <ScratchPadType.MULTISCALE_PARTITIONING_ENGINE_VIEW_SCRATCHPAD: 1>, 'ADAPTIVE_ENGINE_VIEW_SCRATCHPAD': <ScratchPadType.ADAPTIVE_ENGINE_VIEW_SCRATCHPAD: 2>, 'DEFINED_ENGINE_VIEW_SCRATCHPAD': <ScratchPadType.DEFINED_ENGINE_VIEW_SCRATCHPAD: 3>}
def __eq__(self, other: typing.Any) -> bool:
...
def __getstate__(self) -> int:
...
def __hash__(self) -> int:
...
def __index__(self) -> int:
...
def __init__(self, value: typing.SupportsInt) -> None:
...
def __int__(self) -> int:
...
def __ne__(self, other: typing.Any) -> bool:
...
def __repr__(self) -> str:
...
def __setstate__(self, state: typing.SupportsInt) -> None:
...
def __str__(self) -> str:
...
@property
def name(self) -> str:
...
@property
def value(self) -> int:
...
class StateBlob:
@staticmethod
def error_to_string(arg0: StateBlobError) -> str:
...
def __init__(self) -> None:
...
def __repr__(self) -> str:
...
def clone_structure(self) -> StateBlob:
...
def enroll(self, arg0: ScratchPadType) -> None:
...
def get(self, arg0: ScratchPadType) -> ...:
...
def get_registered_scratchpads(self) -> set[ScratchPadType]:
...
def get_status(self, arg0: ScratchPadType) -> ...:
...
def get_status_map(self) -> dict[ScratchPadType, ...]:
...
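The StateBlob workflow is: enroll a scratchpad per ScratchPadType, then retrieve it with get(), which fails when the pad was never enrolled. A hypothetical pure-Python sketch of that registry behavior (StateBlobSketch, PadTypeSketch, and BlobError are stand-ins; the real bindings report failures via StateBlobError codes):

```python
# Stand-ins mirroring the StateBlob / ScratchPadType bindings above.
import enum

class PadTypeSketch(enum.Enum):
    GRAPH_ENGINE_SCRATCHPAD = 0
    ADAPTIVE_ENGINE_VIEW_SCRATCHPAD = 2

class BlobError(Exception):
    pass

class StateBlobSketch:
    def __init__(self):
        self._pads = {}  # one scratchpad slot per enrolled type

    def enroll(self, pad_type):
        # Enrolling registers an uninitialized pad; re-enrolling is a no-op here.
        self._pads.setdefault(pad_type, {"initialized": False})

    def get(self, pad_type):
        if pad_type not in self._pads:
            raise BlobError("SCRATCHPAD_NOT_FOUND")
        return self._pads[pad_type]

    def get_registered_scratchpads(self):
        return set(self._pads)

blob = StateBlobSketch()
blob.enroll(PadTypeSketch.GRAPH_ENGINE_SCRATCHPAD)
assert PadTypeSketch.GRAPH_ENGINE_SCRATCHPAD in blob.get_registered_scratchpads()
try:
    blob.get(PadTypeSketch.ADAPTIVE_ENGINE_VIEW_SCRATCHPAD)
except BlobError as e:
    assert "NOT_FOUND" in str(e)
```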
class StateBlobError:
"""
Members:
SCRATCHPAD_OUT_OF_BOUNDS
SCRATCHPAD_NOT_FOUND
SCRATCHPAD_BAD_CAST
SCRATCHPAD_NOT_INITIALIZED
SCRATCHPAD_TYPE_COLLISION
SCRATCHPAD_UNKNOWN_ERROR
"""
SCRATCHPAD_BAD_CAST: typing.ClassVar[StateBlobError] # value = <StateBlobError.SCRATCHPAD_BAD_CAST: 1>
SCRATCHPAD_NOT_FOUND: typing.ClassVar[StateBlobError] # value = <StateBlobError.SCRATCHPAD_NOT_FOUND: 0>
SCRATCHPAD_NOT_INITIALIZED: typing.ClassVar[StateBlobError] # value = <StateBlobError.SCRATCHPAD_NOT_INITIALIZED: 2>
SCRATCHPAD_OUT_OF_BOUNDS: typing.ClassVar[StateBlobError] # value = <StateBlobError.SCRATCHPAD_OUT_OF_BOUNDS: 4>
SCRATCHPAD_TYPE_COLLISION: typing.ClassVar[StateBlobError] # value = <StateBlobError.SCRATCHPAD_TYPE_COLLISION: 3>
SCRATCHPAD_UNKNOWN_ERROR: typing.ClassVar[StateBlobError] # value = <StateBlobError.SCRATCHPAD_UNKNOWN_ERROR: 5>
__members__: typing.ClassVar[dict[str, StateBlobError]] # value = {'SCRATCHPAD_OUT_OF_BOUNDS': <StateBlobError.SCRATCHPAD_OUT_OF_BOUNDS: 4>, 'SCRATCHPAD_NOT_FOUND': <StateBlobError.SCRATCHPAD_NOT_FOUND: 0>, 'SCRATCHPAD_BAD_CAST': <StateBlobError.SCRATCHPAD_BAD_CAST: 1>, 'SCRATCHPAD_NOT_INITIALIZED': <StateBlobError.SCRATCHPAD_NOT_INITIALIZED: 2>, 'SCRATCHPAD_TYPE_COLLISION': <StateBlobError.SCRATCHPAD_TYPE_COLLISION: 3>, 'SCRATCHPAD_UNKNOWN_ERROR': <StateBlobError.SCRATCHPAD_UNKNOWN_ERROR: 5>}
def __eq__(self, other: typing.Any) -> bool:
...
def __getstate__(self) -> int:
...
def __hash__(self) -> int:
...
def __index__(self) -> int:
...
def __init__(self, value: typing.SupportsInt) -> None:
...
def __int__(self) -> int:
...
def __ne__(self, other: typing.Any) -> bool:
...
def __repr__(self) -> str:
...
def __setstate__(self, state: typing.SupportsInt) -> None:
...
def __str__(self) -> str:
...
@property
def name(self) -> str:
...
@property
def value(self) -> int:
...
ADAPTIVE_ENGINE_VIEW_SCRATCHPAD: ScratchPadType # value = <ScratchPadType.ADAPTIVE_ENGINE_VIEW_SCRATCHPAD: 2>
ALREADY_REGISTERED: ADFunRegistrationResult # value = <ADFunRegistrationResult.ALREADY_REGISTERED: 1>
DEFINED_ENGINE_VIEW_SCRATCHPAD: ScratchPadType # value = <ScratchPadType.DEFINED_ENGINE_VIEW_SCRATCHPAD: 3>
GRAPH_ENGINE_SCRATCHPAD: ScratchPadType # value = <ScratchPadType.GRAPH_ENGINE_SCRATCHPAD: 0>
MULTISCALE_PARTITIONING_ENGINE_VIEW_SCRATCHPAD: ScratchPadType # value = <ScratchPadType.MULTISCALE_PARTITIONING_ENGINE_VIEW_SCRATCHPAD: 1>
SCRATCHPAD_BAD_CAST: StateBlobError # value = <StateBlobError.SCRATCHPAD_BAD_CAST: 1>
SCRATCHPAD_NOT_FOUND: StateBlobError # value = <StateBlobError.SCRATCHPAD_NOT_FOUND: 0>
SCRATCHPAD_NOT_INITIALIZED: StateBlobError # value = <StateBlobError.SCRATCHPAD_NOT_INITIALIZED: 2>
SCRATCHPAD_OUT_OF_BOUNDS: StateBlobError # value = <StateBlobError.SCRATCHPAD_OUT_OF_BOUNDS: 4>
SCRATCHPAD_TYPE_COLLISION: StateBlobError # value = <StateBlobError.SCRATCHPAD_TYPE_COLLISION: 3>
SCRATCHPAD_UNKNOWN_ERROR: StateBlobError # value = <StateBlobError.SCRATCHPAD_UNKNOWN_ERROR: 5>
SUCCESS: ADFunRegistrationResult # value = <ADFunRegistrationResult.SUCCESS: 0>
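These pybind11 enums round-trip between names and integers exactly as the stub promises (`__init__` from an int, `name`, `value`, `__members__` lookup). A self-contained sketch using a stand-in `IntEnum` that mirrors the `StateBlobError` values above (the real enum lives in `gridfire._gridfire.engine.scratchpads`):

```python
from enum import IntEnum

# Illustrative stand-in mirroring the StateBlobError values in the stub;
# the real class is a pybind11 enum, not a Python IntEnum.
class StateBlobError(IntEnum):
    SCRATCHPAD_NOT_FOUND = 0
    SCRATCHPAD_BAD_CAST = 1
    SCRATCHPAD_NOT_INITIALIZED = 2
    SCRATCHPAD_TYPE_COLLISION = 3
    SCRATCHPAD_OUT_OF_BOUNDS = 4
    SCRATCHPAD_UNKNOWN_ERROR = 5

err = StateBlobError(4)                       # construct from an int, per __init__
print(err.name)                               # SCRATCHPAD_OUT_OF_BOUNDS
print(int(err))                               # 4
print(StateBlobError["SCRATCHPAD_BAD_CAST"])  # lookup by name, like __members__
```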


@@ -1,61 +0,0 @@
"""
GridFire exceptions bindings
"""
from __future__ import annotations
__all__: list[str] = ['BadCollectionError', 'BadRHSEngineError', 'CVODESolverFailureError', 'DebugException', 'EngineError', 'FailedToPartitionEngineError', 'GridFireError', 'HashingError', 'IllConditionedJacobianError', 'InvalidQSESolutionError', 'JacobianError', 'KINSolSolverFailureError', 'MissingBaseReactionError', 'MissingKeyReactionError', 'MissingSeedSpeciesError', 'NetworkResizedError', 'PolicyError', 'ReactionError', 'ReactionParsingError', 'SUNDIALSError', 'ScratchPadError', 'SingularJacobianError', 'SolverError', 'StaleJacobianError', 'UnableToSetNetworkReactionsError', 'UninitializedJacobianError', 'UnknownJacobianError', 'UtilityError']
class BadCollectionError(EngineError):
pass
class BadRHSEngineError(EngineError):
pass
class CVODESolverFailureError(SUNDIALSError):
pass
class DebugException(GridFireError):
pass
class EngineError(GridFireError):
pass
class FailedToPartitionEngineError(EngineError):
pass
class GridFireError(Exception):
pass
class HashingError(UtilityError):
pass
class IllConditionedJacobianError(SolverError):
pass
class InvalidQSESolutionError(EngineError):
pass
class JacobianError(EngineError):
pass
class KINSolSolverFailureError(SUNDIALSError):
pass
class MissingBaseReactionError(PolicyError):
pass
class MissingKeyReactionError(PolicyError):
pass
class MissingSeedSpeciesError(PolicyError):
pass
class NetworkResizedError(EngineError):
pass
class PolicyError(GridFireError):
pass
class ReactionError(GridFireError):
pass
class ReactionParsingError(ReactionError):
pass
class SUNDIALSError(SolverError):
pass
class ScratchPadError(GridFireError):
pass
class SingularJacobianError(SolverError):
pass
class SolverError(GridFireError):
pass
class StaleJacobianError(JacobianError):
pass
class UnableToSetNetworkReactionsError(EngineError):
pass
class UninitializedJacobianError(JacobianError):
pass
class UnknownJacobianError(JacobianError):
pass
class UtilityError(GridFireError):
pass
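Every exception in this module ultimately derives from `GridFireError`, so callers can trap the whole family with a single `except` clause, or narrow to subtrees like `SolverError`. A self-contained sketch mirroring one branch of the hierarchy above (the real classes come from the GridFire exceptions bindings):

```python
# Illustrative mirror of one slice of the hierarchy above.
class GridFireError(Exception): ...
class SolverError(GridFireError): ...
class SUNDIALSError(SolverError): ...
class CVODESolverFailureError(SUNDIALSError): ...

# A broad handler on the root type catches every GridFire failure mode:
try:
    raise CVODESolverFailureError("CVODE step rejected")
except GridFireError as exc:
    caught = type(exc).__name__

print(caught)  # CVODESolverFailureError
```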


@@ -1,14 +0,0 @@
"""
GridFire io bindings
"""
from __future__ import annotations
__all__: list[str] = ['NetworkFileParser', 'ParsedNetworkData', 'SimpleReactionListFileParser']
class NetworkFileParser:
pass
class ParsedNetworkData:
pass
class SimpleReactionListFileParser(NetworkFileParser):
def parse(self, filename: str) -> ParsedNetworkData:
"""
Parse a simple reaction list file and return a ParsedNetworkData object.
"""

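Calling the parser follows the usual pattern: instantiate, call `parse(filename)`, and inspect the returned `ParsedNetworkData`. A self-contained toy with the same interface (the file format here, one reaction ID per line, is an assumption for illustration only, as are both stand-in class bodies):

```python
import os
import tempfile

# Toy stand-ins with the same interface as the bindings above; the real
# SimpleReactionListFileParser returns GridFire's own ParsedNetworkData.
class ParsedNetworkData:
    def __init__(self, reaction_ids):
        self.reaction_ids = reaction_ids

class SimpleReactionListFileParser:
    def parse(self, filename: str) -> ParsedNetworkData:
        # Assumed format: one reaction ID per non-blank line.
        with open(filename) as fh:
            ids = [line.strip() for line in fh if line.strip()]
        return ParsedNetworkData(ids)

with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as fh:
    fh.write("p + p -> d\nhe4 + he4 + he4 -> c12\n")
    path = fh.name

data = SimpleReactionListFileParser().parse(path)
os.unlink(path)
print(data.reaction_ids)
```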

@@ -1,142 +0,0 @@
"""
GridFire partition function bindings
"""
from __future__ import annotations
import collections.abc
import typing
__all__: list[str] = ['BasePartitionType', 'CompositePartitionFunction', 'GroundState', 'GroundStatePartitionFunction', 'PartitionFunction', 'RauscherThielemann', 'RauscherThielemannPartitionDataRecord', 'RauscherThielemannPartitionFunction', 'basePartitionTypeToString', 'stringToBasePartitionType']
class BasePartitionType:
"""
Members:
RauscherThielemann
GroundState
"""
GroundState: typing.ClassVar[BasePartitionType] # value = <BasePartitionType.GroundState: 1>
RauscherThielemann: typing.ClassVar[BasePartitionType] # value = <BasePartitionType.RauscherThielemann: 0>
__members__: typing.ClassVar[dict[str, BasePartitionType]] # value = {'RauscherThielemann': <BasePartitionType.RauscherThielemann: 0>, 'GroundState': <BasePartitionType.GroundState: 1>}
def __eq__(self, other: typing.Any) -> bool:
...
def __getstate__(self) -> int:
...
def __hash__(self) -> int:
...
def __index__(self) -> int:
...
def __init__(self, value: typing.SupportsInt) -> None:
...
def __int__(self) -> int:
...
def __ne__(self, other: typing.Any) -> bool:
...
def __repr__(self) -> str:
...
def __setstate__(self, state: typing.SupportsInt) -> None:
...
def __str__(self) -> str:
...
@property
def name(self) -> str:
...
@property
def value(self) -> int:
...
class CompositePartitionFunction:
@typing.overload
def __init__(self, partitionFunctions: collections.abc.Sequence[BasePartitionType]) -> None:
"""
Create a composite partition function from a list of base partition types.
"""
@typing.overload
def __init__(self, arg0: CompositePartitionFunction) -> None:
"""
Copy constructor for CompositePartitionFunction.
"""
def evaluate(self, z: typing.SupportsInt, a: typing.SupportsInt, T9: typing.SupportsFloat) -> float:
"""
Evaluate the composite partition function for given Z, A, and T9.
"""
def evaluateDerivative(self, z: typing.SupportsInt, a: typing.SupportsInt, T9: typing.SupportsFloat) -> float:
"""
Evaluate the derivative of the composite partition function for given Z, A, and T9.
"""
def get_type(self) -> str:
"""
Get the type of the partition function (should return 'Composite').
"""
def supports(self, z: typing.SupportsInt, a: typing.SupportsInt) -> bool:
"""
Check if the composite partition function supports given Z and A.
"""
class GroundStatePartitionFunction(PartitionFunction):
def __init__(self) -> None:
...
def evaluate(self, z: typing.SupportsInt, a: typing.SupportsInt, T9: typing.SupportsFloat) -> float:
"""
Evaluate the ground state partition function for given Z, A, and T9.
"""
def evaluateDerivative(self, z: typing.SupportsInt, a: typing.SupportsInt, T9: typing.SupportsFloat) -> float:
"""
Evaluate the derivative of the ground state partition function for given Z, A, and T9.
"""
def get_type(self) -> str:
"""
Get the type of the partition function (should return 'GroundState').
"""
def supports(self, z: typing.SupportsInt, a: typing.SupportsInt) -> bool:
"""
Check if the ground state partition function supports given Z and A.
"""
class PartitionFunction:
pass
class RauscherThielemannPartitionDataRecord:
@property
def a(self) -> int:
"""
Mass number
"""
@property
def ground_state_spin(self) -> float:
"""
Ground state spin
"""
@property
def normalized_g_values(self) -> float:
"""
Normalized g-values for the first 24 energy levels
"""
@property
def z(self) -> int:
"""
Atomic number
"""
class RauscherThielemannPartitionFunction(PartitionFunction):
def __init__(self) -> None:
...
def evaluate(self, z: typing.SupportsInt, a: typing.SupportsInt, T9: typing.SupportsFloat) -> float:
"""
Evaluate the Rauscher-Thielemann partition function for given Z, A, and T9.
"""
def evaluateDerivative(self, z: typing.SupportsInt, a: typing.SupportsInt, T9: typing.SupportsFloat) -> float:
"""
Evaluate the derivative of the Rauscher-Thielemann partition function for given Z, A, and T9.
"""
def get_type(self) -> str:
"""
Get the type of the partition function (should return 'RauscherThielemann').
"""
def supports(self, z: typing.SupportsInt, a: typing.SupportsInt) -> bool:
"""
Check if the Rauscher-Thielemann partition function supports given Z and A.
"""
def basePartitionTypeToString(type: BasePartitionType) -> str:
"""
Convert BasePartitionType to string.
"""
def stringToBasePartitionType(typeStr: str) -> BasePartitionType:
"""
Convert string to BasePartitionType.
"""
GroundState: BasePartitionType # value = <BasePartitionType.GroundState: 1>
RauscherThielemann: BasePartitionType # value = <BasePartitionType.RauscherThielemann: 0>
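The `CompositePartitionFunction` interface suggests a dispatch pattern: each member reports whether it `supports(z, a)`, and the composite delegates `evaluate` to a supporting member. A self-contained sketch of that pattern; the member ordering rule (first supporting member in constructor order wins) is an assumption not confirmed by the stubs, and both member classes are illustrative stand-ins:

```python
class GroundStatePF:
    def supports(self, z, a): return True      # ground state is always defined
    def evaluate(self, z, a, T9): return 1.0   # normalized ground-state value

class TabulatedPF:
    def __init__(self, table): self.table = table  # {(z, a): callable G(T9)}
    def supports(self, z, a): return (z, a) in self.table
    def evaluate(self, z, a, T9): return self.table[(z, a)](T9)

class CompositePF:
    def __init__(self, members): self.members = list(members)
    def supports(self, z, a):
        return any(m.supports(z, a) for m in self.members)
    def evaluate(self, z, a, T9):
        # Delegate to the first member that supports (z, a).
        for m in self.members:
            if m.supports(z, a):
                return m.evaluate(z, a, T9)
        raise KeyError((z, a))

pf = CompositePF([TabulatedPF({(26, 56): lambda T9: 1.0 + 0.1 * T9}),
                  GroundStatePF()])
print(pf.evaluate(26, 56, 2.0))  # tabulated member
print(pf.evaluate(1, 1, 2.0))    # falls back to ground state
```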


@@ -1,769 +0,0 @@
"""
GridFire network policy bindings
"""
from __future__ import annotations
import collections.abc
import fourdst._phys.atomic
import fourdst._phys.composition
import gridfire._gridfire.engine
import gridfire._gridfire.engine.scratchpads
import gridfire._gridfire.partition
import gridfire._gridfire.reaction
import typing
__all__: list[str] = ['CNOChainPolicy', 'CNOIChainPolicy', 'CNOIIChainPolicy', 'CNOIIIChainPolicy', 'CNOIVChainPolicy', 'ConstructionResults', 'HotCNOChainPolicy', 'HotCNOIChainPolicy', 'HotCNOIIChainPolicy', 'HotCNOIIIChainPolicy', 'INITIALIZED_UNVERIFIED', 'INITIALIZED_VERIFIED', 'MISSING_KEY_REACTION', 'MISSING_KEY_SPECIES', 'MainSequencePolicy', 'MainSequenceReactionChainPolicy', 'MultiReactionChainPolicy', 'NetworkPolicy', 'NetworkPolicyStatus', 'ProtonProtonChainPolicy', 'ProtonProtonIChainPolicy', 'ProtonProtonIIChainPolicy', 'ProtonProtonIIIChainPolicy', 'ReactionChainPolicy', 'TemperatureDependentChainPolicy', 'TripleAlphaChainPolicy', 'UNINITIALIZED', 'network_policy_status_to_string']
class CNOChainPolicy(MultiReactionChainPolicy):
def __eq__(self, other: ReactionChainPolicy) -> bool:
"""
Check equality with another ReactionChainPolicy.
"""
def __hash__(self) -> int:
...
def __init__(self) -> None:
...
def __ne__(self, other: ReactionChainPolicy) -> bool:
"""
Check inequality with another ReactionChainPolicy.
"""
def __repr__(self) -> str:
...
@typing.overload
def contains(self, id: str) -> bool:
"""
Check if the reaction chain contains a reaction with the given ID.
"""
@typing.overload
def contains(self, reaction: ...) -> bool:
"""
Check if the reaction chain contains the given reaction.
"""
def get_reactions(self) -> gridfire._gridfire.reaction.ReactionSet:
"""
Get the ReactionSet representing this reaction chain.
"""
def hash(self, seed: typing.SupportsInt) -> int:
"""
Compute a hash value for the reaction chain policy.
"""
def name(self) -> str:
"""
Get the name of the reaction chain policy.
"""
class CNOIChainPolicy(TemperatureDependentChainPolicy):
def __eq__(self, other: ReactionChainPolicy) -> bool:
"""
Check equality with another ReactionChainPolicy.
"""
def __hash__(self) -> int:
...
def __init__(self) -> None:
...
def __ne__(self, other: ReactionChainPolicy) -> bool:
"""
Check inequality with another ReactionChainPolicy.
"""
def __repr__(self) -> str:
...
@typing.overload
def contains(self, id: str) -> bool:
"""
Check if the reaction chain contains a reaction with the given ID.
"""
@typing.overload
def contains(self, reaction: ...) -> bool:
"""
Check if the reaction chain contains the given reaction.
"""
def get_reactions(self) -> gridfire._gridfire.reaction.ReactionSet:
"""
Get the ReactionSet representing this reaction chain.
"""
def hash(self, seed: typing.SupportsInt) -> int:
"""
Compute a hash value for the reaction chain policy.
"""
def name(self) -> str:
"""
Get the name of the reaction chain policy.
"""
class CNOIIChainPolicy(TemperatureDependentChainPolicy):
def __eq__(self, other: ReactionChainPolicy) -> bool:
"""
Check equality with another ReactionChainPolicy.
"""
def __hash__(self) -> int:
...
def __init__(self) -> None:
...
def __ne__(self, other: ReactionChainPolicy) -> bool:
"""
Check inequality with another ReactionChainPolicy.
"""
def __repr__(self) -> str:
...
@typing.overload
def contains(self, id: str) -> bool:
"""
Check if the reaction chain contains a reaction with the given ID.
"""
@typing.overload
def contains(self, reaction: ...) -> bool:
"""
Check if the reaction chain contains the given reaction.
"""
def get_reactions(self) -> gridfire._gridfire.reaction.ReactionSet:
"""
Get the ReactionSet representing this reaction chain.
"""
def hash(self, seed: typing.SupportsInt) -> int:
"""
Compute a hash value for the reaction chain policy.
"""
def name(self) -> str:
"""
Get the name of the reaction chain policy.
"""
class CNOIIIChainPolicy(TemperatureDependentChainPolicy):
def __eq__(self, other: ReactionChainPolicy) -> bool:
"""
Check equality with another ReactionChainPolicy.
"""
def __hash__(self) -> int:
...
def __init__(self) -> None:
...
def __ne__(self, other: ReactionChainPolicy) -> bool:
"""
Check inequality with another ReactionChainPolicy.
"""
def __repr__(self) -> str:
...
@typing.overload
def contains(self, id: str) -> bool:
"""
Check if the reaction chain contains a reaction with the given ID.
"""
@typing.overload
def contains(self, reaction: ...) -> bool:
"""
Check if the reaction chain contains the given reaction.
"""
def get_reactions(self) -> gridfire._gridfire.reaction.ReactionSet:
"""
Get the ReactionSet representing this reaction chain.
"""
def hash(self, seed: typing.SupportsInt) -> int:
"""
Compute a hash value for the reaction chain policy.
"""
def name(self) -> str:
"""
Get the name of the reaction chain policy.
"""
class CNOIVChainPolicy(TemperatureDependentChainPolicy):
def __eq__(self, other: ReactionChainPolicy) -> bool:
"""
Check equality with another ReactionChainPolicy.
"""
def __hash__(self) -> int:
...
def __init__(self) -> None:
...
def __ne__(self, other: ReactionChainPolicy) -> bool:
"""
Check inequality with another ReactionChainPolicy.
"""
def __repr__(self) -> str:
...
@typing.overload
def contains(self, id: str) -> bool:
"""
Check if the reaction chain contains a reaction with the given ID.
"""
@typing.overload
def contains(self, reaction: ...) -> bool:
"""
Check if the reaction chain contains the given reaction.
"""
def get_reactions(self) -> gridfire._gridfire.reaction.ReactionSet:
"""
Get the ReactionSet representing this reaction chain.
"""
def hash(self, seed: typing.SupportsInt) -> int:
"""
Compute a hash value for the reaction chain policy.
"""
def name(self) -> str:
"""
Get the name of the reaction chain policy.
"""
class ConstructionResults:
@property
def engine(self) -> gridfire._gridfire.engine.DynamicEngine:
...
@property
def scratch_blob(self) -> gridfire._gridfire.engine.scratchpads.StateBlob:
...
class HotCNOChainPolicy(MultiReactionChainPolicy):
def __eq__(self, other: ReactionChainPolicy) -> bool:
"""
Check equality with another ReactionChainPolicy.
"""
def __hash__(self) -> int:
...
def __init__(self) -> None:
...
def __ne__(self, other: ReactionChainPolicy) -> bool:
"""
Check inequality with another ReactionChainPolicy.
"""
def __repr__(self) -> str:
...
@typing.overload
def contains(self, id: str) -> bool:
"""
Check if the reaction chain contains a reaction with the given ID.
"""
@typing.overload
def contains(self, reaction: ...) -> bool:
"""
Check if the reaction chain contains the given reaction.
"""
def get_reactions(self) -> gridfire._gridfire.reaction.ReactionSet:
"""
Get the ReactionSet representing this reaction chain.
"""
def hash(self, seed: typing.SupportsInt) -> int:
"""
Compute a hash value for the reaction chain policy.
"""
def name(self) -> str:
"""
Get the name of the reaction chain policy.
"""
class HotCNOIChainPolicy(TemperatureDependentChainPolicy):
def __eq__(self, other: ReactionChainPolicy) -> bool:
"""
Check equality with another ReactionChainPolicy.
"""
def __hash__(self) -> int:
...
def __init__(self) -> None:
...
def __ne__(self, other: ReactionChainPolicy) -> bool:
"""
Check inequality with another ReactionChainPolicy.
"""
def __repr__(self) -> str:
...
@typing.overload
def contains(self, id: str) -> bool:
"""
Check if the reaction chain contains a reaction with the given ID.
"""
@typing.overload
def contains(self, reaction: ...) -> bool:
"""
Check if the reaction chain contains the given reaction.
"""
def get_reactions(self) -> gridfire._gridfire.reaction.ReactionSet:
"""
Get the ReactionSet representing this reaction chain.
"""
def hash(self, seed: typing.SupportsInt) -> int:
"""
Compute a hash value for the reaction chain policy.
"""
def name(self) -> str:
"""
Get the name of the reaction chain policy.
"""
class HotCNOIIChainPolicy(TemperatureDependentChainPolicy):
def __eq__(self, other: ReactionChainPolicy) -> bool:
"""
Check equality with another ReactionChainPolicy.
"""
def __hash__(self) -> int:
...
def __init__(self) -> None:
...
def __ne__(self, other: ReactionChainPolicy) -> bool:
"""
Check inequality with another ReactionChainPolicy.
"""
def __repr__(self) -> str:
...
@typing.overload
def contains(self, id: str) -> bool:
"""
Check if the reaction chain contains a reaction with the given ID.
"""
@typing.overload
def contains(self, reaction: ...) -> bool:
"""
Check if the reaction chain contains the given reaction.
"""
def get_reactions(self) -> gridfire._gridfire.reaction.ReactionSet:
"""
Get the ReactionSet representing this reaction chain.
"""
def hash(self, seed: typing.SupportsInt) -> int:
"""
Compute a hash value for the reaction chain policy.
"""
def name(self) -> str:
"""
Get the name of the reaction chain policy.
"""
class HotCNOIIIChainPolicy(TemperatureDependentChainPolicy):
def __eq__(self, other: ReactionChainPolicy) -> bool:
"""
Check equality with another ReactionChainPolicy.
"""
def __hash__(self) -> int:
...
def __init__(self) -> None:
...
def __ne__(self, other: ReactionChainPolicy) -> bool:
"""
Check inequality with another ReactionChainPolicy.
"""
def __repr__(self) -> str:
...
@typing.overload
def contains(self, id: str) -> bool:
"""
Check if the reaction chain contains a reaction with the given ID.
"""
@typing.overload
def contains(self, reaction: ...) -> bool:
"""
Check if the reaction chain contains the given reaction.
"""
def get_reactions(self) -> gridfire._gridfire.reaction.ReactionSet:
"""
Get the ReactionSet representing this reaction chain.
"""
def hash(self, seed: typing.SupportsInt) -> int:
"""
Compute a hash value for the reaction chain policy.
"""
def name(self) -> str:
"""
Get the name of the reaction chain policy.
"""
class MainSequencePolicy(NetworkPolicy):
@typing.overload
def __init__(self, composition: fourdst._phys.composition.Composition) -> None:
"""
Construct MainSequencePolicy from an existing composition.
"""
@typing.overload
def __init__(self, seed_species: collections.abc.Sequence[fourdst._phys.atomic.Species], mass_fractions: collections.abc.Sequence[typing.SupportsFloat]) -> None:
"""
Construct MainSequencePolicy from seed species and mass fractions.
"""
def construct(self) -> ConstructionResults:
"""
Construct the network according to the policy.
"""
def get_engine_stack(self) -> list[gridfire._gridfire.engine.DynamicEngine]:
...
def get_engine_types_stack(self) -> list[gridfire._gridfire.engine.EngineTypes]:
"""
Get the types of engines in the stack constructed by the network policy.
"""
def get_partition_function(self) -> gridfire._gridfire.partition.PartitionFunction:
...
def get_seed_reactions(self) -> gridfire._gridfire.reaction.ReactionSet:
"""
Get the set of seed reactions required by the network policy.
"""
def get_seed_species(self) -> set[fourdst._phys.atomic.Species]:
"""
Get the set of seed species required by the network policy.
"""
def get_stack_scratch_blob(self) -> gridfire._gridfire.engine.scratchpads.StateBlob:
...
def get_status(self) -> NetworkPolicyStatus:
"""
Get the current status of the network policy.
"""
def name(self) -> str:
"""
Get the name of the network policy.
"""
class MainSequenceReactionChainPolicy(MultiReactionChainPolicy):
def __eq__(self, other: ReactionChainPolicy) -> bool:
"""
Check equality with another ReactionChainPolicy.
"""
def __hash__(self) -> int:
...
def __init__(self) -> None:
...
def __ne__(self, other: ReactionChainPolicy) -> bool:
"""
Check inequality with another ReactionChainPolicy.
"""
def __repr__(self) -> str:
...
@typing.overload
def contains(self, id: str) -> bool:
"""
Check if the reaction chain contains a reaction with the given ID.
"""
@typing.overload
def contains(self, reaction: ...) -> bool:
"""
Check if the reaction chain contains the given reaction.
"""
def get_reactions(self) -> gridfire._gridfire.reaction.ReactionSet:
"""
Get the ReactionSet representing this reaction chain.
"""
def hash(self, seed: typing.SupportsInt) -> int:
"""
Compute a hash value for the reaction chain policy.
"""
def name(self) -> str:
"""
Get the name of the reaction chain policy.
"""
class MultiReactionChainPolicy(ReactionChainPolicy):
pass
class NetworkPolicy:
pass
class NetworkPolicyStatus:
"""
Members:
UNINITIALIZED
INITIALIZED_UNVERIFIED
MISSING_KEY_REACTION
MISSING_KEY_SPECIES
INITIALIZED_VERIFIED
"""
INITIALIZED_UNVERIFIED: typing.ClassVar[NetworkPolicyStatus] # value = <NetworkPolicyStatus.INITIALIZED_UNVERIFIED: 1>
INITIALIZED_VERIFIED: typing.ClassVar[NetworkPolicyStatus] # value = <NetworkPolicyStatus.INITIALIZED_VERIFIED: 4>
MISSING_KEY_REACTION: typing.ClassVar[NetworkPolicyStatus] # value = <NetworkPolicyStatus.MISSING_KEY_REACTION: 2>
MISSING_KEY_SPECIES: typing.ClassVar[NetworkPolicyStatus] # value = <NetworkPolicyStatus.MISSING_KEY_SPECIES: 3>
UNINITIALIZED: typing.ClassVar[NetworkPolicyStatus] # value = <NetworkPolicyStatus.UNINITIALIZED: 0>
__members__: typing.ClassVar[dict[str, NetworkPolicyStatus]] # value = {'UNINITIALIZED': <NetworkPolicyStatus.UNINITIALIZED: 0>, 'INITIALIZED_UNVERIFIED': <NetworkPolicyStatus.INITIALIZED_UNVERIFIED: 1>, 'MISSING_KEY_REACTION': <NetworkPolicyStatus.MISSING_KEY_REACTION: 2>, 'MISSING_KEY_SPECIES': <NetworkPolicyStatus.MISSING_KEY_SPECIES: 3>, 'INITIALIZED_VERIFIED': <NetworkPolicyStatus.INITIALIZED_VERIFIED: 4>}
def __eq__(self, other: typing.Any) -> bool:
...
def __getstate__(self) -> int:
...
def __hash__(self) -> int:
...
def __index__(self) -> int:
...
def __init__(self, value: typing.SupportsInt) -> None:
...
def __int__(self) -> int:
...
def __ne__(self, other: typing.Any) -> bool:
...
def __repr__(self) -> str:
...
def __setstate__(self, state: typing.SupportsInt) -> None:
...
def __str__(self) -> str:
...
@property
def name(self) -> str:
...
@property
def value(self) -> int:
...
class ProtonProtonChainPolicy(MultiReactionChainPolicy):
def __eq__(self, other: ReactionChainPolicy) -> bool:
"""
Check equality with another ReactionChainPolicy.
"""
def __hash__(self) -> int:
...
def __init__(self) -> None:
...
def __ne__(self, other: ReactionChainPolicy) -> bool:
"""
Check inequality with another ReactionChainPolicy.
"""
def __repr__(self) -> str:
...
@typing.overload
def contains(self, id: str) -> bool:
"""
Check if the reaction chain contains a reaction with the given ID.
"""
@typing.overload
def contains(self, reaction: ...) -> bool:
"""
Check if the reaction chain contains the given reaction.
"""
def get_reactions(self) -> gridfire._gridfire.reaction.ReactionSet:
"""
Get the ReactionSet representing this reaction chain.
"""
def hash(self, seed: typing.SupportsInt) -> int:
"""
Compute a hash value for the reaction chain policy.
"""
def name(self) -> str:
"""
Get the name of the reaction chain policy.
"""
class ProtonProtonIChainPolicy(TemperatureDependentChainPolicy):
def __eq__(self, other: ReactionChainPolicy) -> bool:
"""
Check equality with another ReactionChainPolicy.
"""
def __hash__(self) -> int:
...
def __init__(self) -> None:
...
def __ne__(self, other: ReactionChainPolicy) -> bool:
"""
Check inequality with another ReactionChainPolicy.
"""
def __repr__(self) -> str:
...
@typing.overload
def contains(self, id: str) -> bool:
"""
Check if the reaction chain contains a reaction with the given ID.
"""
@typing.overload
def contains(self, reaction: ...) -> bool:
"""
Check if the reaction chain contains the given reaction.
"""
def get_reactions(self) -> gridfire._gridfire.reaction.ReactionSet:
"""
Get the ReactionSet representing this reaction chain.
"""
def hash(self, seed: typing.SupportsInt) -> int:
"""
Compute a hash value for the reaction chain policy.
"""
def name(self) -> str:
"""
Get the name of the reaction chain policy.
"""
class ProtonProtonIIChainPolicy(TemperatureDependentChainPolicy):
def __eq__(self, other: ReactionChainPolicy) -> bool:
"""
Check equality with another ReactionChainPolicy.
"""
def __hash__(self) -> int:
...
def __init__(self) -> None:
...
def __ne__(self, other: ReactionChainPolicy) -> bool:
"""
Check inequality with another ReactionChainPolicy.
"""
def __repr__(self) -> str:
...
@typing.overload
def contains(self, id: str) -> bool:
"""
Check if the reaction chain contains a reaction with the given ID.
"""
@typing.overload
def contains(self, reaction: ...) -> bool:
"""
Check if the reaction chain contains the given reaction.
"""
def get_reactions(self) -> gridfire._gridfire.reaction.ReactionSet:
"""
Get the ReactionSet representing this reaction chain.
"""
def hash(self, seed: typing.SupportsInt) -> int:
"""
Compute a hash value for the reaction chain policy.
"""
def name(self) -> str:
"""
Get the name of the reaction chain policy.
"""
class ProtonProtonIIIChainPolicy(TemperatureDependentChainPolicy):
def __eq__(self, other: ReactionChainPolicy) -> bool:
"""
Check equality with another ReactionChainPolicy.
"""
def __hash__(self) -> int:
...
def __init__(self) -> None:
...
def __ne__(self, other: ReactionChainPolicy) -> bool:
"""
Check inequality with another ReactionChainPolicy.
"""
def __repr__(self) -> str:
...
@typing.overload
def contains(self, id: str) -> bool:
"""
Check if the reaction chain contains a reaction with the given ID.
"""
@typing.overload
def contains(self, reaction: ...) -> bool:
"""
Check if the reaction chain contains the given reaction.
"""
def get_reactions(self) -> gridfire._gridfire.reaction.ReactionSet:
"""
Get the ReactionSet representing this reaction chain.
"""
def hash(self, seed: typing.SupportsInt) -> int:
"""
Compute a hash value for the reaction chain policy.
"""
def name(self) -> str:
"""
Get the name of the reaction chain policy.
"""
class ReactionChainPolicy:
pass
class TemperatureDependentChainPolicy(ReactionChainPolicy):
pass
class TripleAlphaChainPolicy(TemperatureDependentChainPolicy):
def __eq__(self, other: ReactionChainPolicy) -> bool:
"""
Check equality with another ReactionChainPolicy.
"""
def __hash__(self) -> int:
...
def __init__(self) -> None:
...
def __ne__(self, other: ReactionChainPolicy) -> bool:
"""
Check inequality with another ReactionChainPolicy.
"""
def __repr__(self) -> str:
...
@typing.overload
def contains(self, id: str) -> bool:
"""
Check if the reaction chain contains a reaction with the given ID.
"""
@typing.overload
def contains(self, reaction: ...) -> bool:
"""
Check if the reaction chain contains the given reaction.
"""
def get_reactions(self) -> gridfire._gridfire.reaction.ReactionSet:
"""
Get the ReactionSet representing this reaction chain.
"""
def hash(self, seed: typing.SupportsInt) -> int:
"""
Compute a hash value for the reaction chain policy.
"""
def name(self) -> str:
"""
Get the name of the reaction chain policy.
"""
def network_policy_status_to_string(status: NetworkPolicyStatus) -> str:
"""
Convert a NetworkPolicyStatus enum value to its string representation.
"""
INITIALIZED_UNVERIFIED: NetworkPolicyStatus # value = <NetworkPolicyStatus.INITIALIZED_UNVERIFIED: 1>
INITIALIZED_VERIFIED: NetworkPolicyStatus # value = <NetworkPolicyStatus.INITIALIZED_VERIFIED: 4>
MISSING_KEY_REACTION: NetworkPolicyStatus # value = <NetworkPolicyStatus.MISSING_KEY_REACTION: 2>
MISSING_KEY_SPECIES: NetworkPolicyStatus # value = <NetworkPolicyStatus.MISSING_KEY_SPECIES: 3>
UNINITIALIZED: NetworkPolicyStatus # value = <NetworkPolicyStatus.UNINITIALIZED: 0>
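After constructing a policy, `get_status()` reports where it sits in the verification lifecycle; anything other than `INITIALIZED_VERIFIED` indicates the seed reactions or species are incomplete or unchecked. A self-contained mirror of the status enum and its to-string helper (the real versions are the `NetworkPolicyStatus` enum and `network_policy_status_to_string` bound above):

```python
from enum import IntEnum

# Local mirror of NetworkPolicyStatus for illustration only.
class NetworkPolicyStatus(IntEnum):
    UNINITIALIZED = 0
    INITIALIZED_UNVERIFIED = 1
    MISSING_KEY_REACTION = 2
    MISSING_KEY_SPECIES = 3
    INITIALIZED_VERIFIED = 4

def network_policy_status_to_string(status: NetworkPolicyStatus) -> str:
    return status.name

# Typical post-construction check: only INITIALIZED_VERIFIED means
# the policy's required reactions and species were all found.
status = NetworkPolicyStatus(1)
ok = status == NetworkPolicyStatus.INITIALIZED_VERIFIED
print(network_policy_status_to_string(status), ok)
```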


@@ -1,249 +0,0 @@
"""
GridFire reaction bindings
"""
from __future__ import annotations
import collections.abc
import fourdst._phys.atomic
import fourdst._phys.composition
import typing
__all__: list[str] = ['LogicalReaclibReaction', 'RateCoefficientSet', 'ReaclibReaction', 'ReactionSet', 'get_all_reactions', 'packReactionSet']
class LogicalReaclibReaction(ReaclibReaction):
@typing.overload
def __init__(self, reactions: collections.abc.Sequence[ReaclibReaction]) -> None:
"""
Construct a LogicalReaclibReaction from a vector of ReaclibReaction objects.
"""
@typing.overload
def __init__(self, reactions: collections.abc.Sequence[ReaclibReaction], is_reverse: bool) -> None:
"""
Construct a LogicalReaclibReaction from a vector of ReaclibReaction objects, flagging whether the aggregated rate is a reverse rate.
"""
def __len__(self) -> int:
"""
Overload len() to return the number of source rates.
"""
def add_reaction(self, reaction: ReaclibReaction) -> None:
"""
Add another Reaction source to this logical reaction.
"""
def calculate_forward_rate_log_derivative(self, T9: typing.SupportsFloat, rho: typing.SupportsFloat, Ye: typing.SupportsFloat, mue: typing.SupportsFloat, Composition: fourdst._phys.composition.Composition) -> float:
"""
Calculate the forward rate log derivative at a given temperature T9 (in units of 10^9 K).
"""
def calculate_rate(self, T9: typing.SupportsFloat, rho: typing.SupportsFloat, Ye: typing.SupportsFloat, mue: typing.SupportsFloat, Y: collections.abc.Sequence[typing.SupportsFloat], index_to_species_map: collections.abc.Mapping[typing.SupportsInt, fourdst._phys.atomic.Species]) -> float:
"""
Calculate the reaction rate at a given temperature T9 (in units of 10^9 K). Note that for a REACLIB reaction only T9 is actually used; the other parameters exist for interface compatibility.
"""
def size(self) -> int:
"""
Get the number of source rates contributing to this logical reaction.
"""
def sources(self) -> list[str]:
"""
Get the list of source labels for the aggregated rates.
"""
class RateCoefficientSet:
def __init__(self, a0: typing.SupportsFloat, a1: typing.SupportsFloat, a2: typing.SupportsFloat, a3: typing.SupportsFloat, a4: typing.SupportsFloat, a5: typing.SupportsFloat, a6: typing.SupportsFloat) -> None:
"""
Construct a RateCoefficientSet with the given parameters.
"""
class ReaclibReaction:
__hash__: typing.ClassVar[None] = None
def __eq__(self, arg0: ReaclibReaction) -> bool:
"""
Equality operator for reactions based on their IDs.
"""
def __init__(self, id: str, peName: str, chapter: typing.SupportsInt, reactants: collections.abc.Sequence[fourdst._phys.atomic.Species], products: collections.abc.Sequence[fourdst._phys.atomic.Species], qValue: typing.SupportsFloat, label: str, sets: RateCoefficientSet, reverse: bool = False) -> None:
"""
Construct a Reaction with the given parameters.
"""
def __neq__(self, arg0: ReaclibReaction) -> bool:
"""
Inequality operator for reactions based on their IDs.
"""
def __repr__(self) -> str:
...
def all_species(self) -> set[fourdst._phys.atomic.Species]:
"""
Get all species involved in the reaction (both reactants and products) as a set.
"""
def calculate_rate(self, T9: typing.SupportsFloat, rho: typing.SupportsFloat, Y: collections.abc.Sequence[typing.SupportsFloat]) -> float:
"""
Calculate the reaction rate at a given temperature T9 (in units of 10^9 K).
"""
def chapter(self) -> int:
"""
Get the REACLIB chapter number defining the reaction structure.
"""
def contains(self, species: fourdst._phys.atomic.Species) -> bool:
"""
Check if the reaction contains a specific species.
"""
def contains_product(self, arg0: fourdst._phys.atomic.Species) -> bool:
"""
Check if the reaction contains a specific product species.
"""
def contains_reactant(self, arg0: fourdst._phys.atomic.Species) -> bool:
"""
Check if the reaction contains a specific reactant species.
"""
def excess_energy(self) -> float:
"""
Calculate the excess energy from the mass difference of reactants and products.
"""
def hash(self, seed: typing.SupportsInt = 0) -> int:
"""
Compute a hash for the reaction based on its ID.
"""
def id(self) -> str:
"""
Get the unique identifier of the reaction.
"""
def is_reverse(self) -> bool:
"""
Check if this is a reverse reaction rate.
"""
def num_species(self) -> int:
"""
Count the number of species in the reaction.
"""
def peName(self) -> str:
"""
Get the reaction name in (projectile, ejectile) notation (e.g., 'p(p,g)d').
"""
def product_species(self) -> set[fourdst._phys.atomic.Species]:
"""
Get the product species of the reaction as a set.
"""
def products(self) -> list[fourdst._phys.atomic.Species]:
"""
Get a list of product species in the reaction.
"""
def qValue(self) -> float:
"""
Get the Q-value of the reaction in MeV.
"""
def rateCoefficients(self) -> RateCoefficientSet:
"""
Get the set of rate coefficients.
"""
def reactant_species(self) -> set[fourdst._phys.atomic.Species]:
"""
Get the reactant species of the reaction as a set.
"""
def reactants(self) -> list[fourdst._phys.atomic.Species]:
"""
Get a list of reactant species in the reaction.
"""
def sourceLabel(self) -> str:
"""
Get the source label for the rate data (e.g., 'wc12w', 'st08').
"""
@typing.overload
def stoichiometry(self, species: fourdst._phys.atomic.Species) -> int:
"""
Get the stoichiometric coefficient of the given species in the reaction.
"""
@typing.overload
def stoichiometry(self) -> dict[fourdst._phys.atomic.Species, int]:
"""
Get the stoichiometry of the reaction as a map from species to their coefficients.
"""
class ReactionSet:
__hash__: typing.ClassVar[None] = None
@staticmethod
def from_clones(reactions: collections.abc.Sequence[...]) -> ReactionSet:
"""
Create a ReactionSet that owns clones of the input reactions.
"""
def __eq__(self, LogicalReactionSet: ReactionSet) -> bool:
"""
Equality operator for LogicalReactionSets based on their contents.
"""
def __getitem__(self, index: typing.SupportsInt) -> ...:
"""
Get a LogicalReaclibReaction by index.
"""
def __getitem__(self, id: str) -> ...:
"""
Get a LogicalReaclibReaction by its ID.
"""
@typing.overload
def __init__(self, reactions: collections.abc.Sequence[...]) -> None:
"""
Construct a LogicalReactionSet from a sequence of LogicalReaclibReaction objects.
"""
@typing.overload
def __init__(self) -> None:
"""
Default constructor for an empty LogicalReactionSet.
"""
@typing.overload
def __init__(self, other: ReactionSet) -> None:
"""
Copy constructor for LogicalReactionSet.
"""
def __len__(self) -> int:
"""
Overload len() to return the number of LogicalReactions.
"""
def __ne__(self, LogicalReactionSet: ReactionSet) -> bool:
"""
Inequality operator for LogicalReactionSets based on their contents.
"""
def __repr__(self) -> str:
...
def add_reaction(self, reaction: ...) -> None:
"""
Add a LogicalReaclibReaction to the set.
"""
def clear(self) -> None:
"""
Remove all LogicalReactions from the set.
"""
@typing.overload
def contains(self, id: str) -> bool:
"""
Check if the set contains a reaction with the given ID.
"""
@typing.overload
def contains(self, reaction: ...) -> bool:
"""
Check if the set contains a specific Reaction.
"""
def contains_product(self, species: fourdst._phys.atomic.Species) -> bool:
"""
Check if any reaction in the set has the species as a product.
"""
def contains_reactant(self, species: fourdst._phys.atomic.Species) -> bool:
"""
Check if any reaction in the set has the species as a reactant.
"""
def contains_species(self, species: fourdst._phys.atomic.Species) -> bool:
"""
Check if any reaction in the set involves the given species.
"""
def getReactionSetSpecies(self) -> set[fourdst._phys.atomic.Species]:
"""
Get all species involved in the reactions of the set as a set of Species objects.
"""
def hash(self, seed: typing.SupportsInt = 0) -> int:
"""
Compute a hash for the LogicalReactionSet based on its contents.
"""
def remove_reaction(self, reaction: ...) -> None:
"""
Remove a LogicalReaclibReaction from the set.
"""
def size(self) -> int:
"""
Get the number of LogicalReactions in the set.
"""
def get_all_reactions() -> ReactionSet:
"""
Get all reactions from the REACLIB database.
"""
def packReactionSet(reactionSet: ReactionSet) -> ReactionSet:
"""
Convert a ReactionSet to a LogicalReactionSet by aggregating reactions with the same peName.
"""


@@ -1,68 +0,0 @@
"""
GridFire plasma screening bindings
"""
from __future__ import annotations
import collections.abc
import fourdst._phys.atomic
import gridfire._gridfire.reaction
import typing
__all__: list[str] = ['BARE', 'BareScreeningModel', 'ScreeningModel', 'ScreeningType', 'WEAK', 'WeakScreeningModel', 'selectScreeningModel']
class BareScreeningModel:
def __init__(self) -> None:
...
def calculateScreeningFactors(self, reactions: gridfire._gridfire.reaction.ReactionSet, species: collections.abc.Sequence[fourdst._phys.atomic.Species], Y: collections.abc.Sequence[typing.SupportsFloat], T9: typing.SupportsFloat, rho: typing.SupportsFloat) -> list[float]:
"""
Calculate the bare plasma screening factors. These are always 1.0 (no screening).
"""
class ScreeningModel:
pass
class ScreeningType:
"""
Members:
BARE
WEAK
"""
BARE: typing.ClassVar[ScreeningType] # value = <ScreeningType.BARE: 0>
WEAK: typing.ClassVar[ScreeningType] # value = <ScreeningType.WEAK: 1>
__members__: typing.ClassVar[dict[str, ScreeningType]] # value = {'BARE': <ScreeningType.BARE: 0>, 'WEAK': <ScreeningType.WEAK: 1>}
def __eq__(self, other: typing.Any) -> bool:
...
def __getstate__(self) -> int:
...
def __hash__(self) -> int:
...
def __index__(self) -> int:
...
def __init__(self, value: typing.SupportsInt) -> None:
...
def __int__(self) -> int:
...
def __ne__(self, other: typing.Any) -> bool:
...
def __repr__(self) -> str:
...
def __setstate__(self, state: typing.SupportsInt) -> None:
...
def __str__(self) -> str:
...
@property
def name(self) -> str:
...
@property
def value(self) -> int:
...
class WeakScreeningModel:
def __init__(self) -> None:
...
def calculateScreeningFactors(self, reactions: gridfire._gridfire.reaction.ReactionSet, species: collections.abc.Sequence[fourdst._phys.atomic.Species], Y: collections.abc.Sequence[typing.SupportsFloat], T9: typing.SupportsFloat, rho: typing.SupportsFloat) -> list[float]:
"""
Calculate the weak plasma screening factors using the Salpeter (1954) model.
"""
def selectScreeningModel(type: ScreeningType) -> ScreeningModel:
"""
Select a screening model based on the specified type. Returns the selected model.
"""
BARE: ScreeningType # value = <ScreeningType.BARE: 0>
WEAK: ScreeningType # value = <ScreeningType.WEAK: 1>
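The weak screening docstring cites Salpeter (1954). As a rough illustration of what such an enhancement factor looks like, here is a textbook-style sketch in plain Python; the 0.188 prefactor and the composition factor zeta are the commonly quoted cgs forms and are assumptions here, not read from gridfire's source:

```python
import math

def weak_screening_factor(z1, z2, zeta, rho, T):
    """Salpeter-style weak screening enhancement, f = exp(H12).
    zeta: composition factor sum((Z_i^2 + Z_i) * Y_i);
    rho in g/cm^3; T in K. Constants are textbook values, assumed here."""
    T6 = T / 1.0e6
    h12 = 0.188 * z1 * z2 * math.sqrt(zeta * rho) * T6 ** -1.5
    return math.exp(h12)

# Bare screening is the f == 1 limit; weak screening gives f >= 1 and
# fades toward 1 as the temperature rises.
f = weak_screening_factor(1, 1, zeta=1.0, rho=150.0, T=1.5e7)
```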


@@ -1,157 +0,0 @@
"""
GridFire numerical solver bindings
"""
from __future__ import annotations
import collections.abc
import fourdst._phys.atomic
import gridfire._gridfire.engine
import gridfire._gridfire.engine.scratchpads
import gridfire._gridfire.type
import types
import typing
__all__: list[str] = ['GridSolver', 'GridSolverContext', 'MultiZoneDynamicNetworkSolver', 'PointSolver', 'PointSolverContext', 'PointSolverTimestepContext', 'SingleZoneDynamicNetworkSolver', 'SolverContextBase']
class GridSolver(MultiZoneDynamicNetworkSolver):
def __init__(self, engine: gridfire._gridfire.engine.DynamicEngine, solver: SingleZoneDynamicNetworkSolver) -> None:
"""
Initialize the GridSolver object.
"""
def evaluate(self, solver_ctx: SolverContextBase, netIns: collections.abc.Sequence[gridfire._gridfire.type.NetIn]) -> list[gridfire._gridfire.type.NetOut]:
"""
Evaluate the network for a sequence of input zones using the wrapped single-zone solver.
"""
class GridSolverContext(SolverContextBase):
detailed_logging: bool
stdout_logging: bool
zone_completion_logging: bool
def __init__(self, ctx_template: gridfire._gridfire.engine.scratchpads.StateBlob) -> None:
...
@typing.overload
def clear_callback(self) -> None:
...
@typing.overload
def clear_callback(self, zone_idx: typing.SupportsInt) -> None:
...
def init(self) -> None:
...
def reset(self) -> None:
...
@typing.overload
def set_callback(self, callback: collections.abc.Callable[[...], None]) -> None:
...
@typing.overload
def set_callback(self, callback: collections.abc.Callable[[...], None], zone_idx: typing.SupportsInt) -> None:
...
class MultiZoneDynamicNetworkSolver:
def evaluate(self, solver_ctx: SolverContextBase, netIns: collections.abc.Sequence[gridfire._gridfire.type.NetIn]) -> list[gridfire._gridfire.type.NetOut]:
"""
Evaluate the network for multiple zones (using OpenMP if available).
"""
class PointSolver(SingleZoneDynamicNetworkSolver):
def __init__(self, engine: gridfire._gridfire.engine.DynamicEngine) -> None:
"""
Initialize the PointSolver object.
"""
def evaluate(self, solver_ctx: SolverContextBase, netIn: gridfire._gridfire.type.NetIn, display_trigger: bool = False, force_reinitialization: bool = False) -> gridfire._gridfire.type.NetOut:
"""
Evaluate the network for a single zone.
"""
class PointSolverContext:
callback: collections.abc.Callable[[PointSolverTimestepContext], None] | None
detailed_logging: bool
stdout_logging: bool
def __init__(self, engine_ctx: gridfire._gridfire.engine.scratchpads.StateBlob) -> None:
...
def clear_context(self) -> None:
...
def has_context(self) -> bool:
...
def init(self) -> None:
...
def init_context(self) -> None:
...
def reset_all(self) -> None:
...
def reset_cvode(self) -> None:
...
def reset_user(self) -> None:
...
@property
def J(self) -> _generic_SUNMatrix:
...
@property
def LS(self) -> _generic_SUNLinearSolver:
...
@property
def Y(self) -> _generic_N_Vector:
...
@property
def YErr(self) -> _generic_N_Vector:
...
@property
def abs_tol(self) -> float:
...
@abs_tol.setter
def abs_tol(self, arg1: typing.SupportsFloat) -> None:
...
@property
def cvode_mem(self) -> types.CapsuleType:
...
@property
def engine_ctx(self) -> gridfire._gridfire.engine.scratchpads.StateBlob:
...
@property
def num_steps(self) -> int:
...
@property
def rel_tol(self) -> float:
...
@rel_tol.setter
def rel_tol(self, arg1: typing.SupportsFloat) -> None:
...
@property
def sun_ctx(self) -> SUNContext_:
...
class PointSolverTimestepContext:
@property
def T9(self) -> float:
...
@property
def currentConvergenceFailures(self) -> int:
...
@property
def currentNonlinearIterations(self) -> int:
...
@property
def dt(self) -> float:
...
@property
def engine(self) -> gridfire._gridfire.engine.DynamicEngine:
...
@property
def last_step_time(self) -> float:
...
@property
def networkSpecies(self) -> list[fourdst._phys.atomic.Species]:
...
@property
def num_steps(self) -> int:
...
@property
def rho(self) -> float:
...
@property
def state(self) -> list[float]:
...
@property
def state_ctx(self) -> gridfire._gridfire.engine.scratchpads.StateBlob:
...
@property
def t(self) -> float:
...
class SingleZoneDynamicNetworkSolver:
def evaluate(self, solver_ctx: SolverContextBase, netIn: gridfire._gridfire.type.NetIn) -> gridfire._gridfire.type.NetOut:
"""
Evaluate the network for a single zone.
"""
class SolverContextBase:
pass


@@ -1,67 +0,0 @@
"""
GridFire type bindings
"""
from __future__ import annotations
import fourdst._phys.composition
import typing
__all__: list[str] = ['NetIn', 'NetOut']
class NetIn:
composition: fourdst._phys.composition.Composition
def __init__(self) -> None:
...
def __repr__(self) -> str:
...
@property
def density(self) -> float:
...
@density.setter
def density(self, arg0: typing.SupportsFloat) -> None:
...
@property
def dt0(self) -> float:
...
@dt0.setter
def dt0(self, arg0: typing.SupportsFloat) -> None:
...
@property
def energy(self) -> float:
...
@energy.setter
def energy(self, arg0: typing.SupportsFloat) -> None:
...
@property
def tMax(self) -> float:
...
@tMax.setter
def tMax(self, arg0: typing.SupportsFloat) -> None:
...
@property
def temperature(self) -> float:
...
@temperature.setter
def temperature(self, arg0: typing.SupportsFloat) -> None:
...
class NetOut:
def __repr__(self) -> str:
...
@property
def composition(self) -> fourdst._phys.composition.Composition:
...
@property
def dEps_dRho(self) -> float:
...
@property
def dEps_dT(self) -> float:
...
@property
def energy(self) -> float:
...
@property
def num_steps(self) -> int:
...
@property
def specific_neutrino_energy_loss(self) -> float:
...
@property
def specific_neutrino_flux(self) -> float:
...


@@ -1,18 +0,0 @@
"""
GridFire utility method bindings
"""
from __future__ import annotations
import fourdst._phys.composition
import gridfire._gridfire.engine
import gridfire._gridfire.engine.scratchpads
import typing
from . import hashing
__all__: list[str] = ['formatNuclearTimescaleLogString', 'hash_atomic', 'hash_reaction', 'hashing']
def formatNuclearTimescaleLogString(ctx: gridfire._gridfire.engine.scratchpads.StateBlob, engine: gridfire._gridfire.engine.DynamicEngine, Y: fourdst._phys.composition.Composition, T9: typing.SupportsFloat, rho: typing.SupportsFloat) -> str:
"""
Format a string for logging nuclear timescales based on temperature, density, and energy generation rate.
"""
def hash_atomic(a: typing.SupportsInt, z: typing.SupportsInt) -> int:
...
def hash_reaction(reaction: ...) -> int:
...


@@ -1,6 +0,0 @@
"""
module for gridfire hashing functions
"""
from __future__ import annotations
from . import reaction
__all__: list[str] = ['reaction']


@@ -1,12 +0,0 @@
"""
utility module for hashing gridfire reaction functions
"""
from __future__ import annotations
import typing
__all__: list[str] = ['mix_species', 'multiset_combine', 'splitmix64']
def mix_species(a: typing.SupportsInt, z: typing.SupportsInt) -> int:
...
def multiset_combine(acc: typing.SupportsInt, x: typing.SupportsInt) -> int:
...
def splitmix64(x: typing.SupportsInt) -> int:
...
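`splitmix64` is a standard 64-bit mixing function, commonly used exactly as here to build order-insensitive hashes of species and reactions. A plain-Python reference sketch using the canonical constants (assuming gridfire follows the standard SplitMix64 finalizer, which its source is not shown confirming):

```python
MASK64 = (1 << 64) - 1  # keep arithmetic in unsigned 64-bit range

def splitmix64(x: int) -> int:
    """Canonical SplitMix64 mixing step (Steele, Lea & Flood constants)."""
    z = (x + 0x9E3779B97F4A7C15) & MASK64
    z = ((z ^ (z >> 30)) * 0xBF58476D1CE4E5B9) & MASK64
    z = ((z ^ (z >> 27)) * 0x94D049BB133111EB) & MASK64
    return z ^ (z >> 31)

h = splitmix64(42)
```

A commutative combiner like `multiset_combine` can then fold such mixed values together so the result is independent of iteration order.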


@@ -1,4 +1,4 @@
[wrap-git]
url = https://github.com/4D-STAR/fourdst
revision = v0.9.19
revision = v0.9.21
depth = 1


@@ -5,10 +5,8 @@
#include <thread>
#include <format>
#include "gridfire/gridfire.h"
#include <cppad/utility/thread_alloc.hpp> // Required for parallel_setup
#include "fourdst/composition/composition.h"
#include "fourdst/logging/logging.h"
#include "fourdst/atomic/species.h"
#include "fourdst/composition/utils.h"
@@ -19,12 +17,16 @@
#include <clocale>
#include "gridfire/gridfire.h"
#include "fourdst/composition/composition.h"
#include "gridfire/utils/gf_omp.h"
#include <atomic>
#include <new>
#include <cstdlib>
static std::terminate_handler g_previousHandler = nullptr;
static std::vector<std::pair<double, std::unordered_map<std::string, std::pair<double, double>>>> g_callbackHistory;
static bool s_wrote_abundance_history = false;
void quill_terminate_handler();
gridfire::NetIn init(const double temp, const double rho, const double tMax) {
@@ -52,171 +54,9 @@ gridfire::NetIn init(const double temp, const double rho, const double tMax) {
return netIn;
}
void log_results(const gridfire::NetOut& netOut, const gridfire::NetIn& netIn) {
std::vector<fourdst::atomic::Species> logSpecies = {
fourdst::atomic::H_1,
fourdst::atomic::He_3,
fourdst::atomic::He_4,
fourdst::atomic::C_12,
fourdst::atomic::N_14,
fourdst::atomic::O_16,
fourdst::atomic::Ne_20,
fourdst::atomic::Mg_24
};
std::vector<double> initial;
std::vector<double> final;
std::vector<double> delta;
std::vector<double> fractional;
for (const auto& species : logSpecies) {
double initial_X = netIn.composition.getMassFraction(species);
double final_X = netOut.composition.getMassFraction(species);
double delta_X = final_X - initial_X;
double fractionalChange = (delta_X) / initial_X * 100.0;
initial.push_back(initial_X);
final.push_back(final_X);
delta.push_back(delta_X);
fractional.push_back(fractionalChange);
}
initial.push_back(0.0); // Placeholder for energy
final.push_back(netOut.energy);
delta.push_back(netOut.energy);
fractional.push_back(0.0); // Placeholder for energy
initial.push_back(0.0);
final.push_back(netOut.dEps_dT);
delta.push_back(netOut.dEps_dT);
fractional.push_back(0.0);
initial.push_back(0.0);
final.push_back(netOut.dEps_dRho);
delta.push_back(netOut.dEps_dRho);
fractional.push_back(0.0);
initial.push_back(0.0);
final.push_back(netOut.specific_neutrino_energy_loss);
delta.push_back(netOut.specific_neutrino_energy_loss);
fractional.push_back(0.0);
initial.push_back(0.0);
final.push_back(netOut.specific_neutrino_flux);
delta.push_back(netOut.specific_neutrino_flux);
fractional.push_back(0.0);
initial.push_back(netIn.composition.getMeanParticleMass());
final.push_back(netOut.composition.getMeanParticleMass());
delta.push_back(final.back() - initial.back());
fractional.push_back((final.back() - initial.back()) / initial.back() * 100.0);
std::vector<std::string> rowLabels = [&]() -> std::vector<std::string> {
std::vector<std::string> labels;
for (const auto& species : logSpecies) {
labels.emplace_back(species.name());
}
labels.emplace_back("ε");
labels.emplace_back("dε/dT");
labels.emplace_back("dε/dρ");
labels.emplace_back("Eν");
labels.emplace_back("Fν");
labels.emplace_back("<μ>");
return labels;
}();
gridfire::utils::Column<std::string> paramCol("Parameter", rowLabels);
gridfire::utils::Column<double> initialCol("Initial", initial);
gridfire::utils::Column<double> finalCol ("Final", final);
gridfire::utils::Column<double> deltaCol ("δ", delta);
gridfire::utils::Column<double> percentCol("% Change", fractional);
std::vector<std::unique_ptr<gridfire::utils::ColumnBase>> columns;
columns.push_back(std::make_unique<gridfire::utils::Column<std::string>>(paramCol));
columns.push_back(std::make_unique<gridfire::utils::Column<double>>(initialCol));
columns.push_back(std::make_unique<gridfire::utils::Column<double>>(finalCol));
columns.push_back(std::make_unique<gridfire::utils::Column<double>>(deltaCol));
columns.push_back(std::make_unique<gridfire::utils::Column<double>>(percentCol));
gridfire::utils::print_table("Simulation Results", columns);
}
void record_abundance_history_callback(const gridfire::solver::PointSolverTimestepContext& ctx) {
s_wrote_abundance_history = true;
const auto& engine = ctx.engine;
// std::unordered_map<std::string, std::pair<double, double>> abundances;
std::vector<double> Y;
for (const auto& species : engine.getNetworkSpecies(ctx.state_ctx)) {
const size_t sid = engine.getSpeciesIndex(ctx.state_ctx, species);
double y = N_VGetArrayPointer(ctx.state)[sid];
Y.push_back(y > 0.0 ? y : 0.0); // Regularize tiny negative abundances to zero
}
fourdst::composition::Composition comp(engine.getNetworkSpecies(ctx.state_ctx), Y);
std::unordered_map<std::string, std::pair<double, double>> abundances;
for (const auto& sp : comp | std::views::keys) {
abundances.emplace(std::string(sp.name()), std::make_pair(sp.mass(), comp.getMolarAbundance(sp)));
}
g_callbackHistory.emplace_back(ctx.t, abundances);
}
void save_callback_data(const std::string_view filename) {
std::set<std::string> unique_species;
for (const auto &abundances: g_callbackHistory | std::views::values) {
for (const auto &species_name: abundances | std::views::keys) {
unique_species.insert(species_name);
}
}
std::ofstream csvFile(filename.data(), std::ios::out);
csvFile << "t,";
size_t i = 0;
for (const auto& species_name : unique_species) {
csvFile << species_name;
if (i < unique_species.size() - 1) {
csvFile << ",";
}
i++;
}
csvFile << "\n";
for (const auto& [time, data] : g_callbackHistory) {
csvFile << time << ",";
size_t j = 0;
for (const auto& species_name : unique_species) {
if (!data.contains(species_name)) {
csvFile << "0.0";
} else {
csvFile << data.at(species_name).second;
}
if (j < unique_species.size() - 1) {
csvFile << ",";
}
++j;
}
csvFile << "\n";
}
csvFile.close();
}
void log_callback_data(const double temp) {
if (s_wrote_abundance_history) {
std::cout << "Saving abundance history to abundance_history.csv" << std::endl;
save_callback_data("abundance_history_" + std::to_string(temp) + ".csv");
}
}
void quill_terminate_handler()
{
log_callback_data(1.5e7);
quill::Backend::stop();
if (g_previousHandler)
g_previousHandler();
@@ -224,36 +64,36 @@ void quill_terminate_handler()
std::abort();
}
void callback_main(const gridfire::solver::PointSolverTimestepContext& ctx) {
record_abundance_history_callback(ctx);
}
int main() {
GF_PAR_INIT();
int main(int argc, char* argv[]) {
using namespace gridfire;
constexpr size_t breaks = 1;
double temp = 1.5e7;
double rho = 1.5e2;
double tMax = 3.1536e+16/breaks;
double rho = 1.6e2;
double tMax = 3e17;
double coupling_ratio = 5.0;
std::string output_filename = "coupling.dat";
CLI::App app("GridFire Test Coupling");
app.add_option("--temperature", temp, "Temperature in degrees")->default_val(std::format("{:5.2E}", temp));
app.add_option("--density", rho, "Density in Kg")->default_val(std::format("{:5.2E}", rho));
app.add_option("--tmax", tMax, "Maximum time in seconds")->default_val(std::format("{:5.2E}", tMax));
app.add_option("--coupling_ratio", coupling_ratio, "Coupling ratio for multiscale partitioning")->default_val(std::format("{:.2f}", coupling_ratio));
app.add_option("--output", output_filename, "Output filename for intermediate results")->default_val("coupling.dat");
CLI11_PARSE(app, argc, argv);
const NetIn netIn = init(temp, rho, tMax);
policy::MainSequencePolicy stellarPolicy(netIn.composition);
auto [engine, ctx_template] = stellarPolicy.construct();
std::println("Sandbox Engine Stack: {}", stellarPolicy);
std::println("Scratch Blob State: {}", *ctx_template);
auto base_engine = std::make_unique<engine::GraphEngine>(netIn.composition, 3);
auto base_blob = base_engine->constructStateBlob();
auto qse_engine = std::make_unique<engine::MultiscalePartitioningEngineView>(*base_engine);
auto blob = qse_engine->constructStateBlob(base_blob.get());
constexpr size_t nZones = 100;
std::array<NetIn, nZones> netIns;
for (size_t zone = 0; zone < nZones; ++zone) {
netIns[zone] = netIn;
netIns[zone].temperature = 1.5e7;
}
auto* state = engine::scratch::get_state<engine::scratch::MultiscalePartitioningEngineViewScratchPad, true>(*blob);
const solver::PointSolver localSolver(engine);
solver::GridSolverContext solverCtx(*ctx_template);
const solver::GridSolver gridSolver(engine, localSolver);
const solver::PointSolver localSolver(*base_engine);
solver::PointSolverContext solverCtx(*base_blob);
std::vector<NetOut> netOuts = gridSolver.evaluate(solverCtx, netIns | std::ranges::to<std::vector>());
}
auto result = localSolver.evaluate(solverCtx, netIn, false, false);
}


@@ -5,4 +5,5 @@
# Subdirectories for unit and integration tests
subdir('graphnet_sandbox')
subdir('flux_coupling')
subdir('extern')


@@ -6,7 +6,8 @@ from datetime import datetime
import os
import sys
from gridfire.solver import CVODETimestepContext
from gridfire.solver import PointSolverTimestepContext
from gridfire._gridfire.engine.scratchpads import StateBlob
import gridfire
class LogEntries(Enum):
@@ -23,15 +24,16 @@ class StepLogger:
self.num_steps : int = 0
self.steps : List[Dict[LogEntries, Any]] = []
def log_step(self, ctx : CVODETimestepContext):
def log_step(self, ctx: PointSolverTimestepContext):
comp_data: Dict[str, SupportsFloat] = {}
for species in ctx.engine.getNetworkSpecies():
sid = ctx.engine.getSpeciesIndex(species)
for species in ctx.engine.getNetworkSpecies(ctx.state_ctx):
sid = ctx.engine.getSpeciesIndex(ctx.state_ctx, species)
comp_data[species.name()] = ctx.state[sid]
entry : Dict[LogEntries, Any] = {
LogEntries.Step: ctx.num_steps,
LogEntries.t: ctx.t,
LogEntries.dt: ctx.dt,
LogEntries.eps: ctx.state[-1],
LogEntries.Composition: comp_data,
}
self.steps.append(entry)
@@ -43,6 +45,7 @@ class StepLogger:
LogEntries.Step.value: step[LogEntries.Step],
LogEntries.t.value: step[LogEntries.t],
LogEntries.dt.value: step[LogEntries.dt],
LogEntries.eps.value: step[LogEntries.eps],
LogEntries.Composition.value: step[LogEntries.Composition],
}
for step in self.steps
@@ -74,4 +77,4 @@ class StepLogger:
"FinalTime": final_step[LogEntries.t],
"FinalComposition": final_step[LogEntries.Composition],
}
return summary_data
return summary_data
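The StepLogger changes above append one record per solver timestep and serialize the whole history at the end. A minimal standalone sketch of that pattern (plain Python, no gridfire types; field names are illustrative):

```python
import json

class MiniStepLogger:
    """Collects one dict per solver timestep, then dumps them as JSON."""
    def __init__(self):
        self.steps = []

    def log_step(self, t, dt, composition):
        # In the real callback these values come from the timestep context.
        self.steps.append({"t": t, "dt": dt, "composition": dict(composition)})

    def to_json_str(self):
        return json.dumps({"NumSteps": len(self.steps), "Steps": self.steps})

logger = MiniStepLogger()
logger.log_step(0.0, 1e-12, {"H-1": 0.7})
logger.log_step(1e-12, 2e-12, {"H-1": 0.699})
payload = json.loads(logger.to_json_str())
```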


@@ -6,6 +6,8 @@ from fourdst.composition import CanonicalComposition
from fourdst.atomic import Species
from gridfire.type import NetIn
from logger import StepLogger
def rescale_composition(comp_ref : Composition, ZZs : float, Y_primordial : float = 0.248) -> Composition:
CC : CanonicalComposition = comp_ref.getCanonicalComposition()
@@ -61,13 +63,17 @@ def years_to_seconds(years: float) -> float:
def main():
C = init_composition()
netIn = init_netIn(2.75e6, 1.5e1, years_to_seconds(10e9), C)
netIn = init_netIn(1.5e7, 1.6e2, years_to_seconds(10e9), C)
policy = MainSequencePolicy(C)
construct = policy.construct()
solver = PointSolver(construct.engine)
solver_ctx = PointSolverContext(construct.scratch_blob)
results = solver.evaluate(solver_ctx, netIn, False, False)
print(results)
stepLogger = StepLogger()
solver_ctx.callback = lambda ctx: stepLogger.log_step(ctx)
solver.evaluate(solver_ctx, netIn, False, False)
stepLogger.to_json("test_single.json", TestName="test_single")
if __name__ == "__main__":
main()

tools/gf_bbn/main.cpp Normal file

@@ -0,0 +1,298 @@
// ReSharper disable CppUnusedIncludeDirective
#include <iostream>
#include <fstream>
#include <chrono>
#include <thread>
#include <format>
#include "gridfire/gridfire.h"
#include "fourdst/composition/composition.h"
#include "fourdst/logging/logging.h"
#include "fourdst/atomic/species.h"
#include "fourdst/composition/utils.h"
#include "quill/Logger.h"
#include "quill/Backend.h"
#include "CLI/CLI.hpp"
#include <clocale>
#include <cmath>
#include "gridfire/utils/gf_omp.h"
#include "nlohmann/json.hpp"
struct IntermediateResult {
double time;
fourdst::composition::Composition comp;
double current_energy;
double current_neutrino_loss_rate;
};
static std::vector<IntermediateResult> g_callbackHistory;
gridfire::NetIn init(const double temp, const double rho, const double tMax) {
std::setlocale(LC_ALL, "");
quill::Logger* logger = fourdst::logging::LogManager::getInstance().getLogger("log");
logger->set_log_level(quill::LogLevel::Info);
using namespace gridfire;
constexpr double XpXn = 7.17;
constexpr double Xn = 1.0 / (1.0 + XpXn);
constexpr double Xp = 1 - Xn;
const std::vector<double> X = {Xp, Xn};
const std::vector<std::string> symbols = {"H-1", "n-1"};
const fourdst::composition::Composition composition = fourdst::composition::buildCompositionFromMassFractions(symbols, X);
NetIn netIn;
netIn.composition = composition;
netIn.temperature = temp;
netIn.density = rho;
netIn.energy = 0;
netIn.tMax = tMax;
netIn.dt0 = 1e-12;
return netIn;
}
void log_results(const gridfire::NetOut& netOut, const gridfire::NetIn& netIn) {
std::vector<fourdst::atomic::Species> logSpecies = {
fourdst::atomic::H_1,
fourdst::atomic::He_3,
fourdst::atomic::He_4,
fourdst::atomic::C_12,
fourdst::atomic::N_14,
fourdst::atomic::O_16,
fourdst::atomic::Ne_20,
fourdst::atomic::Mg_24
};
std::vector<double> initial;
std::vector<double> final;
std::vector<double> delta;
std::vector<double> fractional;
for (const auto& species : logSpecies) {
double initial_X = netIn.composition.getMassFraction(species);
double final_X = netOut.composition.getMassFraction(species);
double delta_X = final_X - initial_X;
double fractionalChange = (delta_X) / initial_X * 100.0;
initial.push_back(initial_X);
final.push_back(final_X);
delta.push_back(delta_X);
fractional.push_back(fractionalChange);
}
initial.push_back(0.0); // Placeholder for energy
final.push_back(netOut.energy);
delta.push_back(netOut.energy);
fractional.push_back(0.0); // Placeholder for energy
initial.push_back(0.0);
final.push_back(netOut.dEps_dT);
delta.push_back(netOut.dEps_dT);
fractional.push_back(0.0);
initial.push_back(0.0);
final.push_back(netOut.dEps_dRho);
delta.push_back(netOut.dEps_dRho);
fractional.push_back(0.0);
initial.push_back(0.0);
final.push_back(netOut.specific_neutrino_energy_loss);
delta.push_back(netOut.specific_neutrino_energy_loss);
fractional.push_back(0.0);
initial.push_back(0.0);
final.push_back(netOut.specific_neutrino_flux);
delta.push_back(netOut.specific_neutrino_flux);
fractional.push_back(0.0);
initial.push_back(netIn.composition.getMeanParticleMass());
final.push_back(netOut.composition.getMeanParticleMass());
delta.push_back(final.back() - initial.back());
fractional.push_back((final.back() - initial.back()) / initial.back() * 100.0);
std::vector<std::string> rowLabels = [&]() -> std::vector<std::string> {
std::vector<std::string> labels;
for (const auto& species : logSpecies) {
labels.emplace_back(species.name());
}
labels.emplace_back("ε");
labels.emplace_back("dε/dT");
labels.emplace_back("dε/dρ");
labels.emplace_back("Eν");
labels.emplace_back("Fν");
labels.emplace_back("<μ>");
return labels;
}();
gridfire::utils::Column<std::string> paramCol("Parameter", rowLabels);
gridfire::utils::Column<double> initialCol("Initial", initial);
gridfire::utils::Column<double> finalCol ("Final", final);
gridfire::utils::Column<double> deltaCol ("δ", delta);
gridfire::utils::Column<double> percentCol("% Change", fractional);
std::vector<std::unique_ptr<gridfire::utils::ColumnBase>> columns;
columns.push_back(std::make_unique<gridfire::utils::Column<std::string>>(paramCol));
columns.push_back(std::make_unique<gridfire::utils::Column<double>>(initialCol));
columns.push_back(std::make_unique<gridfire::utils::Column<double>>(finalCol));
columns.push_back(std::make_unique<gridfire::utils::Column<double>>(deltaCol));
columns.push_back(std::make_unique<gridfire::utils::Column<double>>(percentCol));
gridfire::utils::print_table("Simulation Results", columns);
}
void record_abundance_history_callback(const gridfire::solver::PointSolverTimestepContext& ctx) {
const auto& engine = ctx.engine;
std::vector<double> Y;
for (const auto& species : engine.getNetworkSpecies(ctx.state_ctx)) {
const size_t sid = engine.getSpeciesIndex(ctx.state_ctx, species);
double y = N_VGetArrayPointer(ctx.state)[sid];
Y.push_back(y > 0.0 ? y : 0.0); // Regularize tiny negative abundances to zero
}
const fourdst::composition::Composition comp(engine.getNetworkSpecies(ctx.state_ctx), Y);
IntermediateResult stepResult;
stepResult.comp = comp;
stepResult.time = ctx.t;
stepResult.current_energy = ctx.current_total_energy;
stepResult.current_neutrino_loss_rate = ctx.current_neutrino_energy_loss_rate;
g_callbackHistory.push_back(stepResult);
}
void callback_main(const gridfire::solver::PointSolverTimestepContext& ctx) {
record_abundance_history_callback(ctx);
}
void save_callback(const std::string& filename) {
// Save to JSON
nlohmann::json j;
for (const auto& record : g_callbackHistory) {
nlohmann::json entry;
entry["time"] = record.time;
entry["current_energy"] = record.current_energy;
entry["current_neutrino_loss_rate"] = record.current_neutrino_loss_rate;
// make a sub-json for composition
nlohmann::json comp_json;
for (const auto& [species, abundance] : record.comp) {
comp_json[species.name()] = abundance;
}
entry["composition"] = comp_json;
j.push_back(entry);
}
std::ofstream ofs(filename);
ofs << j.dump(4);
}
double T9(const double age) {
return 10.0/std::sqrt(age);
}
double density(const double age) {
return 4e-5 * std::pow(T9(age), 3);
}
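The `T9(t)` and `density(t)` helpers above encode the radiation-era scalings T9 ≈ 10/√t and ρ ∝ T9³, and `main()` sweeps them on a geometric time grid (dt = h·t) with burn conditions averaged over each step. The same sweep in plain Python (constants follow the C++ above):

```python
import math

def T9(age_s):        # temperature in units of 1e9 K, age in seconds
    return 10.0 / math.sqrt(age_s)

def density(age_s):   # g/cm^3, proportional to T9^3 as in the C++ above
    return 4e-5 * T9(age_s) ** 3

def geometric_grid(t_start, t_max, h):
    """Each step is a fixed fraction h of the current time; burn conditions
    are the average of the step's endpoints, as in the C++ loop."""
    steps = []
    t = t_start
    while t < t_max:
        dt = h * t
        burn_T9 = 0.5 * (T9(t) + T9(t + dt))
        burn_rho = 0.5 * (density(t) + density(t + dt))
        steps.append((t, dt, burn_T9, burn_rho))
        t += dt
    return steps

grid = geometric_grid(180.0, 3600.0, 0.1)
```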
int main(int argc, char** argv) {
GF_PAR_INIT();
using namespace gridfire;
double tMax = 3600;
double h = 0.1;
CLI::App app("GridFire Quick BBN Test");
app.add_option("--tmax", tMax, "Maximum Time in seconds")->default_val(std::format("{:5.2E}", tMax));
app.add_option("--s", h, "Geometric Timestep Scale Factor")->default_val(std::format("{:5.2f}", h));
CLI11_PARSE(app, argc, argv);
NetIn netIn = init(0, 0, tMax);
const engine::GraphEngine engine(netIn.composition);
auto blob = std::make_unique<engine::scratch::StateBlob>();
blob->enroll<engine::scratch::GraphEngineScratchPad>();
auto* graph_engine_state = engine::scratch::get_state<engine::scratch::GraphEngineScratchPad, false>(*blob);
graph_engine_state->initialize(engine);
solver::PointSolverContext solver_ctx(*blob);
solver::PointSolver solver(engine);
solver_ctx.stdout_logging=true;
double current_time = 180;
nlohmann::json j;
nlohmann::json meta;
meta["tMax"] = tMax;
meta["tStart"] = current_time;
meta["h"] = h;
j.push_back(meta);
nlohmann::json steps;
while (current_time < tMax) {
nlohmann::json entry;
double current_dt = h * current_time;
double next_time = current_time + current_dt;
netIn.tMax = current_dt;
double current_temp = T9(current_time) * 1e9;
double next_temp = T9(next_time) * 1e9;
double burn_temp = (current_temp + next_temp)/2.0;
double current_density = density(current_time);
double next_density = density(next_time);
double burn_density = (current_density + next_density)/2.0;
netIn.temperature = burn_temp;
netIn.density = burn_density;
fourdst::composition::Composition initial_comp = netIn.composition;
NetOut result = solver.evaluate(solver_ctx, netIn);
netIn.composition = result.composition;
std::println("Time: {:5.2E} (+{:5.2E}), Burn Temp: {:5.2E}, Burn Density: {:5.2E}", current_time, current_dt, burn_temp, burn_density);
const fourdst::composition::Composition& comp = result.composition;
auto Xi = [&](const std::string& symbol) -> double {
if (!initial_comp.contains(symbol)) {
return 0;
}
return initial_comp.getMassFraction(symbol);
};
entry["t"] = current_time;
entry["T"] = burn_temp;
entry["D"] = burn_density;
entry["mu"] = comp.getMeanParticleMass();
auto lifetimes = engine.getSpeciesTimescales(*blob, comp, burn_temp, burn_density);
if (lifetimes.has_value()) {
entry["tau_n"] = lifetimes.value().at(fourdst::atomic::n_1);
}
for (const auto& [sp, finalY] : comp) {
double initialY = Xi(std::string(sp.name()));
entry[std::format("{}_f", sp.name())] = finalY;
entry[std::format("{}_i", sp.name())] = initialY;
}
steps.push_back(entry);
current_time += current_dt;
}
j.push_back(steps);
std::ofstream out("BBNResults.json");
out << j.dump(4);
}
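The driver above advances the burn on a geometric grid (each step spans `[t, t + h*t]`) along the radiation-era BBN track `T9(t) = 10/sqrt(t)`, `rho(t) = 4e-5 * T9^3`. A minimal standalone sketch of that schedule (no GridFire dependency; `geometric_schedule` is an illustrative helper name, not part of the tool):

```python
import math

def T9(age: float) -> float:
    # Radiation-era temperature track: T9 = 10 / sqrt(t), t in seconds
    return 10.0 / math.sqrt(age)

def density(age: float) -> float:
    # Density track used by the driver: rho = 4e-5 * T9^3 (g/cm^3)
    return 4e-5 * T9(age) ** 3

def geometric_schedule(t_start: float, t_max: float, h: float):
    # Mirror the C++ loop: each step spans [t, t + h*t] and burns at the
    # time-averaged midpoint temperature and density.
    t = t_start
    while t < t_max:
        dt = h * t
        t_next = t + dt
        burn_T = 0.5 * (T9(t) + T9(t_next)) * 1e9        # K
        burn_rho = 0.5 * (density(t) + density(t_next))  # g/cm^3
        yield t, dt, burn_T, burn_rho
        t = t_next

steps = list(geometric_schedule(180.0, 3600.0, 0.1))
```

With the defaults used above (180 s to 3600 s, h = 0.1) this yields 32 geometrically spaced steps.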

tools/gf_bbn/meson.build

@@ -0,0 +1 @@
executable('gf_bbn', 'main.cpp', dependencies: [gridfire_dep, cli11_dep])


@@ -21,128 +21,26 @@
#include "gridfire/utils/gf_omp.h"
#include "nlohmann/json.hpp"
static std::terminate_handler g_previousHandler = nullptr;
static std::vector<std::pair<double, std::unordered_map<std::string, std::pair<double, double>>>> g_callbackHistory;
static bool s_wrote_abundance_history = false;
void quill_terminate_handler();
struct IntermediateResult {
double time{};
fourdst::composition::Composition comp;
gridfire::reaction::ReactionSet reactions;
std::vector<double> reaction_flows;
gridfire::reaction::ReactionSet inactive_reactions;
std::vector<double> inactive_reaction_flows;
double current_energy{};
double current_neutrino_loss_rate{};
};
using namespace fourdst::composition;
Composition rescale(const Composition& comp, double target_X, double target_Z) {
// 1. Validate inputs
if (target_X < 0.0 || target_Z < 0.0 || (target_X + target_Z) > 1.0 + 1e-14) {
throw std::invalid_argument("Target mass fractions X and Z must be non-negative and sum to <= 1.0");
}
// Force high precision for the target Y to ensure X+Y+Z = 1.0 exactly in our logic
long double ld_target_X = static_cast<long double>(target_X);
long double ld_target_Z = static_cast<long double>(target_Z);
long double ld_target_Y = 1.0L - ld_target_X - ld_target_Z;
// Clamp Y to 0 if it dipped slightly below due to precision (e.g. X+Z=1.0000000001)
if (ld_target_Y < 0.0L) ld_target_Y = 0.0L;
// 2. Manually calculate current Mass Totals (bypass getCanonicalComposition to avoid crashes)
long double total_mass_H = 0.0L;
long double total_mass_He = 0.0L;
long double total_mass_Z = 0.0L;
// We need to iterate and identify species types manually
// Standard definition: H (z=1), He (z=2), Metals (z>2)
// Note: We use long double accumulators to prevent summation drift
for (const auto& [spec, molar_abundance] : comp) {
// Retrieve atomic properties.
// Note: usage assumes fourdst::atomic::Species has .z() and .mass()
// consistent with the provided composition.cpp
int z = spec.z();
double a = spec.mass();
long double mass_contribution = static_cast<long double>(molar_abundance) * static_cast<long double>(a);
if (z == 1) {
total_mass_H += mass_contribution;
} else if (z == 2) {
total_mass_He += mass_contribution;
} else {
total_mass_Z += mass_contribution;
}
}
long double total_mass_current = total_mass_H + total_mass_He + total_mass_Z;
// Edge case: Empty composition
if (total_mass_current <= 0.0L) {
// An empty input composition is returned unchanged; a populated composition with zero total mass is an error.
if (comp.size() == 0) return comp;
throw std::runtime_error("Input composition has zero total mass.");
}
// 3. Calculate Scaling Factors
// Factor = (Target_Mass_Fraction / Old_Mass_Fraction)
// = (Target_Mass_Fraction) / (Old_Group_Mass / Total_Mass)
// = (Target_Mass_Fraction * Total_Mass) / Old_Group_Mass
long double scale_H = 0.0L;
long double scale_He = 0.0L;
long double scale_Z = 0.0L;
if (ld_target_X > 1e-16L) {
if (total_mass_H <= 1e-19L) {
throw std::runtime_error("Cannot rescale Hydrogen to " + std::to_string(target_X) +
" because input has no Hydrogen.");
}
scale_H = (ld_target_X * total_mass_current) / total_mass_H;
}
if (ld_target_Y > 1e-16L) {
if (total_mass_He <= 1e-19L) {
throw std::runtime_error("Cannot rescale Helium to " + std::to_string((double)ld_target_Y) +
" because input has no Helium.");
}
scale_He = (ld_target_Y * total_mass_current) / total_mass_He;
}
if (ld_target_Z > 1e-16L) {
if (total_mass_Z <= 1e-19L) {
throw std::runtime_error("Cannot rescale Metals to " + std::to_string(target_Z) +
" because input has no Metals.");
}
scale_Z = (ld_target_Z * total_mass_current) / total_mass_Z;
}
// 4. Apply Scaling and Construct New Vectors
std::vector<fourdst::atomic::Species> new_species;
std::vector<double> new_abundances;
new_species.reserve(comp.size());
new_abundances.reserve(comp.size());
for (const auto& [spec, abundance] : comp) {
new_species.push_back(spec);
long double factor = 0.0L;
int z = spec.z();
if (z == 1) {
factor = scale_H;
} else if (z == 2) {
factor = scale_He;
} else {
factor = scale_Z;
}
// Calculate new abundance in long double then cast back
long double new_val_ld = static_cast<long double>(abundance) * factor;
new_abundances.push_back(static_cast<double>(new_val_ld));
}
return Composition(new_species, new_abundances);
}
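The group-wise rescaling above (scale all H isotopes to hit X, all He isotopes to hit Y = 1 - X - Z, metals to hit Z) is easy to sanity-check in plain Python. A minimal sketch, with hypothetical `(z, a, molar_abundance)` tuples standing in for the fourdst species objects:

```python
def rescale(species, target_X, target_Z):
    # species: list of (z, a, molar_abundance) tuples.
    # Mirrors the C++ logic: per-group scale = target_fraction * total_mass / group_mass.
    if target_X < 0 or target_Z < 0 or target_X + target_Z > 1 + 1e-14:
        raise ValueError("X and Z must be non-negative and sum to <= 1")
    target_Y = max(1.0 - target_X - target_Z, 0.0)
    group = lambda z: "H" if z == 1 else ("He" if z == 2 else "Z")
    mass = {"H": 0.0, "He": 0.0, "Z": 0.0}
    for z, a, y in species:
        mass[group(z)] += y * a
    total = sum(mass.values())
    targets = {"H": target_X, "He": target_Y, "Z": target_Z}
    # A nonzero target for an empty group raises ZeroDivisionError,
    # analogous to the C++ throws above.
    scale = {g: (targets[g] * total / mass[g]) if targets[g] > 1e-16 else 0.0
             for g in mass}
    return [y * scale[group(z)] for z, a, y in species]

# Pure H/He mix (X = 0.75, Y = 0.25) rescaled to X = 0.5
out = rescale([(1, 1.0, 0.75), (2, 4.0, 0.0625)], 0.5, 0.0)
```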
static std::vector<IntermediateResult> g_callbackHistory;
gridfire::NetIn init(const double temp, const double rho, const double tMax) {
std::setlocale(LC_ALL, "");
g_previousHandler = std::set_terminate(quill_terminate_handler);
quill::Logger* logger = fourdst::logging::LogManager::getInstance().getLogger("log");
logger->set_log_level(quill::LogLevel::Info);
logger->set_log_level(quill::LogLevel::TraceL2);
using namespace gridfire;
const std::vector<double> X = {0.7081145999999999, 2.94e-5, 0.276, 0.003, 0.0011, 9.62e-3, 1.62e-3, 5.16e-4};
@@ -255,9 +153,7 @@ void log_results(const gridfire::NetOut& netOut, const gridfire::NetIn& netIn) {
void record_abundance_history_callback(const gridfire::solver::PointSolverTimestepContext& ctx) {
s_wrote_abundance_history = true;
const auto& engine = ctx.engine;
// std::unordered_map<std::string, std::pair<double, double>> abundances;
std::vector<double> Y;
for (const auto& species : engine.getNetworkSpecies(ctx.state_ctx)) {
const size_t sid = engine.getSpeciesIndex(ctx.state_ctx, species);
@@ -265,80 +161,84 @@ void record_abundance_history_callback(const gridfire::solver::PointSolverTimest
Y.push_back(y > 0.0 ? y : 0.0); // Regularize tiny negative abundances to zero
}
fourdst::composition::Composition comp(engine.getNetworkSpecies(ctx.state_ctx), Y);
const fourdst::composition::Composition comp(engine.getNetworkSpecies(ctx.state_ctx), Y);
IntermediateResult stepResult;
stepResult.comp = comp;
stepResult.time = ctx.t;
stepResult.current_energy = ctx.current_total_energy;
stepResult.current_neutrino_loss_rate = ctx.current_neutrino_energy_loss_rate;
stepResult.reactions = engine.getNetworkReactions(ctx.state_ctx);
std::unordered_map<std::string, std::pair<double, double>> abundances;
for (const auto& sp : comp | std::views::keys) {
abundances.emplace(std::string(sp.name()), std::make_pair(sp.mass(), comp.getMolarAbundance(sp)));
for (const auto& reactionPtr : stepResult.reactions) {
double flow = engine.calculateMolarReactionFlow(ctx.state_ctx, *reactionPtr, comp, ctx.T9, ctx.rho);
stepResult.reaction_flows.push_back(flow);
}
g_callbackHistory.emplace_back(ctx.t, abundances);
stepResult.inactive_reactions = engine.getInactiveNetworkReactions(ctx.state_ctx);
for (const auto& reactionPtr : stepResult.inactive_reactions) {
double flow = engine.getInactiveReactionMolarReactionFlow(ctx.state_ctx, *reactionPtr, comp, ctx.T9, ctx.rho);
stepResult.inactive_reaction_flows.push_back(flow);
}
g_callbackHistory.push_back(stepResult);
}
void save_callback_data(const std::string_view filename) {
std::set<std::string> unique_species;
for (const auto &abundances: g_callbackHistory | std::views::values) {
for (const auto &species_name: abundances | std::views::keys) {
unique_species.insert(species_name);
}
}
std::ofstream csvFile(filename.data(), std::ios::out);
csvFile << "t,";
size_t i = 0;
for (const auto& species_name : unique_species) {
csvFile << species_name;
if (i < unique_species.size() - 1) {
csvFile << ",";
}
i++;
}
csvFile << "\n";
for (const auto& [time, data] : g_callbackHistory) {
csvFile << time << ",";
size_t j = 0;
for (const auto& species_name : unique_species) {
if (!data.contains(species_name)) {
csvFile << "0.0";
} else {
csvFile << data.at(species_name).second;
}
if (j < unique_species.size() - 1) {
csvFile << ",";
}
++j;
}
csvFile << "\n";
}
csvFile.close();
}
void log_callback_data(const double temp) {
if (s_wrote_abundance_history) {
std::cout << "Saving abundance history to abundance_history.csv" << std::endl;
save_callback_data("abundance_history_" + std::to_string(temp) + ".csv");
}
}
void quill_terminate_handler()
{
log_callback_data(1.5e7);
quill::Backend::stop();
if (g_previousHandler)
g_previousHandler();
else
std::abort();
}
void callback_main(const gridfire::solver::PointSolverTimestepContext& ctx) {
record_abundance_history_callback(ctx);
}
void save_callback(const std::string& filename) {
// Save to JSON
nlohmann::json j;
for (const auto& record : g_callbackHistory) {
nlohmann::json entry;
entry["time"] = record.time;
entry["current_energy"] = record.current_energy;
entry["current_neutrino_loss_rate"] = record.current_neutrino_loss_rate;
// make a sub-json for composition
nlohmann::json comp_json;
for (const auto& [species, abundance] : record.comp) {
comp_json[species.name()] = abundance;
}
entry["composition"] = comp_json;
entry["reactions"] = nlohmann::json::array();
for (const auto& [reaction, flow] : std::views::zip(record.reactions, record.reaction_flows)) {
nlohmann::json reaction_info;
reaction_info["id"] = reaction->id();
reaction_info["flow"] = flow;
reaction_info["species"] = nlohmann::json::array();
reaction_info["Q"] = reaction->qValue();
for (const auto& sp : reaction->all_species()) {
nlohmann::json species_info;
species_info["name"] = sp.name();
species_info["stoichiometry"] = reaction->stoichiometry(sp);
reaction_info["species"].push_back(species_info);
}
entry["reactions"].push_back(reaction_info);
}
entry["inactive_reactions"] = nlohmann::json::array();
for (const auto& [reaction, flow] : std::views::zip(record.inactive_reactions, record.inactive_reaction_flows)) {
nlohmann::json reaction_info;
reaction_info["id"] = reaction->id();
reaction_info["flow"] = flow;
reaction_info["species"] = nlohmann::json::array();
reaction_info["Q"] = reaction->qValue();
for (const auto& sp : reaction->all_species()) {
nlohmann::json species_info;
species_info["name"] = sp.name();
species_info["stoichiometry"] = reaction->stoichiometry(sp);
reaction_info["species"].push_back(species_info);
}
entry["inactive_reactions"].push_back(reaction_info);
}
j.push_back(entry);
}
std::ofstream ofs(filename);
ofs << j.dump(4);
}
int main(int argc, char** argv) {
GF_PAR_INIT();
using namespace gridfire;
@@ -346,12 +246,18 @@ int main(int argc, char** argv) {
double temp = 1.5e7;
double rho = 1.5e2;
double tMax = 3.1536e+16;
bool save_intermediate_results = false;
bool display_trigger = false;
std::string output_filename = "abundance_history.json";
CLI::App app("GridFire Quick CLI Test");
app.add_option("--temp", temp, "Initial Temperature")->default_val(std::format("{:5.2E}", temp));
app.add_option("--rho", rho, "Initial Density")->default_val(std::format("{:5.2E}", rho));
app.add_option("--tmax", tMax, "Maximum Time")->default_val(std::format("{:5.2E}", tMax));
app.add_option("--save_intermediate_results", save_intermediate_results, "Save Intermediate Results")->default_val("false");
app.add_option("--output", output_filename, "Output filename for intermediate results")->default_val("abundance_history.json");
app.add_option("--display_trigger_explanations", display_trigger, "Display trigger explanations during run")->default_val("false");
CLI11_PARSE(app, argc, argv);
NetIn netIn = init(temp, rho, tMax);
@@ -361,7 +267,14 @@ int main(int argc, char** argv) {
solver::PointSolverContext solver_context(*ctx_template);
solver::PointSolver solver(engine);
if (save_intermediate_results) {
solver_context.callback = solver::TimestepCallback(callback_main);
}
NetOut result = solver.evaluate(solver_context, netIn);
NetOut result = solver.evaluate(solver_context, netIn, display_trigger);
log_results(result, netIn);
if (save_intermediate_results) {
save_callback(output_filename);
}
}
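`save_callback_data` above writes a CSV whose columns are the union of all species seen over the run, padding snapshots that lack a species with 0.0. The same pattern in Python (species names here are illustrative):

```python
import csv
import io

def write_history(history, out):
    # history: list of (time, {species_name: abundance}) snapshots.
    # Columns are the sorted union of species over the whole run; snapshots
    # missing a species are padded with 0.0, as in the C++ writer.
    species = sorted({name for _, abund in history for name in abund})
    w = csv.writer(out)
    w.writerow(["t"] + species)
    for t, abund in history:
        w.writerow([t] + [abund.get(name, 0.0) for name in species])

buf = io.StringIO()
write_history([(0.0, {"h1": 0.7}), (1.0, {"h1": 0.69, "he4": 0.28})], buf)
```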


@@ -2,4 +2,5 @@ if get_option('build_tools')
subdir('gf_config')
subdir('gf_quick')
subdir('gf_multi')
endif
subdir('gf_bbn')
endif


@@ -0,0 +1,20 @@
import pynucastro as pyna
def main():
lib = pyna.ReacLibLibrary()
filtered_rates = []
for rate in lib.get_rates():
if all(nuc.Z <= 26 for nuc in rate.reactants):
filtered_rates.append(rate)
out_file = "reaclib_pynucastro_latest_Z26.dat"
with open(out_file, "w") as f:
for rate in filtered_rates:
rate.write_to_file(f)
if __name__ == "__main__":
main()


@@ -58,6 +58,18 @@ those changes get upstreamed into this utility script. It is a key development g
that all of our tools are well documented and easy for non `SERiF` developers
to use.
## Pynucastro rates
If you have pynucastro installed, you can use the `gen_rates.py` script to generate
a rate file dump. Currently the script filters rates by Z, but you can replace this with
arbitrary selection logic. After generating the dump, run it through `format.py` with `-f bin` and `-o reactions.bin`,
then through the bin-to-header script:
```bash
python ../bin_to_header.py reactions.bin reactions_data.h raw_reaction_data
```
Finally, copy the resulting `reactions_data.h` file to `GridFire/src/include/gridfire/reactions`.
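`bin_to_header.py` embeds the binary rate dump into a C header as a byte array. If you ever need to reimplement the conversion, a minimal sketch follows; the exact formatting emitted by the real script may differ:

```python
def bin_to_header(data: bytes, symbol: str) -> str:
    # Emit a C header declaring `symbol` as an unsigned char array
    # plus a companion length constant.
    body = ",".join(str(b) for b in data)
    return (
        "#pragma once\n"
        f"static const unsigned char {symbol}[] = {{{body}}};\n"
        f"static const unsigned int {symbol}_len = {len(data)};\n"
    )

header = bin_to_header(b"\x01\x02", "raw_reaction_data")
```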
## Citations
REACLIB:
- Rauscher, T., Heger, A., Hoffman, R. D., & Woosley, S. E. 2010, ApJS, 189, 240.

File diff suppressed because it is too large


@@ -0,0 +1,472 @@
import numpy as np
import matplotlib.pyplot as plt
import scipy.integrate
import pynucastro as pyna
import os
import sys
import importlib.util
import time
import matplotlib.lines as mlines
import re
import json
import argparse
from fourdst.composition import Composition
from gridfire.type import NetIn
from gridfire.engine import GraphEngine
from gridfire.solver import PointSolver, PointSolverContext
from tqdm import tqdm
from fourdst.composition.utils import buildCompositionFromMassFractions
def T9(age):
return 10.0 / np.sqrt(age)
def get_density(age):
return 4e-5 * (T9(age) ** 3)
def get_pyna_rate(my_rate_str, library):
match = re.match(r"([a-zA-Z0-9]+)\(([^,]*),([^)]*)\)(.*)", my_rate_str)
if not match:
print(f"Could not parse string format: {my_rate_str}")
return []
target = match.group(1)
projectile = match.group(2)
ejectiles = match.group(3)
product = match.group(4)
def expand_species(s_str):
if not s_str or s_str.strip() == "":
return []
parts = s_str.split()
expanded = []
for p in parts:
if p == 'g':
continue
mult_match = re.match(r"^(\d+)([a-zA-Z0-9]+)$", p)
if mult_match:
count = int(mult_match.group(1))
spec = mult_match.group(2)
else:
count = 1
spec = p
if spec == 'g':
continue
if spec == 'a': spec = 'he4'
expanded.extend([spec] * count)
return expanded
reactants_str = [target] + expand_species(projectile)
products_str = expand_species(ejectiles) + [product]
try:
r_nuc = [pyna.Nucleus(r) for r in reactants_str]
p_nuc = [pyna.Nucleus(p) for p in products_str]
except Exception as e:
print(f"Error converting nuclei for {my_rate_str}: {e}")
return []
rates = library.get_rate_by_nuclei(r_nuc, p_nuc)
if rates:
if not isinstance(rates, list):
return [rates]
return rates
r_nuc_names = sorted([str(n) for n in r_nuc])
p_nuc_names = sorted([str(n) for n in p_nuc])
ignore_list = ['e-', 'e+', 'g', 'nu', 'anu']
matched_rates = []
for rate in library.get_rates():
lib_r_names = sorted([str(n) for n in rate.reactants if str(n) not in ignore_list])
lib_p_names = sorted([str(n) for n in rate.products if str(n) not in ignore_list])
if r_nuc_names == lib_r_names and p_nuc_names == lib_p_names:
matched_rates.append(rate)
return matched_rates
def load_network_module(filepath):
module_name = os.path.basename(filepath).replace(".py", "")
if module_name in sys.modules:
del sys.modules[module_name]
spec = importlib.util.spec_from_file_location(module_name, filepath)
if spec is None:
raise FileNotFoundError(f"Error: could not find module at {filepath}")
network_module = importlib.util.module_from_spec(spec)
sys.modules[module_name] = network_module
spec.loader.exec_module(network_module)
return network_module
def main(args):
tMax = 3600.0
h = 0.01
current_time = 180.0
XpXn = 7.17
Xn = 1.0 / (1.0 + XpXn)
Xp = 1.0 - Xn
comp: Composition = buildCompositionFromMassFractions(["H-1", "n-1"], [Xp, Xn])
netIn = NetIn()
netIn.composition = comp
netIn.dt0 = 1e-12
if args.depth is not None:
print(f"Initializing GridFire GraphEngine with restricted depth = {args.depth}")
engine = GraphEngine(comp, args.depth)
else:
print("Initializing full-depth GridFire GraphEngine (Note: pynucastro's Numba JIT compilation may take a long time; set the environment variable NUMBA_DISABLE_JIT=1 to disable it. This increases per-timestep cost but may still be faster overall for large networks because the upfront compilation is skipped.)")
engine = GraphEngine(comp)
blob = engine.constructStateBlob()
solver_ctx = PointSolverContext(blob)
solver_ctx.stdout_logging = False
solver = PointSolver(engine)
gf_initial_Y = {}
for sp in engine.getNetworkSpecies(solver_ctx.engine_ctx):
if comp.contains(sp):
gf_initial_Y[sp.name()] = comp.getMolarAbundance(sp)
else:
gf_initial_Y[sp.name()] = 0.0
gf_time = []
gf_results = {}
step_conditions = []
gf_start_time = time.time()
gf_current_time = current_time
total_steps = int(np.ceil(np.log(tMax / current_time) / np.log(1 + h)))
with tqdm(total=total_steps, desc="GridFire BBN", unit="step") as pbar:
while gf_current_time < tMax:
current_dt = h * gf_current_time
next_time = gf_current_time + current_dt
burn_temp = (T9(gf_current_time) + T9(next_time)) / 2.0 * 1e9
burn_density = (get_density(gf_current_time) + get_density(next_time)) / 2.0
netIn.temperature = burn_temp
netIn.density = burn_density
netIn.tMax = current_dt
netOut = solver.evaluate(solver_ctx, netIn)
netIn.composition = netOut.composition
pbar.update(1)
pbar.set_postfix(t=f"{gf_current_time:.2e}", T=f"{burn_temp:.2e}", rho=f"{burn_density:.2e}")
step_conditions.append({
"dt": current_dt,
"T": burn_temp,
"rho": burn_density,
"t": gf_current_time
})
gf_time.append(gf_current_time)
for sp in engine.getNetworkSpecies(solver_ctx.engine_ctx):
name = sp.name()
if name not in gf_results:
gf_results[name] = []
gf_results[name].append(netOut.composition.getMolarAbundance(sp))
gf_current_time += current_dt
gf_end_time = time.time()
print(f"GridFire integration finished in {gf_end_time - gf_start_time:.4f} seconds.")
print("Building Pynucastro BBN Network...")
reaclib_library = pyna.ReacLibLibrary()
rate_names = [r.id().replace("e+","").replace("e-","").replace(", ", ",") for r in engine.getNetworkReactions(solver_ctx.engine_ctx)]
goodRates = []
missingRates = []
skipped_photo_rates = 0
pyna_rate_mapping = {}
import io
import contextlib
for r_str in rate_names:
pyna_rates_for_reaction = []
with contextlib.redirect_stdout(io.StringIO()), contextlib.redirect_stderr(io.StringIO()):
try:
res = reaclib_library.get_rate_by_name(r_str)
if res is not None:
if isinstance(res, list):
pyna_rates_for_reaction.extend(res)
else:
pyna_rates_for_reaction.append(res)
except Exception:
pass
if not pyna_rates_for_reaction:
res_nuc = get_pyna_rate(r_str, reaclib_library)
if res_nuc:
if isinstance(res_nuc, list):
pyna_rates_for_reaction.extend(res_nuc)
else:
pyna_rates_for_reaction.append(res_nuc)
if pyna_rates_for_reaction:
pyna_rate_mapping[r_str] = pyna_rates_for_reaction
for rate in pyna_rates_for_reaction:
if args.filter_photo:
is_photo_rate = any(str(r).lower() in ['g', 'gamma'] for r in rate.reactants)
if is_photo_rate:
skipped_photo_rates += 1
continue
goodRates.append(rate)
else:
missingRates.append(r_str)
if missingRates:
print(f"Warning: Could not map {len(missingRates)} rates to Pynucastro (likely absent from default ReacLib).")
print(f"Missing sample: {missingRates[:10]}...")
if args.filter_photo:
print(f"Info: Skipped {skipped_photo_rates} photodisintegration rates due to --filter-photo flag.")
print("--- Evaluating reaction rates over all temperatures ---")
gf_rates_history = {}
py_rates_history = {}
gf_rate_labels = {}
py_rate_labels = {}
for reaction in engine.getNetworkReactions(solver_ctx.engine_ctx):
r_str = reaction.id().replace("e+","").replace("e-","").replace(", ", ",")
gf_rates_history[r_str] = []
py_rates_history[r_str] = []
try:
gf_rate_labels[r_str] = reaction.sources()
except AttributeError:
try:
gf_rate_labels[r_str] = reaction.sourceLabel()
except AttributeError:
gf_rate_labels[r_str] = "Unknown"
if r_str in pyna_rate_mapping:
py_rate_labels[r_str] = [getattr(pr, 'label', 'Unknown') for pr in pyna_rate_mapping[r_str]]
else:
py_rate_labels[r_str] = []
for step in tqdm(step_conditions, desc="Calculating Rates", unit="step"):
T9_val = step["T"] / 1e9
T_K = step["T"]
for reaction in engine.getNetworkReactions(solver_ctx.engine_ctx):
r_str = reaction.id().replace("e+","").replace("e-","").replace(", ", ",")
gf_rate_val = 0.0
try:
gf_rate_val = reaction.calculate_rate(T9_val, 0, [])
except Exception:
try:
gf_rate_val = reaction.calculate_rate(T9_val, 0, 0, 0, [], dict())
except Exception:
pass
gf_rates_history[r_str].append(gf_rate_val)
py_rate_val = 0.0
if r_str in pyna_rate_mapping:
for pr in pyna_rate_mapping[r_str]:
py_rate_val += pr.eval(T_K)
py_rates_history[r_str].append(py_rate_val)
print("--- Rate Comparison Summary ---")
threshold = 1e-4
mismatches = {}
for r_str in gf_rates_history:
gf_arr = np.array(gf_rates_history[r_str])
py_arr = np.array(py_rates_history[r_str])
with np.errstate(divide='ignore', invalid='ignore'):
denom = np.where(py_arr != 0, py_arr, gf_arr)
denom = np.where(denom == 0, 1e-30, denom)
rel_diffs = np.abs(gf_arr - py_arr) / denom
max_diff = np.max(rel_diffs)
if max_diff > threshold:
max_idx = np.argmax(rel_diffs)
mismatches[r_str] = {
"max_diff": max_diff,
"temp": step_conditions[max_idx]["T"],
"gf_val": gf_arr[max_idx],
"py_val": py_arr[max_idx]
}
if mismatches:
print(f"Found {len(mismatches)} rates with differences > {threshold:.2%}")
for r_str, info in mismatches.items():
gf_lbl = gf_rate_labels.get(r_str, 'Unknown')
py_lbl = py_rate_labels.get(r_str, [])
print(f"{r_str:20}: Max Diff = {info['max_diff']:.2%}, at T = {info['temp']:.2e} K")
print(f" GF = {info['gf_val']:.4e} (Source: {gf_lbl})")
print(f" Py = {info['py_val']:.4e} (Sources: {py_lbl})")
else:
print(f"All rates match within the {threshold:.2%} threshold across all temperatures.")
print("-------------------------------")
pynet = pyna.PythonNetwork(rates=goodRates)
network_file = "pynuc_bbn_network.py"
pynet.write_network(network_file)
net = load_network_module(network_file)
mapping = {
"H-1": ("p", "tab:blue"),
"n-1": ("n", "tab:orange"),
"He-4": ("he4", "tab:green"),
"H-2": ("d", "tab:red"),
"H-3": ("t", "tab:purple"),
"He-3": ("he3", "tab:brown"),
"Li-7": ("li7", "tab:pink"),
"Be-7": ("be7", "tab:gray")
}
Y0 = np.zeros(net.nnuc)
for i, nuc in enumerate(pynet.get_nuclei()):
nuc_name = str(nuc)
gf_name = None
for gf, (py, _) in mapping.items():
if py == nuc_name:
gf_name = gf
break
if not gf_name:
match = re.match(r"([a-zA-Z]+)(\d+)", nuc_name)
if match:
gf_name = f"{match.group(1).capitalize()}-{match.group(2)}"
if gf_name and gf_name in gf_initial_Y:
Y0[i] = gf_initial_Y[gf_name]
pyna_time = []
pyna_nuc_names = [str(n) for n in pynet.get_nuclei()]
pyna_results = {nuc: [] for nuc in pyna_nuc_names}
pyna_start_time = time.time()
for step in tqdm(step_conditions, unit="step", desc="pynucastro Integration"):
sol = scipy.integrate.solve_ivp(
net.rhs,
[0, step["dt"]],
Y0,
args=(step["rho"], step["T"]),
method="Radau",
jac=net.jacobian,
rtol=1e-8,
atol=1e-20
)
Y0 = sol.y[:, -1]
pyna_time.append(step["t"])
for j in range(net.nnuc):
nuc_name = str(pynet.get_nuclei()[j])
if nuc_name in pyna_results:
pyna_results[nuc_name].append(Y0[j])
pyna_end_time = time.time()
print(f"Pynucastro integration finished in {pyna_end_time - pyna_start_time:.4f} seconds.")
export_data = {
"metadata": {
"tMax": tMax,
"h": h,
"initial_time": current_time,
"initial_XpXn_ratio": XpXn,
"initial_mass_fractions": {
"Xp": Xp,
"Xn": Xn
},
"execution_times_seconds": {
"gridfire": gf_end_time - gf_start_time,
"pynucastro": pyna_end_time - pyna_start_time
},
"missing_pynucastro_rates": missingRates,
"skipped_photodisintegration_rates": skipped_photo_rates if args.filter_photo else 0,
"rate_labels": {
"gridfire": gf_rate_labels,
"pynucastro": py_rate_labels
}
},
"thermodynamic_conditions": step_conditions,
"data": {
"gridfire": {
"time": gf_time,
"molar_abundances": gf_results,
"reaction_rates": gf_rates_history
},
"pynucastro": {
"time": pyna_time,
"molar_abundances": pyna_results,
"reaction_rates": py_rates_history
}
}
}
json_out_file = "bbn_simulation_data.json"
with open(json_out_file, "w") as f:
json.dump(export_data, f, indent=4)
plt.style.use("default")
fig, ax = plt.subplots(figsize=(10, 7))
for gf_name, (pyna_name, color) in mapping.items():
if gf_name in gf_results:
ax.plot(gf_time, gf_results[gf_name], color=color, linestyle="-", linewidth=2.5, label=f"GF {gf_name}")
if pyna_name in pyna_results:
ax.plot(pyna_time, pyna_results[pyna_name], color=color, linestyle="--", linewidth=1.5, label=f"Pyna {pyna_name}")
ax.set_xscale("log")
ax.set_yscale("log")
ax.set_ylim(1e-12, 2)
ax.set_xlabel("Time (s)", fontsize=14)
ax.set_ylabel("Molar Abundance (Y)", fontsize=14)
line_gf = mlines.Line2D([], [], color='black', linestyle='-', linewidth=2.5, label='GridFire')
line_py = mlines.Line2D([], [], color='black', linestyle='--', linewidth=1.5, label='Pynucastro')
sp_handles = []
for gf_name, (pyna_name, color) in mapping.items():
sp_handles.append(mlines.Line2D([], [], color=color, linestyle='-', linewidth=2, label=gf_name))
ax.legend(handles=[line_gf, line_py] + sp_handles, loc='center left', bbox_to_anchor=(1.02, 0.5), fontsize=12)
out_file = "bbn_comparison.pdf"
plt.savefig(out_file)
if __name__ == "__main__":
parser = argparse.ArgumentParser(description="GridFire vs Pynucastro BBN Comparison")
parser.add_argument("--filter-photo", action="store_true",
help="Filter out photodisintegration (reverse) rates to mimic GridFire's forward-only mechanics.")
parser.add_argument("--depth", type=int, default=None,
help="Limit the assembly depth of GridFire's GraphEngine. E.g., setting '--depth 3' shrinks the network size from 5000+ reactions to ~100, which reduces Pynucastro's Numba JIT compile time from hours to seconds.")
args = parser.parse_args()
main(args)
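The rate-comparison pass above flags any reaction whose GridFire and pynucastro rates disagree by more than 1e-4 in relative terms, guarding against zero denominators. That metric, isolated as a function (`max_rel_diff` is an illustrative name, not part of the script):

```python
import numpy as np

def max_rel_diff(gf, py, floor=1e-30):
    # Max relative difference between two rate arrays, using whichever
    # array is nonzero as the denominator (both zero -> treated as equal).
    gf = np.asarray(gf, float)
    py = np.asarray(py, float)
    denom = np.where(py != 0, py, gf)
    denom = np.where(denom == 0, floor, denom)
    rel = np.abs(gf - py) / denom
    return float(np.max(rel)), int(np.argmax(rel))

diff, idx = max_rel_diff([1.0, 2.0, 0.0], [1.0, 2.2, 0.0])
```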


@@ -0,0 +1,196 @@
import numpy as np
import pandas as pd
from IPython.core.pylabtools import figsize
from gridfire.solver import PointSolver, PointSolverContext
from gridfire.policy import MainSequencePolicy
from gridfire.engine import GraphEngine, MultiscalePartitioningEngineView, AdaptiveEngineView
from gridfire.engine import NetworkBuildDepth
from fourdst.composition.utils import buildCompositionFromMassFractions
from scipy.signal import find_peaks
from gridfire.config import GridFireConfig
from fourdst.composition import Composition
from scipy.integrate import trapezoid
from fourdst.composition import CanonicalComposition
from fourdst.atomic import Species
from gridfire.type import NetIn, NetOut
import matplotlib.pyplot as plt
## Note: the default style uses TeX rendering. If you do not have TeX installed,
## simply comment out this line.
plt.style.use("../utils/pub.mplstyle")
from scipy.interpolate import interp1d, CubicSpline
from enum import Enum
import sys
import os
sys.path.append(os.path.abspath(os.path.join(os.path.dirname(__file__), "../utils")))
from logger import StepLogger
class ShowSave(Enum):
SHOW="SHOW"
SAVE="SAVE"
def __str__(self):
return self.value
def rescale_composition(comp_ref : Composition, ZZs : float, Y_primordial : float = 0.248) -> Composition:
CC : CanonicalComposition = comp_ref.getCanonicalComposition()
dY_dZ = (CC.Y - Y_primordial) / CC.Z
Z_new = CC.Z * (10**ZZs)
Y_bulk_new = Y_primordial + (dY_dZ * Z_new)
X_new = 1.0 - Z_new - Y_bulk_new
if X_new < 0: raise ValueError(f"ZZs={ZZs} yields unphysical composition (X < 0)")
ratio_H = X_new / CC.X if CC.X > 0 else 0
ratio_He = Y_bulk_new / CC.Y if CC.Y > 0 else 0
ratio_Z = Z_new / CC.Z if CC.Z > 0 else 0
Y_new_list = []
newComp : Composition = Composition()
s: Species
for s in comp_ref.getRegisteredSpecies():
Xi_ref = comp_ref.getMassFraction(s)
if s.el() == "H":
Xi_new = Xi_ref * ratio_H
elif s.el() == "He":
Xi_new = Xi_ref * ratio_He
else:
Xi_new = Xi_ref * ratio_Z
Y = Xi_new / s.mass()
newComp.registerSpecies(s)
newComp.setMolarAbundance(s, Y)
return newComp
def init_composition(ZZs : float = 0) -> Composition:
X_GS98 = [0.73395, 0.00005, 0.2490, 0.00281, 0.00101, 0.00883, 0.00149, 0.00064, 0.00066, 0.00035, 0.00008, 0.00006, 0.00107]
S_GS98 = ["H-1", "He-3", "He-4", "C-12", "N-14", "O-16", "Ne-20", "Mg-24", "Si-28", "S-32", "Ar-36", "Ca-40", "Fe-56"]
return buildCompositionFromMassFractions(S_GS98, X_GS98)
def init_netIn(temp: float, rho: float, time: float, comp: Composition) -> NetIn:
n : NetIn = NetIn()
n.temperature = temp
n.density = rho
n.tMax = time
n.dt0 = 1e-12
n.composition = comp
return n
def years_to_seconds(years: float) -> float:
return years * 3.1536e7
def quantify_engine_error(df_base, df_approx, r_base: NetOut, r_approx: NetOut, species_list, floor_val=1e-30):
temporal_results = {}
final_state_results = {}
t_base = df_base['t'].values
tracking_cols = ['eps'] + species_list
for col in tracking_cols:
if col not in df_base.columns or col not in df_approx.columns:
continue
y_base = df_base[col].values
interpolator = interp1d(
df_approx['t'],
df_approx[col],
kind='linear',
bounds_error=False,
fill_value=(df_approx[col].iloc[0], df_approx[col].iloc[-1])
)
y_approx_interp = interpolator(t_base)
abs_diff = np.abs(y_approx_interp - y_base)
rel_diff = abs_diff / np.maximum(np.abs(y_base), floor_val)
l2_diff = np.sqrt(trapezoid(abs_diff**2, x=t_base))
l2_base = np.sqrt(trapezoid(y_base**2, x=t_base))
temporal_results[col] = {
'Max Rel Error (Temporal)': np.max(rel_diff),
'L2 Rel Error (Temporal)': l2_diff / max(l2_base, floor_val)
}
def calc_rel_err(val_approx, val_base):
return abs(val_approx - val_base) / max(abs(val_base), floor_val)
final_state_results['Energy'] = {
'Final Rel Error': calc_rel_err(r_approx.energy, r_base.energy)
}
final_state_results['Neutrino Loss'] = {
'Final Rel Error': calc_rel_err(r_approx.specific_neutrino_energy_loss, r_base.specific_neutrino_energy_loss)
}
for sp in species_list:
try:
val_base = r_base.composition[sp]
val_approx = r_approx.composition[sp]
final_state_results[f"Final {sp}"] = {
'Final Rel Error': calc_rel_err(val_approx, val_base)
}
except (KeyError, TypeError, AttributeError):
pass
return pd.DataFrame(temporal_results).T, pd.DataFrame(final_state_results).T
def main(save_show):
C = init_composition()
netIn = init_netIn(10**7.1760912591, 10**2.2041199827, 1e18, C)
stepLogger = StepLogger()
engine_graph = GraphEngine(C, 4)
blob = engine_graph.constructStateBlob()
print(f"Gridfire Using: {len(engine_graph.getNetworkReactions(blob))} Reactions and {len(engine_graph.getNetworkSpecies(blob))} Species")
print(engine_graph.getNetworkReactions(blob))
print(engine_graph.getNetworkSpecies(blob))
solver_ctx_graph = PointSolverContext(blob)
solver_ctx_graph.stdout_logging = False
solver_ctx_graph.callback = lambda ctx: stepLogger.log_step(ctx)
solver_single = PointSolver(engine_graph)
r_graph = solver_single.evaluate(solver_ctx_graph, netIn, False, False)
df_graph : pd.DataFrame = stepLogger.df
df_graph.to_csv("bbq_graph.csv", index=False)
stepLogger.reset()
QSE_engine = MultiscalePartitioningEngineView(engine_graph)
solver_ctx_graph_qse = PointSolverContext(QSE_engine.constructStateBlob(engine_graph.constructStateBlob()))
solver_ctx_graph_qse.stdout_logging = False
solver_ctx_graph_qse.callback = lambda ctx: stepLogger.log_step(ctx)
solver_QSE = PointSolver(QSE_engine)
r_qse = solver_QSE.evaluate(solver_ctx_graph_qse, netIn, False, False)
df_qse : pd.DataFrame = stepLogger.df
df_qse.to_csv("bbq_qse.csv", index=False)
stepLogger.reset()
if __name__ == "__main__":
import argparse
app = argparse.ArgumentParser(prog="Derivative Smoothness", description="Generate or view plots of derivative smoothness")
app.add_argument("-s", type=ShowSave, default=ShowSave.SHOW, choices=list(ShowSave), help="Whether to show or save the generated plot")
args = app.parse_args()
main(args.s)


@@ -0,0 +1,20 @@
42a43
> real(dp) :: nuc_eval_time
358a360,361
> integer*8 :: count_start, count_end, count_rate
>
361c364
< eps_neu
---
> eps_neu, eval_time
437a441
> call system_clock(count_rate=count_rate)
438a443
> call system_clock(count_start)
451a457,460
> call system_clock(count_end)
> eval_time = real(count_end - count_start, dp) / real(count_rate, dp)
>
>
455a465
> out% nuc_eval_time = eval_time


@@ -0,0 +1,33 @@
10a11,14
> !--- EMB (April 11, 2026. GridFire Comparison Timing) ---
> real(dp) :: total_eval_time
> !---
>
22a27,29
> real :: t_start, t_end
>
> total_eval_time = 0.0d0
40a48
> write(*,*) "Calling do_hydrostatic_burn ", j, "th time"
41a50
>
44a54,57
> write(*,*) "============================"
> write(*,*) "Network Evaluation Wall Time: ", total_eval_time
> write(*,*) "============================"
>
130a144,146
> integer*8 :: count_start, count_end, count_rate
> real(dp) :: eval_time
>
131a148,151
>
> call system_clock(count_rate=count_rate)
>
> call system_clock(count_start)
132a153,154
> call system_clock(count_end)
>
133a156,157
>
> total_eval_time = total_eval_time + out% nuc_eval_time


@@ -0,0 +1,356 @@
5,8c5,9
< ! This program is free software: you can redistribute it and/or modify
< ! it under the terms of the GNU Lesser General Public License
< ! as published by the Free Software Foundation,
< ! either version 3 of the License, or (at your option) any later version.
---
> ! MESA is free software; you can use it and/or modify
> ! it under the combined terms and restrictions of the MESA MANIFESTO
> ! and the GNU General Library Public License as published
> ! by the Free Software Foundation; either version 2 of the License,
> ! or (at your option) any later version.
10c11,15
< ! This program is distributed in the hope that it will be useful,
---
> ! You should have received a copy of the MESA MANIFESTO along with
> ! this software; if not, it is available at the mesa website:
> ! http://mesa.sourceforge.net/
> !
> ! MESA is distributed in the hope that it will be useful,
13c18
< ! See the GNU Lesser General Public License for more details.
---
> ! See the GNU Library General Public License for more details.
15,16c20,22
< ! You should have received a copy of the GNU Lesser General Public License
< ! along with this program. If not, see <https://www.gnu.org/licenses/>.
---
> ! You should have received a copy of the GNU Library General Public License
> ! along with this software; if not, write to the Free Software
> ! Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
21c27
< use const_def, only: dp, Qconv
---
> use const_def
24a31
>
25a33
>
28c36
<
---
>
30c38,39
<
---
>
>
33c42
<
---
>
36c45
<
---
>
40d48
< contains
41a50,53
>
>
> contains
>
55c67
<
---
>
61c73
<
---
>
71c83
<
---
>
74c86
<
---
>
86c98
< use num_lib
---
> use num_lib
94c106
<
---
>
98,101c110,113
< real(dp), intent(in) :: t_start, t_end, starting_x(:) ! (species)
< integer, intent(in) :: ntimes ! ending time is times(num_times); starting time is 0
< real(dp), pointer, intent(in) :: times(:) ! (num_times)
< real(dp), pointer, intent(in) :: log10Ts_f1(:)
---
> real(dp), intent(in) :: t_start, t_end, starting_x(:) ! (species)
> integer, intent(in) :: ntimes ! ending time is times(num_times); starting time is 0
> real(dp), pointer, intent(in) :: times(:) ! (num_times)
> real(dp), pointer, intent(in) :: log10Ts_f1(:)
109c121
< real(dp), intent(in), pointer :: rate_factors(:) ! (num_reactions)
---
> real(dp), intent(in), pointer :: rate_factors(:) ! (num_reactions)
111,113c123,125
< real(dp), pointer, intent(in) :: reaction_Qs(:) ! (rates_reaction_id_max)
< real(dp), pointer, intent(in) :: reaction_neuQs(:) ! (rates_reaction_id_max)
< integer, intent(in) :: screening_mode ! see screen_def
---
> real(dp), pointer, intent(in) :: reaction_Qs(:) ! (rates_reaction_id_max)
> real(dp), pointer, intent(in) :: reaction_neuQs(:) ! (rates_reaction_id_max)
> integer, intent(in) :: screening_mode ! see screen_def
115,116c127,128
< integer, intent(in) :: max_steps ! maximal number of allowed steps.
< real(dp), intent(in) :: eps, odescal ! tolerances. e.g., set both to 1d-6
---
> integer, intent(in) :: max_steps ! maximal number of allowed steps.
> real(dp), intent(in) :: eps, odescal ! tolerances. e.g., set both to 1d-6
131c143
<
---
>
136c148
<
---
>
141c153
<
---
>
150c162
<
---
>
152c164
<
---
>
154c166
<
---
>
156c168
<
---
>
158c170
<
---
>
161c173
<
---
>
171c183
<
---
>
173c185
<
---
>
181c193
<
---
>
193c205
<
---
>
198c210
<
---
>
204c216
<
---
>
214c226
<
---
>
218c230
<
---
>
220c232
<
---
>
227c239
<
---
>
229c241
< call setup_net_info(n)
---
> call setup_net_info(n)
231c243
<
---
>
267c279
< cid = g% chem_id(i)
---
> cid = g% chem_id(i)
274c286
<
---
>
276c288
<
---
>
284c296
< real(dp) :: dxdt_sum, dxdt_sum_approx21, &
---
> real(dp) :: dxdt_sum, dxdt_sum_aprox21, &
291c303
<
---
>
293c305
<
---
>
298,299c310,311
< if (ierr /= 0) return
<
---
> if (ierr /= 0) return
>
309c321
<
---
>
321c333
<
---
>
332c344
<
---
>
336c348
<
---
>
342c354
< real(dp) :: d_eps_nuc_dx(species)
---
> real(dp) :: d_eps_nuc_dx(species)
347c359
<
---
>
354c366
<
---
>
360a373,375
> character(len=255) :: env_val
> integer :: env_val_status, env_val_length
>
362c377
<
---
>
367c382
<
---
>
371c386
<
---
>
373c388
<
---
>
376c391
<
---
>
378c393
<
---
>
394,395c409,410
<
< xsum = 0
---
>
> xsum = 0
403c418
< end if
---
> end if
412c427
<
---
>
415a431
>
416a433,446
> call get_environment_variable("BBQ_DISABLE_EOS", value=env_val, length=env_val_length, status=env_val_status)
> if (trim(env_val) == "False") then
> call eosDT_get( &
> eos_handle, species, g% chem_id, g% net_iso, x, &
> Rho, lgRho, T, lgT, &
> res, d_dlnd, d_dlnT, d_dxa, ierr)
> if (ierr /= 0) then
> if (report_ierr) write(*,*) 'failed in eosDT_get'
> return
> end if
> eta = res(i_eta)
> d_eta_dlnT = d_dlnT(i_eta)
> d_eta_dlnRho = d_dlnd(i_eta)
> endif
418,430c448
< call eosDT_get( &
< eos_handle, species, g% chem_id, g% net_iso, x, &
< Rho, lgRho, T, lgT, &
< res, d_dlnd, d_dlnT, d_dxa, ierr)
< if (ierr /= 0) then
< if (report_ierr) write(*,*) 'failed in eosDT_get'
< return
< end if
< eta = res(i_eta)
< d_eta_dlnT = d_dlnT(i_eta)
< d_eta_dlnRho = d_dlnd(i_eta)
<
<
---
>
433c451
<
---
>
446c464
<
---
>
470c488,489
<
---
>
>
471a491,493
>
>
>
473a496,500
>
>
>
>
>
475a503
>


@@ -0,0 +1,11 @@
Diff files for the tests run in GridFire paper 1.
Apply these diffs to the BBQ and MESA sources, then compile.
Note that to disable the EOS evaluation in MESA, set the BBQ_DISABLE_EOS environment variable to 1,
i.e.
export BBQ_DISABLE_EOS=1
prior to running bbq.
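The apply-then-compile workflow can be sketched with `patch`, which understands the normal-format diffs stored in this directory. The file names below are placeholder examples only; substitute the actual BBQ/MESA source and diff paths:

```shell
# Demonstrate the patch workflow on a throwaway file; the real diff and
# source paths are the ones shipped in this directory (names here are
# illustrative only).
printf 'line one\nline two\n' > demo_original.f90
printf '2c2\n< line two\n---\n> line two (patched)\n' > demo.diff
patch demo_original.f90 < demo.diff
cat demo_original.f90

# Before running bbq, disable the EOS evaluation:
export BBQ_DISABLE_EOS=1
```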


@@ -0,0 +1,56 @@
&bbq
! Physics options
net_name = 'mesa_125.net'
! Solver tolerances
max_steps = 1000000
eps = 1d-8
odescal = 1d-10
stptry = 0
! What mode to run
use_hydrostatic=.true.
write_iso_list = .true.
iso_list_filename = 'iso.list'
/
&sampling ! For both use_input_file and use_random_sampling
/
&profile ! For use_profile
/
&hydrostatic ! for use with use_hydrostatic
min_time = -8
max_time = 17
log_time =.true.
num_times = 300
logT = 7.1760912591
logRho = 2.2041199827
input_composition_filename = 'comp.txt'
output_filename = 'output.txt'
/
&eos
/
&nuclear
/
&controls
screening_mode = ''
/


@@ -0,0 +1,56 @@
&bbq
! Physics options
net_name = 'mesa_235.net'
! Solver tolerances
max_steps = 1000000
eps = 1d-8
odescal = 1d-10
stptry = 0
! What mode to run
use_hydrostatic=.true.
write_iso_list = .true.
iso_list_filename = 'iso.list'
/
&sampling ! For both use_input_file and use_random_sampling
/
&profile ! For use_profile
/
&hydrostatic ! for use with use_hydrostatic
min_time = -8
max_time = 17
log_time =.true.
num_times = 300
logT = 7.1760912591
logRho = 2.2041199827
input_composition_filename = 'comp.txt'
output_filename = 'output.txt'
/
&eos
/
&nuclear
/
&controls
screening_mode = ''
/


@@ -0,0 +1,56 @@
&bbq
! Physics options
net_name = 'mesa_45.net'
! Solver tolerances
max_steps = 1000000
eps = 1d-8
odescal = 1d-10
stptry = 0
! What mode to run
use_hydrostatic=.true.
write_iso_list = .true.
iso_list_filename = 'iso.list'
/
&sampling ! For both use_input_file and use_random_sampling
/
&profile ! For use_profile
/
&hydrostatic ! for use with use_hydrostatic
min_time = -8
max_time = 17
log_time =.true.
num_times = 300
logT = 7.1760912591
logRho = 2.2041199827
input_composition_filename = 'comp.txt'
output_filename = 'output.txt'
/
&eos
/
&nuclear
/
&controls
screening_mode = ''
/


@@ -0,0 +1,56 @@
&bbq
! Physics options
net_name = 'mesa_495.net'
! Solver tolerances
max_steps = 1000000
eps = 1d-8
odescal = 1d-10
stptry = 0
! What mode to run
use_hydrostatic=.true.
write_iso_list = .true.
iso_list_filename = 'iso.list'
/
&sampling ! For both use_input_file and use_random_sampling
/
&profile ! For use_profile
/
&hydrostatic ! for use with use_hydrostatic
min_time = -8
max_time = 17
log_time =.true.
num_times = 300
logT = 7.1760912591
logRho = 2.2041199827
input_composition_filename = 'comp.txt'
output_filename = 'output.txt'
/
&eos
/
&nuclear
/
&controls
screening_mode = ''
/


@@ -0,0 +1,56 @@
&bbq
! Physics options
net_name = 'basic.net'
! Solver tolerances
max_steps = 1000000
eps = 1d-8
odescal = 1d-10
stptry = 0
! What mode to run
use_hydrostatic=.true.
write_iso_list = .true.
iso_list_filename = 'iso.list'
/
&sampling ! For both use_input_file and use_random_sampling
/
&profile ! For use_profile
/
&hydrostatic ! for use with use_hydrostatic
min_time = -8
max_time = 17
log_time =.true.
num_times = 300
logT = 7.1760912591
logRho = 2.2041199827
input_composition_filename = 'comp.txt'
output_filename = 'output.txt'
/
&eos
/
&nuclear
/
&controls
screening_mode = ''
/


@@ -0,0 +1,56 @@
&bbq
! Physics options
net_name = 'pp_extras.net'
! Solver tolerances
max_steps = 1000000
eps = 1d-8
odescal = 1d-10
stptry = 0
! What mode to run
use_hydrostatic=.true.
write_iso_list = .true.
iso_list_filename = 'iso.list'
/
&sampling ! For both use_input_file and use_random_sampling
/
&profile ! For use_profile
/
&hydrostatic ! for use with use_hydrostatic
min_time = -8
max_time = 17
log_time =.true.
num_times = 300
logT = 7.1760912591
logRho = 2.2041199827
input_composition_filename = 'comp.txt'
output_filename = 'output.txt'
/
&eos
/
&nuclear
/
&controls
screening_mode = ''
/


@@ -0,0 +1,342 @@
import numpy as np
import pandas as pd
from IPython.core.pylabtools import figsize
from gridfire.solver import PointSolver, PointSolverContext
from gridfire.policy import MainSequencePolicy
from gridfire.engine import GraphEngine, MultiscalePartitioningEngineView, AdaptiveEngineView
from gridfire.engine import NetworkBuildDepth
from scipy.signal import find_peaks
from gridfire.config import GridFireConfig
from fourdst.composition import Composition
from scipy.integrate import trapezoid
from fourdst.composition import CanonicalComposition
from fourdst.atomic import Species
from gridfire.type import NetIn, NetOut
import matplotlib.pyplot as plt
## Note that my default style uses tex rendering. If you do not have tex installed
## simply comment out this line
plt.style.use("../utils/pub.mplstyle")
from scipy.interpolate import interp1d, CubicSpline
from enum import Enum
import sys
import os
sys.path.append(os.path.abspath(os.path.join(os.path.dirname(__file__), "../utils")))
from logger import StepLogger
class ShowSave(Enum):
SHOW="SHOW"
SAVE="SAVE"
def __str__(self):
return self.value
def rescale_composition(comp_ref : Composition, ZZs : float, Y_primordial : float = 0.248) -> Composition:
CC : CanonicalComposition = comp_ref.getCanonicalComposition()
dY_dZ = (CC.Y - Y_primordial) / CC.Z
Z_new = CC.Z * (10**ZZs)
Y_bulk_new = Y_primordial + (dY_dZ * Z_new)
X_new = 1.0 - Z_new - Y_bulk_new
if X_new < 0: raise ValueError(f"ZZs={ZZs} yields unphysical composition (X < 0)")
ratio_H = X_new / CC.X if CC.X > 0 else 0
ratio_He = Y_bulk_new / CC.Y if CC.Y > 0 else 0
ratio_Z = Z_new / CC.Z if CC.Z > 0 else 0
newComp : Composition = Composition()
s: Species
for s in comp_ref.getRegisteredSpecies():
Xi_ref = comp_ref.getMassFraction(s)
if s.el() == "H":
Xi_new = Xi_ref * ratio_H
elif s.el() == "He":
Xi_new = Xi_ref * ratio_He
else:
Xi_new = Xi_ref * ratio_Z
Y = Xi_new / s.mass()
newComp.registerSpecies(s)
newComp.setMolarAbundance(s, Y)
return newComp
def init_composition(ZZs : float = 0) -> Composition:
Y_solar = [7.0262E-01, 1.7479E-06, 6.8955E-02, 2.5000E-04, 7.8554E-05, 6.0144E-04, 8.1031E-05, 2.1513E-05]
S = ["H-1", "He-3", "He-4", "C-12", "N-14", "O-16", "Ne-20", "Mg-24"]
return rescale_composition(Composition(S, Y_solar), ZZs)
def init_netIn(temp: float, rho: float, time: float, comp: Composition) -> NetIn:
n : NetIn = NetIn()
n.temperature = temp
n.density = rho
n.tMax = time
n.dt0 = 1e-12
n.composition = comp
return n
def years_to_seconds(years: float) -> float:
return years * 3.1536e7
def quantify_engine_error(df_base, df_approx, r_base: NetOut, r_approx: NetOut, species_list, floor_val=1e-30):
temporal_results = {}
final_state_results = {}
t_base = df_base['t'].values
tracking_cols = ['eps'] + species_list
for col in tracking_cols:
if col not in df_base.columns or col not in df_approx.columns:
continue
y_base = df_base[col].values
interpolator = interp1d(
df_approx['t'],
df_approx[col],
kind='linear',
bounds_error=False,
fill_value=(df_approx[col].iloc[0], df_approx[col].iloc[-1])
)
y_approx_interp = interpolator(t_base)
abs_diff = np.abs(y_approx_interp - y_base)
rel_diff = abs_diff / np.maximum(np.abs(y_base), floor_val)
l2_diff = np.sqrt(trapezoid(abs_diff**2, x=t_base))
l2_base = np.sqrt(trapezoid(y_base**2, x=t_base))
temporal_results[col] = {
'Max Rel Error (Temporal)': np.max(rel_diff),
'L2 Rel Error (Temporal)': l2_diff / max(l2_base, floor_val)
}
def calc_rel_err(val_approx, val_base):
return abs(val_approx - val_base) / max(abs(val_base), floor_val)
final_state_results['Energy'] = {
'Final Rel Error': calc_rel_err(r_approx.energy, r_base.energy)
}
final_state_results['Neutrino Loss'] = {
'Final Rel Error': calc_rel_err(r_approx.specific_neutrino_energy_loss, r_base.specific_neutrino_energy_loss)
}
for sp in species_list:
try:
val_base = r_base.composition[sp]
val_approx = r_approx.composition[sp]
final_state_results[f"Final {sp}"] = {
'Final Rel Error': calc_rel_err(val_approx, val_base)
}
except (KeyError, TypeError, AttributeError):
pass
return pd.DataFrame(temporal_results).T, pd.DataFrame(final_state_results).T
def main(save_show):
C = init_composition()
netIn = init_netIn(1.5e7, 1.6e2, years_to_seconds(10e9), C)
stepLogger = StepLogger()
engine_graph = GraphEngine(C, 4)
solver_ctx_graph = PointSolverContext(engine_graph.constructStateBlob())
solver_ctx_graph.stdout_logging = True
solver_ctx_graph.callback = lambda ctx: stepLogger.log_step(ctx)
solver_single = PointSolver(engine_graph)
r_graph = solver_single.evaluate(solver_ctx_graph, netIn, False, False)
df_graph : pd.DataFrame = stepLogger.df
stepLogger.reset()
QSE_engine = MultiscalePartitioningEngineView(engine_graph)
solver_ctx_graph_qse = PointSolverContext(QSE_engine.constructStateBlob(engine_graph.constructStateBlob()))
solver_ctx_graph_qse.stdout_logging = True
solver_ctx_graph_qse.callback = lambda ctx: stepLogger.log_step(ctx)
solver_QSE = PointSolver(QSE_engine)
r_qse = solver_QSE.evaluate(solver_ctx_graph_qse, netIn, False, False)
df_qse = stepLogger.df
stepLogger.reset()
# policy = MainSequencePolicy(C)
# construct = policy.construct()
# solver_AE_QSE = PointSolver(construct.engine)
# solver_ctx_graph_qse_ae = PointSolverContext(construct.scratch_blob)
# solver_ctx_graph_qse_ae.callback = lambda ctx: stepLogger.log_step(ctx)
# solver_ctx_graph_qse_ae.stdout_logging = False
#
# r_ae_qse = solver_AE_QSE.evaluate(solver_ctx_graph_qse_ae, netIn, False, False)
#
# df_ae_qse = stepLogger.df
# stepLogger.reset()
# fig, axs = plt.subplots(2, 1, figsize=(10, 7))
S = ["H-1", "He-4", "C-12", "N-14", "O-16", "Mg-24"]
t = np.logspace(7, 17.5, 5000)
# for spID, sp in enumerate(S):
# gf = interp1d(df_graph.t, df_graph[sp])
# qf = interp1d(df_qse.t, df_qse[sp])
#
# ax = axs[0]
# ax.loglog(t, gf(t), 'o-', color=f"C{spID}")
# ax.loglog(t, qf(t), 'o', color=f"C{spID}", linestyle='dashed')
#
# ax.text(1, df_graph[sp].iloc[0]*1.1, sp, fontsize=12, color=f"C{spID}")
#
# ax = axs[1]
# ax.semilogx(t, (qf(t)-gf(t))/gf(t), color=f"C{spID}")
#
# axs[1].set_xlabel("Time [s]", fontsize=15)
# axs[0].set_ylabel("Molar Abundance [mol/g]", fontsize=15)
# axs[1].set_ylabel("Relative Error", fontsize=15)
#
# fig, ax = plt.subplots(1, 1, figsize=(10, 7))
# ge = interp1d(df_graph.t, df_graph.eps)
# qe = interp1d(df_qse.t, df_qse.eps)
# ax.loglog(t, np.abs((qe(t) - ge(t)) / ge(t)))
temporal_err_qse, final_err_qse = quantify_engine_error(
df_base=df_graph,
df_approx=df_qse,
r_base=r_graph,
r_approx=r_qse,
species_list=S
)
qse_rel_eps_error = (df_graph.eps.iloc[-1] - df_qse.eps.iloc[-1])/df_qse.eps.iloc[-1]
fig, ax = plt.subplots(1, 1, figsize=(10, 7))
# ax.semilogx(df_graph.t, df_graph["H-2"], 'o-', color='red')
# ax.semilogx(df_qse.t, df_qse["H-2"], 'o', color='blue', linestyle='dashed')
graph_h1 = interp1d(df_graph.t, df_graph["H-1"])
qse_h1 = interp1d(df_qse.t, df_qse["H-1"])
graph_h2 = interp1d(df_graph.t, df_graph["H-2"])
qse_h2 = interp1d(df_qse.t, df_qse["H-2"])
graph_DH = graph_h2(t)/graph_h1(t)
qse_DH = qse_h2(t)/qse_h1(t)
dex_diff = np.abs(np.log10(graph_h2(t)) - np.log10(qse_h2(t)))
dex_dh_diff = np.abs(np.log10(graph_DH) - np.log10(qse_DH))
# ax.semilogx(t, dex_diff, color='green')
ax.loglog(t, dex_dh_diff, color='black')
# ax.semilogx(t, qse_h2(t)/qse_h1(t), color='green')
ax.set_xlabel("Time [s]", fontsize=17)
ax.set_ylabel(r"$\left|\Delta\log_{10}\right|$ [dex]", fontsize=17)
if save_show == ShowSave.SAVE:
plt.savefig("DHErr.pdf")
plt.close()
else:
plt.show()
sums_qse = {}
sums_graph = {}
symbols = {}
for sp, y in r_qse.composition:
z = sp.z()
symbols[z] = sp.el()
y_graph = r_graph.composition.getMolarAbundance(sp)
sums_qse[int(z)] = sums_qse.get(z, 0.0) + y
sums_graph[int(z)] = sums_graph.get(z, 0.0) + y_graph
print(sums_qse[3])
print(sums_graph[3])
z_list = sorted(sums_qse.keys())
dex_list = []
symbols = [val for key, val in symbols.items()]
for z in z_list:
total_qse = sums_qse[z]
total_graph = sums_graph[z]
if total_graph > 1e-13 and total_qse > 1e-13:
offset = np.log10(total_qse / total_graph)
else:
if z >= 14:
offset = np.nan
else:
offset = 0.0
dex_list.append(offset)
fig, ax = plt.subplots(1, 1, figsize=(10, 7))
data = sorted(zip(z_list, symbols, dex_list), key=lambda x: x[0])
sorted_z, sorted_symbols, sorted_dex = zip(*data)
print(sorted_symbols)
print(sorted_dex)
# 2. Create the plot
fig, ax = plt.subplots(1, 1, figsize=(12, 6))
print(sorted_symbols)
bars = ax.bar(sorted_symbols, sorted_dex, color='grey', edgecolor='grey', alpha=0.8)
ax.axhline(0, color='black', linewidth=0.8)
ax.set_xlabel('Element', fontsize=25)
ax.set_ylabel('Offset [dex]', fontsize=25)
if save_show == ShowSave.SAVE:
plt.savefig("DexElementalOffset.pdf")
plt.close()
e_graph = interp1d(df_graph.t, df_graph.eps)
e_qse = interp1d(df_qse.t, df_qse.eps)
dex_eps_diff = np.log10(e_graph(t)) - np.log10(e_qse(t))
fig, ax = plt.subplots(1, 1, figsize=(10, 7))
ax.semilogx(t, dex_eps_diff, color='black')
ax.set_xlabel("Time [s]", fontsize=25)
ax.set_ylabel("Offset [dex]", fontsize=25)
if save_show == ShowSave.SAVE:
plt.savefig("DexEpsOffset.pdf")
plt.close()
if save_show == ShowSave.SHOW:
plt.show()
print("=== QSE ===")
print(temporal_err_qse)
print(final_err_qse)
print(f"Relative ε error: {qse_rel_eps_error}")
print(f"Neutrino Loss Difference [dex]: {np.log10(r_graph.specific_neutrino_energy_loss) - np.log10(r_qse.specific_neutrino_energy_loss)}")
if __name__ == "__main__":
import argparse
app = argparse.ArgumentParser(prog="Derivative Smoothness", description="Generate or view plots of derivative smoothness")
app.add_argument("-s", type=ShowSave, default=ShowSave.SHOW, choices=list(ShowSave), help="Whether to show or save the generated plot")
args = app.parse_args()
main(args.s)


@@ -0,0 +1,187 @@
import numpy as np
from IPython.core.pylabtools import figsize
from gridfire.solver import PointSolver, PointSolverContext
from gridfire.policy import MainSequencePolicy
from gridfire.engine import GraphEngine, MultiscalePartitioningEngineView
from scipy.signal import find_peaks
from gridfire.config import GridFireConfig
from fourdst.composition import Composition
from scipy.integrate import trapezoid
from fourdst.composition import CanonicalComposition
from fourdst.atomic import Species
from gridfire.type import NetIn
import matplotlib.pyplot as plt
## Note that my default style uses tex rendering. If you do not have tex installed
## simply comment out this line
plt.style.use("../utils/pub.mplstyle")
from scipy.interpolate import interp1d, CubicSpline
from enum import Enum
import sys
import os
sys.path.append(os.path.abspath(os.path.join(os.path.dirname(__file__), "../utils")))
from logger import StepLogger
class ShowSave(Enum):
SHOW="SHOW"
SAVE="SAVE"
def __str__(self):
return self.value
def rescale_composition(comp_ref : Composition, ZZs : float, Y_primordial : float = 0.248) -> Composition:
CC : CanonicalComposition = comp_ref.getCanonicalComposition()
dY_dZ = (CC.Y - Y_primordial) / CC.Z
Z_new = CC.Z * (10**ZZs)
Y_bulk_new = Y_primordial + (dY_dZ * Z_new)
X_new = 1.0 - Z_new - Y_bulk_new
if X_new < 0: raise ValueError(f"ZZs={ZZs} yields unphysical composition (X < 0)")
ratio_H = X_new / CC.X if CC.X > 0 else 0
ratio_He = Y_bulk_new / CC.Y if CC.Y > 0 else 0
ratio_Z = Z_new / CC.Z if CC.Z > 0 else 0
newComp : Composition = Composition()
s: Species
for s in comp_ref.getRegisteredSpecies():
Xi_ref = comp_ref.getMassFraction(s)
if s.el() == "H":
Xi_new = Xi_ref * ratio_H
elif s.el() == "He":
Xi_new = Xi_ref * ratio_He
else:
Xi_new = Xi_ref * ratio_Z
Y = Xi_new / s.mass()
newComp.registerSpecies(s)
newComp.setMolarAbundance(s, Y)
return newComp
def init_composition(ZZs : float = 0) -> Composition:
Y_solar = [7.0262E-01, 9.7479E-06, 6.8955E-02, 2.5000E-04, 7.8554E-05, 6.0144E-04, 8.1031E-05, 2.1513E-05]
S = ["H-1", "He-3", "He-4", "C-12", "N-14", "O-16", "Ne-20", "Mg-24"]
return rescale_composition(Composition(S, Y_solar), ZZs)
def init_netIn(temp: float, rho: float, time: float, comp: Composition) -> NetIn:
n : NetIn = NetIn()
n.temperature = temp
n.density = rho
n.tMax = time
n.dt0 = 1e-12
n.composition = comp
return n
def years_to_seconds(years: float) -> float:
return years * 3.1536e7
def main(save_show):
C = init_composition()
netIn = init_netIn(1.5e7, 160, years_to_seconds(10e9), C)
engine_graph = GraphEngine(C, 5)
graph_blob = engine_graph.constructStateBlob()
qse_engine = MultiscalePartitioningEngineView(engine_graph)
qse_blob = qse_engine.constructStateBlob(graph_blob)
# 3e-8 and 1e-24 are the default tolerances we adopt, as testing indicates they work well for
# main-sequence evolution. We encourage researchers to trial various relative and
# absolute thresholds
# config = GridFireConfig()
# config.solver.pointSolver.trigger.boundaryFlux.relativeThreshold = 3e-8
# config.solver.pointSolver.trigger.boundaryFlux.absoluteThreshold = 1e-24
# solver = PointSolver(construct.engine, config)
solver = PointSolver(qse_engine)
solver_ctx = PointSolverContext(qse_blob)
stepLogger = StepLogger()
solver_ctx.callback = lambda ctx: stepLogger.log_step(ctx)
solver.evaluate(solver_ctx, netIn, False, False)
df = stepLogger.df
fig, axs = plt.subplots(2, 1, figsize=(17, 10), gridspec_kw={'hspace': 0, 'height_ratios': [1, 1]}, sharex=True)
t = np.linspace(df.t.min(), df.t.max(), 1000)
# Note that we are not plotting Ne-20: its molar abundance is so close to that of N-14
# that the two species would be hard to distinguish
PlottingSpecies = ["H-1", "He-3", "He-4", "C-12", "N-14", "O-16", "Mg-24"]
stable_index = 25
for sp in PlottingSpecies:
x = df.t[stable_index:]
y = df[sp][stable_index:]
axs[0].loglog(x, y)
axs[1].semilogx(x, np.gradient(y, x))
axs[0].text(x.iloc[0], y.iloc[0]*1.1, sp, fontsize=12)
axs[0].set_ylabel("$Y$ [mol/g]", fontsize=23)
axs[1].set_ylabel(r"$\frac{dY}{dt}$ [mol/g/s]", fontsize=23)
axs[1].set_xlabel("Time [s]")
ax_eps = axs[0].twinx()
ax_deps = axs[1].twinx()
ax_eps.set_ylabel(r"$\epsilon$ [erg/g/s]", rotation=270, labelpad=25, fontsize=23)
ax_deps.set_ylabel(r"$\frac{d\epsilon}{dt}$ [erg/g/s$^2$]", rotation=270, labelpad=25, fontsize=23)
ax_eps.axvline(2.276e17, color='grey', linestyle='dashed')
ax_deps.axvline(2.276e17, color='grey', linestyle='dashed')
ax_eps.loglog(df.t[stable_index:], df.eps[stable_index:], color='red', linestyle='dashed')
ax_eps.text(df.t[stable_index:].iloc[0], df.eps[stable_index:].iloc[0], r"$\epsilon$", fontsize=20)
ax_deps.semilogx(df.t[stable_index:], np.gradient(df.eps[stable_index:], df.t[stable_index:]), color='red', linestyle='dashed')
if save_show == ShowSave.SHOW:
plt.show()
else:
plt.savefig("smoothness_plot.pdf")
plt.close()
t = df.t.values
eps = df.eps.values
t1 = np.delete(t, [237])
eps1 = np.delete(eps, [237])
f_discon = interp1d(t, eps, bounds_error=False, fill_value='extrapolate')
f_smooth = interp1d(t1, eps1, bounds_error=False, fill_value='extrapolate')
ti = np.logspace(np.log10(t.min()), np.log10(t.max()), 1000)
cum_discon = trapezoid(f_discon(ti), ti)
cum_smooth = trapezoid(f_smooth(ti), ti)
rel_err = (cum_discon - cum_smooth) / cum_smooth
print(f"Relative Cumulative Energy Error: {rel_err:0.4E} ({cum_discon:0.4E} [erg/g] vs {cum_smooth:0.4E} [erg/g])")
if __name__ == "__main__":
import argparse
app = argparse.ArgumentParser(prog="Derivative Smoothness", description="Generate or view plots of derivative smoothness")
app.add_argument("-s", type=ShowSave, default=ShowSave.SHOW, choices=list(ShowSave), help="Whether to show or save the generated plot")
args = app.parse_args()
main(args.s)


@@ -0,0 +1,20 @@
1.000000000000000000e+02
1.623776739188720910e+02
2.636650898730358108e+02
4.281332398719395655e+02
6.951927961775605809e+02
1.128837891684688429e+03
1.832980710832435534e+03
2.976351441631319176e+03
4.832930238571751943e+03
7.847599703514606517e+03
1.274274985703132188e+04
2.069138081114790111e+04
3.359818286283781345e+04
5.455594781168514601e+04
8.858667904100831947e+04
1.438449888287663052e+05
2.335721469090121391e+05
3.792690190732246265e+05
6.158482110660254257e+05
1.000000000000000000e+06


@@ -0,0 +1,20 @@
7.516506915568591296e+01 7.495454650010640307e+01 7.474402243846603255e+01 7.453349751089608333e+01 7.432297205004131513e+01 7.411244626076269526e+01 7.390192026922430557e+01 7.369139415312440633e+01 7.348086796031343226e+01 7.327034172026007752e+01 7.305981545111255571e+01 7.284928916404744825e+01 7.263876286594782528e+01 7.242823656105262842e+01 7.221771025197236327e+01 7.200718394031474645e+01 7.179665762706986243e+01 7.158613131284748476e+01 7.137560499802309266e+01 7.116507868282796778e+01
2.411108827502363994e+01 2.390056655264294605e+01 2.369004306572022500e+01 2.347951849209244912e+01 2.326899324921361867e+01 2.305846759417556413e+01 2.284794168530916991e+01 2.263741562012273434e+01 2.242688945866676775e+01 2.221636323792333556e+01 2.200583698066780514e+01 2.179531070092636114e+01 2.158478440733700054e+01 2.137425810521942893e+01 2.116373179784977410e+01 2.095320548724562926e+01 2.074267917464953115e+01 2.053215286082669166e+01 2.032162654624836762e+01 2.011110023120477663e+01
1.588134936422530075e+01 1.567082305245526186e+01 1.546029673914114788e+01 1.524977042487612167e+01 1.503924411002547501e+01 1.482871779481417640e+01 1.461819147938077101e+01 1.440766516381058082e+01 1.419713884815615401e+01 1.398661253244984870e+01 1.377608621671159383e+01 1.356555990095366226e+01 1.335503358518361416e+01 1.314450726940610359e+01 1.293398095362399758e+01 1.272345463783906006e+01 1.251292832205238170e+01 1.230240200626462865e+01 1.209187569047621480e+01 1.188134937468739416e+01
1.240141625353471788e+01 1.219088993828025025e+01 1.198036362282025813e+01 1.176983730723369170e+01 1.155931099156918407e+01 1.134878467585666861e+01 1.131773574219467626e+01 1.131773344868713060e+01 1.131772427641642054e+01 1.131773274314408440e+01 1.131774381739029423e+01 1.131774099106536724e+01 1.131772792426291652e+01 1.131772001429877506e+01 1.131772477191268145e+01 1.131772951612228795e+01 1.131773269504887658e+01 1.131771867785366759e+01 1.131773274806228891e+01 1.131773618945088700e+01
1.221441707845048441e+01 1.221443092438587996e+01 1.221449268621868711e+01 1.221444340127185413e+01 1.221450447820550878e+01 1.221450450550565847e+01 1.221462299546512753e+01 1.221450541184946736e+01 1.221449741948761059e+01 1.221437937238299476e+01 1.221452600508989228e+01 1.221450974355266439e+01 1.221454206060290382e+01 1.221449405571979696e+01 1.221450465229765392e+01 1.221445641451824571e+01 1.221439900705651915e+01 1.221450373335646056e+01 1.221440168392089554e+01 1.221457855378897328e+01
1.285007977945194213e+01 1.285045003815560705e+01 1.284971632467318692e+01 1.284966729479870828e+01 1.284988825100139920e+01 1.284989877979216288e+01 1.284993556983961582e+01 1.284999283899156453e+01 1.285004027052133146e+01 1.285010185143070771e+01 1.284956549615072419e+01 1.285026985045980297e+01 1.285030607530261904e+01 1.284995107080426280e+01 1.285014040334820606e+01 1.284998296697936659e+01 1.285007370919554859e+01 1.284984313025256419e+01 1.285062579528359450e+01 1.284971304845847762e+01
1.332742493047810584e+01 1.332742517211232247e+01 1.332742546286364416e+01 1.332742938067791094e+01 1.332742349532839476e+01 1.332742474050093406e+01 1.332742419644821119e+01 1.332742815605684683e+01 1.332742784332937092e+01 1.332742638395014367e+01 1.332742810591535942e+01 1.332742545889716723e+01 1.332742617301116894e+01 1.332742856994030056e+01 1.332742872162519987e+01 1.332742428558021075e+01 1.332742647766768229e+01 1.332742921111014844e+01 1.332742941333290076e+01 1.332742293601148376e+01
1.347856888607090298e+01 1.347852064818079398e+01 1.347848605530894872e+01 1.347847130758353629e+01 1.347845366742022044e+01 1.347844421325814146e+01 1.347843780579964879e+01 1.347843881361396789e+01 1.347843249086560036e+01 1.347843051734689368e+01 1.347843088633933917e+01 1.347842621749463632e+01 1.347842953219780249e+01 1.347843032361080340e+01 1.347843437616470652e+01 1.347842800930953722e+01 1.347843104180701168e+01 1.347842687171626075e+01 1.347843078062061473e+01 1.347842862739984149e+01
1.278613012830484585e+01 1.278496693040220045e+01 1.278424783677001919e+01 1.278380390835387814e+01 1.278353105351237318e+01 1.278336397571877470e+01 1.278326064906533155e+01 1.278319580429599078e+01 1.278315656532500455e+01 1.278313425351781696e+01 1.278311894450724751e+01 1.278310725590280583e+01 1.278310351000193812e+01 1.278310038295341933e+01 1.278309778527155416e+01 1.278309542126202025e+01 1.278309502060797875e+01 1.278309434390479637e+01 1.278309469349433591e+01 1.278309355867858166e+01
1.203164561346398997e+01 1.202454099764239892e+01 1.202011655316218430e+01 1.201737235620566580e+01 1.201567535052035218e+01 1.201462713041880193e+01 1.201398075487848693e+01 1.201358208920642667e+01 1.201333643079369473e+01 1.201318530620892489e+01 1.201309211797299881e+01 1.201303460578686177e+01 1.201299925517803402e+01 1.201297778500763513e+01 1.201296402629015425e+01 1.201295602514885807e+01 1.201295092007369192e+01 1.201294776572451539e+01 1.201294578156225334e+01 1.201294445669623023e+01
1.147182662670111952e+01 1.144367788592245461e+01 1.142551981827195284e+01 1.141400530026558080e+01 1.140678324875124972e+01 1.140228496986332551e+01 1.139949510693226387e+01 1.139776944147619631e+01 1.139670393632033907e+01 1.139604667634546864e+01 1.139564144923076228e+01 1.139539174325708260e+01 1.139523790399952752e+01 1.139514313577533322e+01 1.139508469291570592e+01 1.139504883737335206e+01 1.139502665112581603e+01 1.139501305044587021e+01 1.139500464491563037e+01 1.139499946776214934e+01
1.112437351439633382e+01 1.104983178380758879e+01 1.099728149037341218e+01 1.096183216558747553e+01 1.093866818437858335e+01 1.092385480067567372e+01 1.091451366605461715e+01 1.090867559087929806e+01 1.090504724352872223e+01 1.090280005986856615e+01 1.090141130887915644e+01 1.090055422303433375e+01 1.090002569902901897e+01 1.089969993641441270e+01 1.089949919024657987e+01 1.089937554186619906e+01 1.089929936996127857e+01 1.089925246542425796e+01 1.089922356951065829e+01 1.089920576898874138e+01
1.097211035219207176e+01 1.084134486646048501e+01 1.073647503072117892e+01 1.065745474258099712e+01 1.060122323076019768e+01 1.056302031110026540e+01 1.053793274514885958e+01 1.052183628790108649e+01 1.051166454889254531e+01 1.050529886706545035e+01 1.050133929659283893e+01 1.049888572075516890e+01 1.049736892822251733e+01 1.049643262007837308e+01 1.049585516448912514e+01 1.049549921954909593e+01 1.049527988180654248e+01 1.049514477454431649e+01 1.049506154264506641e+01 1.049501027963316524e+01
1.094468724003418458e+01 1.077450225810061291e+01 1.062302102545175941e+01 1.049487461738804051e+01 1.039277793053667942e+01 1.031633777833742727e+01 1.026223795135402028e+01 1.022563538542393147e+01 1.020166848316140573e+01 1.018632073935404847e+01 1.017663415868663179e+01 1.017057683358957298e+01 1.016681091859345010e+01 1.016447806019511191e+01 1.016303616604842475e+01 1.016214619362716221e+01 1.016159734987274454e+01 1.016125906067995643e+01 1.016105061769337148e+01 1.016092220613375474e+01
1.097651335571714704e+01 1.078588796262477167e+01 1.060606838834930521e+01 1.044162196127209086e+01 1.029755509438241390e+01 1.017803438345032419e+01 1.008480316539240995e+01 1.001638286170760672e+01 9.968755779276024853e+00 9.936928619742690927e+00 9.916266013536175095e+00 9.903108546512575217e+00 9.894834222328734441e+00 9.889671762689555834e+00 9.886466743915320876e+00 9.884483084019732857e+00 9.883257691494989672e+00 9.882501603001433210e+00 9.882035421372110662e+00 9.881748119644676365e+00
1.103372098422792646e+01 1.083337479287353844e+01 1.063895056067691236e+01 1.045351601851197820e+01 1.028119637274276243e+01 1.012690577382375245e+01 9.995405386723385632e+00 9.889757307003176123e+00 9.810009639387388347e+00 9.753173954964635683e+00 9.714515406752507687e+00 9.689107699410834940e+00 9.672796915308628840e+00 9.662486085923703172e+00 9.656031896506101830e+00 9.652016722703004703e+00 9.649528481194510121e+00 9.647990171985121322e+00 9.647040550101658951e+00 9.646454868275412764e+00
1.110184168148390071e+01 1.089681287389299413e+01 1.069508339268978858e+01 1.049852259320839565e+01 1.030987990205894178e+01 1.013295385171165996e+01 9.972476778158801736e+00 9.833404330024478668e+00 9.719506474267035401e+00 9.631851532036685981e+00 9.568308558091725757e+00 9.524510461234727998e+00 9.495451254290198762e+00 9.476677471525505325e+00 9.464760761499684705e+00 9.457281885088901774e+00 9.452621649844495266e+00 9.449730720772761217e+00 9.447942336862459101e+00 9.446837908757437674e+00
1.117465294371354645e+01 1.096726403921578807e+01 1.076178538409097385e+01 1.055934766635116695e+01 1.036168711930753439e+01 1.017137900705320064e+01 9.992030295576467935e+00 9.828237222646533766e+00 9.685001477264883363e+00 9.566427510509075915e+00 9.474146500757882450e+00 9.406566347274901929e+00 9.359604550897161701e+00 9.328261330384671979e+00 9.307930342276915070e+00 9.294991305454903596e+00 9.286857258004600268e+00 9.281783457841839891e+00 9.278633925233053859e+00 9.276684775049567122e+00
1.124926881688630687e+01 1.104062677480477817e+01 1.083313911549921293e+01 1.062750533454747170e+01 1.042482367952639954e+01 1.022678380690943456e+01 1.003589927307875307e+01 9.855707138075906926e+00 9.690749489446384146e+00 9.546033914635952300e+00 9.425773144805688730e+00 9.331796860392312709e+00 9.262715545169019293e+00 9.214563489601520274e+00 9.182353621157208323e+00 9.161428359629866236e+00 9.148097719226344182e+00 9.139712116369935302e+00 9.134479294651772108e+00 9.131230233292356502e+00
1.132422755411165838e+01 1.111488579882806782e+01 1.090627085960638532e+01 1.069882940676268213e+01 1.049327112144276519e+01 1.029070998272206161e+01 1.009285865446117292e+01 9.902261421692060139e+00 9.722490357852789700e+00 9.558114979716174631e+00 9.414140564492795349e+00 9.294726311310403943e+00 9.201602599594787435e+00 9.133277722187754577e+00 9.085726396385924275e+00 9.053954438477180844e+00 9.033329695930307324e+00 9.020197187298318653e+00 9.011938899484631449e+00 9.006786577439486408e+00
