

Stellar Evolution and Rotation in Four dimensions.


⚠️ This code is very early in development and should not be used for scientific purposes yet.


Introduction

SERiF is a stellar structure and evolution program written in C++. This README will eventually provide guidance on how end users (i.e. non-SERiF developers) can use the code. However, due to the early stage of development we are in, this README is currently intended only for developers. Its purpose is to provide an overview of the build system, development philosophy, development process, and current state of the code. Further, general information about tasks which need doing will also be included (though for more detailed information on this please refer to the issue tracker or the 4DSSE project board).

Building

SERiF uses meson as its build system. It may be useful to understand why we selected meson before we dive into detailed build instructions. The headline for "why meson" is ease of use for us. A primary goal of SERiF is that it should be easier to use than many current generation SSE code bases. This effectively means that we are looking for a "1 click" install process. Meson does not, out of the box, provide this. However, it does make it easier for us to build such an installation system around it.

In general all meson projects are built in a similar manner. The challenge, for end users, is often installation and linking of dependencies. Let us therefore first look at what dependencies SERiF has.

Dependencies

There are only a small number of dependencies which must be installed by the user:

  1. pip
  2. clang/gcc (clang >= 16, gcc >= 13)
  3. ninja
  4. meson

Further, if you use the mk script, ninja and meson can even be installed for you!
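If you prefer to install them yourself, both meson and ninja are published on PyPI, so (assuming pip points at the Python environment you intend to use) the following is usually enough:

pip install meson ninja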

There are a number of dependencies which are automatically installed by our build system. These are all stored in the subprojects directory:

  1. MFEM - MFEM is a finite element modeling library developed primarily by a team at Lawrence Livermore National Laboratory. MFEM is used as our primary solver.
  2. opat-core - opat-core is a library for I/O operations related to the opat file format. All tables used by SERiF are in the opat format. opat-core is maintained by the 4D-STAR collaboration and is very well integrated with a meson build system.
  3. boost - boost is a C++ library which provides a number of useful utilities. It is used in SERiF for a number of tasks including file I/O and string manipulation, and we use the ODE solver from boost. Boost is complex to install and is a potential pain point; we will address this in more detail below.
  4. pybind11 - pybind11 is a library which allows us to create Python bindings for C++ code. This is what lets SERiF be used from Python.
  5. quill - quill is a C++ logging library. It is used to provide very fast logging functionality in SERiF.
  6. yaml-cpp - yaml-cpp is a C++ library for parsing YAML files. It is used to parse the configuration files used by SERiF.

All of these could be installed using the system package manager, built from source, or simply included manually (in the case of header-only libraries). However, we have chosen to make extensive use of the wrap system which meson provides to automatically fetch and build as many dependencies as we can. This significantly simplifies the build process for end users. Great care should be taken when adding any new dependencies to SERiF, as we must maintain this ease of use.
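For reference, a wrap is just a small file in the subprojects directory which tells meson where to fetch a dependency and how to build it. The example below is only a sketch of the general format; the URL, revision, and provided dependency name are illustrative rather than copied from SERiF:

# subprojects/yaml-cpp.wrap (illustrative sketch)
[wrap-git]
url = https://github.com/jbeder/yaml-cpp.git
revision = 0.8.0
depth = 1

[provide]
yaml-cpp = yaml_cpp_dep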

Boost and other complex dependencies

Certain dependencies are either too complex or too expensive to reasonably build alongside SERiF; these must therefore be installed system-wide. It is perfectly acceptable that a user might have these installed already; however, we still want to maintain the same "one click setup" where users do not need to think about dependencies. Therefore, we also include a series of scripts which will automatically detect the system the user is on and install dependencies, such as boost, for them. Meson is then capable of detecting and using these system installations.
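Once a system-wide install exists, picking it up from the build files is a one-liner. The snippet below is only a sketch (not copied from SERiF's actual meson.build) of how meson resolves a system Boost:

# sketch of a meson.build fragment; names are illustrative
boost_dep = dependency('boost', required : true)
example_lib = library('example', 'example.cpp', dependencies : [boost_dep])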

Compiling

For the moment let us assume that all "system" dependencies have been installed (i.e. boost; one can check whether a dependency is a system dependency by looking for a .wrap file in the subprojects directory. If a .wrap file exists then meson will handle that dependency automatically and nothing need be done). In that case, we can use the standard meson build commands.

meson setup build
meson compile -C build
meson test -C build

Meson uses an out-of-source build system, so all build artifacts will be in the build directory. The first command above sets up the build directory and configures the build itself (if you are familiar with cmake this is similar to running the cmake command). The second command compiles the code, using the ninja backend by default. The third command runs the tests. Note that this will only run the tests which are in the tests directory.

If you wish to run a test manually (i.e. without the meson test command) you must set the MESON_SOURCE_ROOT environment variable to the root directory of SERiF. Then you can navigate to SERiF/build/tests/<module_name>/<test_name> and run the test as you would any executable.
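For example (a sketch; substitute the real module and test names, and note that the exact layout under build/tests may differ slightly):

export MESON_SOURCE_ROOT=/path/to/SERiF
cd $MESON_SOURCE_ROOT/build/tests/<module_name>
./<test_name>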

Note that this will automatically build all dependencies defined in the meson wrap system. This is a key feature of meson and is one of the reasons we selected it as our build system.

Building with the mk script

We also provide a mk script which is a wrapper around both the meson build system and the automatic dependency installation scripts we provide. This is intended to be a "one click" install system. All the automatic installation scripts are stored in SERiF/build-config/<dependency_name>. You can then use the mk script to build SERiF. This will automatically check for all dependencies, try to install them if not found, fail over safely if they cannot be installed, and then build the code once all dependencies are found. The mk script is intended to be the method which end users will use to build SERiF.

To use mk it is as simple as running mk in a bash compatible shell (currently tested on bash and zsh)

./mk

If you want to build without running the test suite, use the command below. Note that this will neither build nor run the tests (meaning you will not be able to run them manually).

./mk --noTest

If you do not want to use the mk script you can also use the 4DSSEConsole, which is a simple bash script that can help with building and debugging

./4DSSEConsole.sh

Finally, if you just want to use meson directly you can do the following

To build without running tests

meson setup build
meson compile -C build

To additionally run the tests

meson test -C build

Installing the python module

SERiF also provides a python interface. At the moment this is somewhat limited; however, we intend that it will eventually provide a full interface to the code. Installation is very easy and requires the same dependencies as the C++ code. From the root SERiF directory you can use pip to install the serif module.

pip install .

This will take a long time to run since this process configures, compiles, and links the entire C++ code base every time it is run.

If you are developing the python interface it makes more sense to use incremental builds. To do this you can use a few pip flags

pip install -e . --no-build-isolation -vv

The first time you run this it will take just as long, but subsequent runs will be much faster.

Python Usage

Once the serif module is installed it is pretty straightforward to use. Eventually, this will be even easier, as currently we simply expose raw C++ functions and classes to Python. This means that there is, at times, a bit of boilerplate needed.

Some Examples

Using the EOS module is a good example of the kinds of boilerplate needed.

from serif.eos import EOSInput
from serif.eos import get_helm_eos
from serif.eos import EOSio

# Load the Helmholtz EOS table from disk
eosfile = EOSio("./path/to/helm.dat")
helmTable = eosfile.getTable()

# Mass fractions, atomic weights, and charges for a simple H/He/C mixture
xmass = [0.75, 0.23, 0.02]
aion = [1.0, 4.0, 12.0]
zion = [1.0, 2.0, 6.0]

# Compute the mean atomic weight (abar) and mean charge (zbar)
asum = 0
zsum = 0
for x, a, z in zip(xmass, aion, zion):
    asum += x/a
    zsum += (x*z)/a

# Assemble the EOS input state
q = EOSInput()
q.abar = 1.0/asum
q.zbar = q.abar * zsum
q.rho = 1.0e6
q.T = 1.0e8

# Evaluate the Helmholtz EOS at this state
r = get_helm_eos(q, helmTable)

print(r.ye, r.etot, r.sgas, ...)

Note how the serif module does not currently have any ability to use the resource manager, and instead a path must be manually provided. Eventually, we will add a resource manager to the python interface which will allow for easier access to resources.

Solving an n=1.5 polytrope is another good example. This demonstrates the deep integration with the C++ code as well as with our dependencies (like MFEM)

from serif import config
from serif.polytrope import PolySolver

# Load the test configuration and read the polytropic index (default 0.0)
config.loadConfig('../../testsConfig.yaml')
n = config.get("Tests:Poly:Index", 0.0)

# Build and solve the polytrope
polytrope = PolySolver(n, 1)
polytrope.solve()

Note how simple this is to use; this is a very explicit design goal. One caveat is that we have not currently implemented a way to actually access the solution other than to have GLVis running when you run this code.

Test Configuration

Some tests use config variables set up in tests/testsConfig.yaml, specifically for things like the GLVis host and port. You should configure those to point to whatever host you are running GLVis on.
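The file is plain YAML; the sketch below shows the general shape, but the GLVis key names and values here are guesses, so check the file itself for the real ones:

# sketch of tests/testsConfig.yaml; key names under GLVis are assumed
Tests:
  Poly:
    Index: 1.5
  GLVis:
    Host: localhost
    Port: 19916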

Development Philosophy

A few core philosophies of SERiF development are

  1. Modularity
  2. Modernity
  3. Testability
  4. Usability
  5. Documentation

To briefly summarize the importance of each of these

Modularity

  1. All code is organized into modules. We aim to maintain the minimum number of modules possible while still enforcing that each module has a well-defined job. This is obviously somewhat nebulous so look at the current modules which exist to get a sense for how we distribute things. When adding a feature to the code first think carefully if it can or should exist in an already extant module.
  2. Within a module all code should maintain a very well documented and robust public interface. This can and will change dramatically during development but keep in mind that eventually we will likely want to "lock" our public interface so try to design it with that in mind.
  3. Careful thought should be given to what kind of module is being added. That is to say, is it a physics module, utility module, I/O module, infrastructure module, or something else? This will determine how and when the module is built during compilation. For example, all physics modules should be built after all utility modules.

Modernity

  1. We use C++23 as the target language standard. This is a very intentional choice, and it means that code committed should not be written in older C++ standards where possible. Certainly C++98 code should be avoided.
  2. Some effects of this are that, for example, we do not allow raw pointer allocation using new or malloc. C++ provides very powerful tools which are easy to use and dramatically increase memory safety (such as std::unique_ptr<Type>() or std::shared_ptr<T>()). If you ever find yourself wanting to allocate a raw pointer on the heap, consider smart pointers instead (see the sketch after this list).
  3. All of that being said, we do interface with external libraries which will sometimes return raw, C-style pointers. In those cases, of course, you need to do what needs to be done. Just make sure to document things carefully and note in a comment that there is an enhanced possibility of a memory leak at those locations.
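As a small illustration of the pattern described in point 2, here is a hedged sketch; the EOSState type is made up purely for the example:

#include <memory>

// Hypothetical type, purely for illustration
struct EOSState {
    double rho;
    double T;
};

// Preferred: ownership is explicit and the object is freed automatically
std::unique_ptr<EOSState> makeState(double rho, double T) {
    auto state = std::make_unique<EOSState>();
    state->rho = rho;
    state->T = T;
    return state;  // ownership transfers to the caller
}

// Avoid: raw new requires a matching delete and is easy to leak
// EOSState* state = new EOSState;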

Testability

  1. It is unrealistic, and likely unproductive, to expect fully test driven development or 100% code coverage. However, we do want to maintain a high degree of testability. As such the recommended development approach is to develop against tests first. That is to say, instead of a single entry point binary, we have a testing module for each module. We run the code for each module through that testing module. Eventually, there will be entry points for users. However, that is very far down the line.
  2. While we do not enforce any specific amount of code coverage we do require that all modules have some degree of testing that a reasonable astronomer would look at and say "that looks well tested". This is a very subjective measure, but we will be using this as a guideline for code reviews.
  3. Testing for all C++ code is done using gtest. Be cognizant that floating point values will not be the same from machine to machine, and therefore using EXPECT_EQ for floating point values will likely result in test failures. gtest provides tools for exactly this; use them (see the sketch after this list).
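To make point 3 concrete, a minimal sketch of the floating-point assertions gtest provides (the suite and test names are made up):

#include <gtest/gtest.h>
#include <cmath>

TEST(ExampleSuite, FloatingPointComparison) {
    double computed = std::sqrt(2.0) * std::sqrt(2.0);
    EXPECT_DOUBLE_EQ(computed, 2.0);      // ULP-based comparison, not bitwise equality
    EXPECT_NEAR(computed, 2.0, 1.0e-12);  // explicit absolute tolerance
}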

Usability & Documentation

  1. SERiF is intended to be used by astronomers, not astronomers who are also developers. When developing new code or refactoring existing code keep this in mind. This means that public interface methods and functions should have very clear names (note that very clear does not mean long, it means clear). Every public method and function should include a doxygen comment which describes, at minimum, its purpose, inputs, and outputs. Ideally any exceptions which can be thrown should also be documented. Further, we strongly recommend that every time a doxygen comment is written an example for that method, function, or class is also written. This will give us a clear example set from the outset.
  2. There are times when complex code is required. In these cases that code should likely be private to the module.
  3. All modules will eventually be tied into the python interface. Write the public module with that in mind.

Current Status

There are two primary modules currently under construction, many which need to be worked on, and a few which have either been finished or are at least at their minimum viable state (MVS). Modules where the current assignee is marked as N/A are currently not being worked on. This does not mean that they do not need work or will not be worked on in the future. But if a new developer wants somewhere to start, those are good places to begin.

Name Key

  • A.D. : Aaron Dotter
  • E.B. : Emily Boudreaux
| Module Name | Status | Description | Current Assignee |
| --- | --- | --- | --- |
| config | Complete | This module handles all configuration files. It is responsible for parsing the YAML files. | N/A |
| composition | MVS | This module tracks a general composition object, allowing for arbitrary species tracking and mixing. | N/A |
| const | Complete | This module contains all physical constants from the CODATA 2022 data release as well as some astronomical constants from IAU 2015. | N/A |
| eos | MVS | This module implements both a general interface for equations of state as well as specific equations of state. Currently the Helmholtz equation of state is the only one implemented. | N/A |
| meshIO | Complete | This module handles all I/O for the mesh. It is responsible for reading and writing the mesh to disk. This is used to interface with the resource manager, which allows for easier configuration on the user end. | N/A |
| misc | WIP | This is a catch-all module for miscellaneous things. Generally we should avoid putting stuff in here, but sometimes (such as for debugging macros) it is useful. | N/A |
| network | MVS | This module handles the nuclear network and burning calculations. Currently only Frank Timmes' Approx8 network is implemented. It also implements a general interface for nuclear networks so that other networks can be added. | N/A |
| opac | WIP | This module handles opacity calculation / interpolation. | A.D. |
| polytrope | MVS | This module computes polytropic models which are used as initial states for solving the structure equations. | E.B. |
| probe | Complete | This module implements the probe namespace which is used to hold functions for probing the current state of the code (stuff like whydt in MESA). | N/A |
| python | WIP | This module contains all code relevant to the Python interface. All interface code is organized in submodules within this (such as python/config). | E.B. |
| resource | Complete | This module handles loading resources from disk in a clean fashion. The key justification here is to avoid users having to explicitly set environment variables, and also to make loading resources anywhere in the code easier to handle. | N/A |
| types | Complete | This module implements custom datatypes for SERiF which do not cleanly fall into any other module (i.e. datatypes should not go in misc). | N/A |

Future Work

This is a non-comprehensive list of things which still need to be done (and which are not being actively developed). If you pick up one of these projects go ahead and edit the README to mark the corresponding checkbox as done ([x])

  • Extended nuclear reaction network
  • Extend the equation of state module
  • Atmospheric boundary conditions
  • Mixing length
  • Magnetic fields
  • Rotation
  • Structure equations
  • Time stepping
  • Curvilinear finite elements
  • All sorts of details that I cannot even begin to enumerate at this stage

When thinking about picking up new modules it is important to think first about: one, what that work would depend on, and two, what other work depends on it. For example, there would be very little point in working on time stepping until there is a stable structure solver, whereas the nuclear network and equation of state can be extended independently of the primary solver (as is the case with the microphysics in general). If you want to start work and need some ideas about where to begin, reach out to Aaron or Emily.

Developing

There is a detailed development document which all 4D-STAR collaboration members should have access to. First familiarize yourself with that. I will note that we do not treat that as a 100% hard and fast rule. However, we do try to stick to that as a general rule. I will summarize some best practices which I find particularly helpful (outside of the actual source code itself)

git

All development should be done on your own fork of the repository. You can organize these forks in whatever way you want, they are your fork after all. However, I find it helpful to organize my branches into a few categories

  • [feature/<feature name>] - This is a new feature which I am working on. (i.e. git checkout -b feature/python/eos_interface)
  • [bugfix/<bug name>] - This is a bug which I am working on. (i.e. git checkout -b bugfix/python/eos_interface_memory_overflow)
  • [perf/<perf name>] - This is a performance improvement which I am working on. (i.e. git checkout -b perf/python/eos_interface_cache)

What I like about this is that it effectively creates a tree structure for my branches. However, as said before, on your own fork use any organizational scheme you find effective for yourself.

When you have finished writing code and have tested it, open a pull request (PR) to the main branch for SERiF in the 4D-STAR organization. There is a template present for this which should automatically populate if you use the GitHub web interface. Not every PR will need to fill content in every field of the template; use your own judgment for this. Further, most PRs will, for the sake of time, likely not be reviewed (major physics modules will be); however, the first PR by any new developer will need to be reviewed by one of the current experienced developers (currently Aaron and Emily, though if and when new developers come on board with this part of the project that list will hopefully grow).

Code Style

Again, for this primarily follow the style guide in the developer assets repository (owned by the 4D-STAR organization). In general though, use clear and concise names for variables, functions, methods, and classes. Adhere to consistent naming conventions (preferably also consistent with what is already in the code).

Note that we take an Object-Oriented Programming (OOP) approach to much of the code, but this is a pragmatic not a dogmatic choice. That is to say that OOP is sometimes a good tool for a job and other times it is not. Because of this we do have a good number of objects floating around but you, as a developer, should not feel as if you need to use an OOP design if you do not think it would be best. One thing that we will enforce is that overly complicated inheritance hierarchies are not allowed. If you find yourself writing a class which has more than 2 levels of inheritance then you should probably consider using composition instead.

Environment

The current team builds with both clang and gcc, we will maintain compatibility with both of these compilers. As such when developing new code you should, before opening a PR, test your compilation with gcc and clang. Meson makes this super easy as you can simply have two build directories (i.e. build-gcc and build-clang) and run the same commands in both.

CXX=g++ meson setup build-gcc
CXX=clang++ meson setup build-clang
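and then compile and test each configuration with the usual commands, for example:

meson compile -C build-gcc && meson test -C build-gcc
meson compile -C build-clang && meson test -C build-clang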

Aside from this I find it useful to have the MESON_SOURCE_ROOT environment variable set, since this makes it easy to run individual executables. Therefore, in your shell profile file (~/.bashrc, ~/.zshrc, etc.) add the following line

export MESON_SOURCE_ROOT=/path/to/SERiF

Other

Should you have any questions not answered here or in the development guidelines please feel free to reach out to Emily.
