I am writing some custom code to interpolate existing .pyfrs files onto a set of user-defined coordinates supplied as a NumPy array. I've been able to get this to work for a single .pyfrs file by using a slightly modified SamplerPlugin class. Basically, the workflow involves creating a new integrator with the modified sampler plugin and then running:

```python
interpolation_results = my_integrator.completed_step_handlers[my_plugin_index](my_integrator)
```
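For context, here is roughly how I set that up. It mirrors PyFR's own CLI entry point; the file names and the `'openmp'` backend are placeholders, and my modified sampler is assumed to be enabled through its own `[soln-plugin-...]` section in the config file:

```python
from pyfr.backends import get_backend
from pyfr.inifile import Inifile
from pyfr.rank_allocator import get_rank_allocation
from pyfr.readers.native import NativeReader
from pyfr.solvers import get_solver

# Placeholder file names
mesh = NativeReader('mesh.pyfrm')
soln = NativeReader('soln.pyfrs')
cfg = Inifile.load('config.ini')

# Same construction sequence PyFR itself uses when running a case
backend = get_backend('openmp', cfg)
rallocs = get_rank_allocation(mesh, cfg)
solver = get_solver(backend, rallocs, mesh, soln, cfg)

# My modified sampler plugin happens to be the last completed-step handler
interpolation_results = solver.completed_step_handlers[-1](solver)
```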
Now I want to extend this: take a single mesh and repeat the same procedure for a large number of .pyfrs files. I could in principle do this by initializing a new integrator every time, but I want to interpolate the solution onto a very large number of points (>100k), and in that case the vast majority of the time is spent initializing the integrator. How can I instead keep the same integrator instance but load a new initial condition from a .pyfrs file? In other words, I want to be able to do something like this:
```python
from pyfr.readers.native import NativeReader
from pyfr.solvers import get_solver


def get_interpolated_soln(solver, plugin_idx=-1):
    # My plugin returns a numpy array of velocities etc. at the interpolation pts
    return solver.completed_step_handlers[plugin_idx](solver)


solns = [NativeReader(path) for path in pyfrs_paths]
interpolated_solns = []

# The solver loads my plugin; backend, rallocs, mesh, cfg as constructed above
solver = get_solver(backend, rallocs, mesh, solns[0], cfg)
interpolated_solns.append(get_interpolated_soln(solver))

for soln in solns[1:]:
    #### This is the part I'm struggling with -- no such method exists
    solver.load_initial_condition(soln)
    ####
    interpolated_solns.append(get_interpolated_soln(solver))
```
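For concreteness, here is a rough sketch of what I imagine `load_initial_condition` would have to do, based on my reading of how the initial condition is loaded at construction time. The helper name, the direct writes into the element banks, and the cache reset are all my guesses, not an existing PyFR API; it also assumes the .pyfrs arrays have the usual (nupts, nvars, neles) layout and the same element ordering as the running system:

```python
def load_soln(solver, soln, prank):
    # HYPOTHETICAL helper -- none of this is a public PyFR API.
    # soln is a NativeReader over a .pyfrs file; prank is the partition
    # rank this process was assigned by the rank allocator.
    for etype, ebank in zip(solver.system.ele_types, solver.system.ele_banks):
        mat = soln[f'soln_{etype}_p{prank}']
        # Overwrite the register that currently holds the solution
        ebank[solver._idxcurr].set(mat.astype(solver.backend.fpdtype))

    # Assumption: intg.soln is cached between steps, so clear the cache
    # so that the sampler plugin sees the newly loaded data
    solver._curr_soln = None
```

Does something along these lines look right, or is there a supported way to reinitialize the solution registers on a live integrator?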
I also tried a previously suggested approach involving exporting the .pyfrs
files to ParaView format and using the ParaView Python API, however it turned out to be extremely slow for large quantities of points. If I can get this working, since it requires no additional dependencies, I can also open a pull request to the PyFR repository, since sampling specific points after the solution is written out is currently not a supported use case in PyFR.