How Do I

Compute differences in variable(s) between two results files

We have a user with two Exodus-format results files on the same mesh who wants to subtract one or more variables and produce a new results file containing the differences.

  • A possible option is exodiff [options] file1.exo file2.exo diffile.exo. Each result variable in diffile.exo is the value of that variable in file1.exo minus the value in file2.exo. You may need to make sure that absolute tolerances are used so you get T1 - T2 instead of a relative difference.
  • Another option is to use the exo2mat program on both files to convert the data to MATLAB format. Then load both files into MATLAB, subtract the variable(s), and either use those results directly or write them out and use mat2exo to get an Exodus file with the differences in it. A similar workflow can be scripted with the exodus.py interface; see the sketch below.
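
A minimal Python sketch of the same subtraction using the exodus.py interface (not from the original wiki; the file names are placeholders, and it assumes both files have the same mesh, the same nodal variables, and the same time steps):

#!/usr/bin/env python
import exodus

e1 = exodus.exodus('file1.exo', mode='r')
e2 = exodus.exodus('file2.exo', mode='r')
names = e1.get_node_variable_names()

# copy() is assumed to write the mesh (without transient data) to a new
# file and return an exodus object opened for writing.
diff = e1.copy('diff.exo')
diff.set_node_variable_number(len(names))
for i, name in enumerate(names):
    diff.put_node_variable_name(name, i + 1)  # variable indices are 1-based

for step, time in enumerate(e1.get_times(), start=1):
    diff.put_time(step, time)
    for name in names:
        v1 = e1.get_node_variable_values(name, step)
        v2 = e2.get_node_variable_values(name, step)
        diff.put_node_variable_values(name, step,
                                      [a - b for a, b in zip(v1, v2)])

e1.close()
e2.close()
diff.close()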

Create a sideset covering all external faces in my mesh

  • io_shell --boundary_sideset {infile} {outfile}
  • The output mesh will have a sideset named boundary which includes all exposed faces of the input mesh.

Apply displacements at a particular step to the coordinates?

  • Grepos -- DEFORM STEP {step#}
  • The displacements at the specified step will be added to the coordinates of the model, and the displacements at that step will then be zeroed. All other displacements and fields are left as-is. A sample command file is shown below.
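
For example, to apply the displacements at step 10 (a hedged sketch; it assumes grepos's usual EXIT command, which writes the output database and quits, and the file names are placeholders):

grepos input.exo deformed.exo < commands.txt

where commands.txt contains:

DEFORM STEP 10
EXIT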

Delete all elements with a specified field value?

  • Algebra -- FILTER ELEMENT {variable} {LT|LE|EQ|GE|GT} {value} TIME {time_step}
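
For example, to filter elements using a hypothetical element variable named death (a sketch; SAVE ALL and the END terminator follow the Algebra usage shown elsewhere on this page):

algebra input.exo filtered.exo < commands.txt

where commands.txt contains:

SAVE ALL
FILTER ELEMENT death LT 0.5 TIME 0.0
END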

Delete node or element maps such that all ids range from 1 up to the number of nodes or elements

  • Grepos -- DELETE {NODE|ELEMENT} MAP

Convert structured mesh to unstructured mesh

  • struc_to_unstruc {input.cgns} {output.exodus}
  • Input is currently limited to CGNS, which is the only current input format that supports structured meshes.
  • Output is Exodus, but the tool could easily be modified to permit unstructured CGNS output.
  • It is primarily a tool that I used to test my CGNS reader/writer during development, but it should work for most CGNS structured meshes. Parallel support is still somewhat a work in progress, but serial should work.

Get a list of the times with better precision (e.g., 1729.34124) than the explore "list times" command gives

  • In Explore -- PRECISION {low|normal|high|#digits}
  • This also controls the precision of field value output and other floating-point data (coordinates, ...).
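
If you prefer scripting, the exodus.py interface can print the step times at full precision (a minimal sketch; the file name is a placeholder):

#!/usr/bin/env python
import exodus

e = exodus.exodus('results.exo', mode='r')
for step, time in enumerate(e.get_times(), start=1):
    print("step %d: %.17g" % (step, time))  # 17 significant digits round-trip a double
e.close()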

Get higher precision with aprepro

  • Run aprepro with the --full_precision option and it will output as many digits as needed so that the printed number matches the internal in-memory value.
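
For example (a sketch; the file names are placeholders):

aprepro --full_precision input.apr output.txt

With the option, a value such as {sqrt(2)} is written as 1.4142135623730951; without it, the output is rounded to the default output format's smaller number of significant digits.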

Strip out all but the last solution from an Exodus file?

  • ejoin -steps LAST {filename}
  • Grepos -- DELETE STEP 1 TO 29 (for example, to keep only the last of 30 steps)
  • Algebra -- SAVE ALL followed by NINTV 1 on the next line
  • epu -auto -steps LAST {filename.##.00} (if the files are decomposed for parallel)
  • io_shell --Minimum_Time {time} (keeps only steps at or after {time})

Delete all timesteps from an exodus file

  • io_shell --delete_timesteps {in_file} {out_file}

Write an input file that can be read by {codename}

  • Several of the SEACAS codes are typically run interactively and do not provide a command-file option for reading commands from a file; examples are gjoin, grepos, and explore. You can put the commands that you would enter interactively into a file and then redirect standard input from that file; the code will then read the commands from the file instead of interactively.
  • Redirect standard input from a file:
    • {codename} {mesh_input_file.exo} {optional_output_file.exo} < {input_file}
  • For example:
    • grepos input.exo output.exo < commands.txt
    • explore input.exo < command_exp.txt

Determine which element blocks are adjacent (share nodes) or which sidesets touch which element blocks?

  • io_info can do this: io_info --adjacencies {filename}
  • The output will show which blocks and surfaces are adjacent to which other blocks/surfaces.

Speed up a slow EPU run

If the files are distributed over many processors (e.g., more than 1024), the issue may be that many compute systems will not allow a process to have more than 1024 files open at any one time. In this case, epu has to repeatedly open and close each file every time it reads data from it, instead of the normal operation where it keeps all files open at once.

If there is a message in the output saying something like

Single file mode... (Max open = {max_files}, consider using the -subcycle option for faster execution...

then this is most likely the cause. The best option is to run epu in "subcycle" mode. In this mode, epu combines only a portion of the files at a time and then runs again to combine the output from each of the "portions". For example, if you are running on 10,000 processors, you would do something like:

epu -auto -subcycle=10 -join_subcycles file.e.10000.00000

Epu would then join

  • file.e.10000.00000..file.e.10000.00999 into file.e.10.00
  • file.e.10000.01000..file.e.10000.01999 into file.e.10.01
  • ...
  • file.e.10000.09000..file.e.10000.09999 into file.e.10.09
  • file.e.10.00 .. file.e.10.09 into file.e

Although this results in double the amount of data read and written, it is often much faster since epu can keep its files open continuously instead of making repeated system calls to open and close them.

epu now has the behavior that if the -auto option is used and the number of files to be combined exceeds the system's open-file limit, epu will automatically invoke the -subcycle and -join_subcycles options without the user or script needing to specify them.

For example, if the command is epu -auto file.e.4096.0000 and the maximum open-file count on the system is 1020, then epu will automatically run epu -auto -subcycle {cycle_count} -join_subcycles file.e.4096.0000.

This will make the job run significantly faster.

The other modification is that if epu is built in a parallel configuration and is run via mpirun or mpiexec, it behaves as if the -subcycle {np} option were specified and runs {np} joins simultaneously, followed by a single run on processor 0 to join the files output from the subcycle runs.
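
For example (a sketch; the process count and file name are placeholders):

mpiexec -np 8 epu -auto file.e.10000.00000

This behaves like epu -auto -subcycle 8 -join_subcycles file.e.10000.00000, with the eight partial joins running concurrently.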

Change the timestep time on a large database

I have a large mesh with a single time step with a time value of 0.05. I would rather have the initial time be 0.0 to make an initial-condition restart simpler. What is the easiest way to change the Exodus result from time_whole = 0.05 to time_whole = 0.0? I would rather not copy the entire file if possible.

The simplest method is to use the exodus.py Python interface:

#!/usr/bin/env python
import exodus

e = exodus.exodus('your_filename_here', array_type='ctype', mode='a')
e.put_time(1, 0.0)  # arguments are the step number (1-based) and the time at that step
e.close()

This updates the file in place, so it is efficient for large files.

You can also use io_modify, which has options to scale and offset the timestep times in a file. It also modifies the file in place instead of writing a new file.
