Updates from master #3

Merged · 10 commits · Sep 16, 2024
4 changes: 2 additions & 2 deletions .github/CODEOWNERS
@@ -1,4 +1,4 @@
# The following owners will be the default owners for everything
# in the repo unless a later match takes precedence.
sportran/* @lorisercole
sportran_gui/* @rikigigi
sportran/ @lorisercole
sportran_gui/ @rikigigi
2 changes: 1 addition & 1 deletion .github/workflows/ci.yml
@@ -39,7 +39,7 @@ jobs:

strategy:
matrix:
python-version: ['3.6', '3.7', '3.8', '3.9', '3.10']
python-version: ['3.7', '3.8', '3.9', '3.10', '3.11']

steps:
- name: Checkout
2 changes: 1 addition & 1 deletion .github/workflows/publish.yml
@@ -60,7 +60,7 @@ jobs:

strategy:
matrix:
python-version: ['3.6', '3.7', '3.8', '3.9', '3.10']
python-version: ['3.7', '3.8', '3.9', '3.10', '3.11']

steps:
- name: Checkout
6 changes: 3 additions & 3 deletions MANIFEST.in
@@ -3,9 +3,9 @@ include LICENSE.txt
include setup.json
include sportran/README.md
include sportran/metadata.json
include sportran/utils/plot_style.mplstyle
exclude sportran/utils/blocks.py
exclude sportran/utils/blockanalysis.py
exclude sportran/utils/obsolete/
include sportran_gui/README_GUI.md
include sportran_gui/assets/icon.gif
include sportran_gui/assets/languages.json
include sportran/plotter/styles/api_style.mplstyle
include sportran/plotter/styles/cli_style.mplstyle
2 changes: 1 addition & 1 deletion README.md
@@ -9,7 +9,7 @@ A code to estimate transport coefficients from the cepstral analysis of a multi-
https://sportran.readthedocs.io

### References
- [Ercole L., Bertossa R., Bisacchi S., and Baroni S., "_SporTran: a code to estimate transport coefficients from the cepstral analysis of (multivariate) current time series_", *arXiv*:2202.11571 (2022)](https://arxiv.org/abs/2202.11571), submitted to *Comput. Phys. Commun.*
- [Ercole L., Bertossa R., Bisacchi S., and Baroni S., "_SporTran: a code to estimate transport coefficients from the cepstral analysis of (multivariate) current time series_", *Comput. Phys. Commun.*, 108470](https://doi.org/10.1016/j.cpc.2022.108470), [*arXiv*:2202.11571 (2022)](https://arxiv.org/abs/2202.11571)
- (cepstral analysis) [Ercole, Marcolongo, Baroni, *Sci. Rep.* **7**, 15835 (2017)](https://doi.org/10.1038/s41598-017-15843-2)
- (multicomponent systems) [Bertossa, Grasselli, Ercole, Baroni, *Phys. Rev. Lett.* **122**, 255901 (2019)](https://doi.org/10.1103/PhysRevLett.122.255901) ([arXiv](https://arxiv.org/abs/1808.03341))
- (review) [Baroni, Bertossa, Ercole, Grasselli, Marcolongo, *Handbook of Materials Modeling* (2018)](https://doi.org/10.1007/978-3-319-50257-1_12-1) ([arXiv](https://arxiv.org/abs/1802.08006))
8 changes: 4 additions & 4 deletions setup.json
@@ -1,21 +1,21 @@
{
"name": "sportran",
"version": "1.0.0rc2",
"version": "1.0.0rc4",
"author": "Loris Ercole, Riccardo Bertossa, Sebastiano Bisacchi",
"author_email": "[email protected]",
"description": "Cepstral Data Analysis of current time series for Green-Kubo transport coefficients",
"license": "GPL 3",
"url": "https://github.com/sissaschool/sportran",
"keywords": "cepstral data analysis thermal conductivity transport coefficients physics green-kubo",
"python_requires": ">=3.6.*, <4",
"python_requires": ">=3.7, <4",
"classifiers": [
"Development Status :: 5 - Production/Stable",
"Programming Language :: Python",
"Programming Language :: Python :: 3.6",
"Programming Language :: Python :: 3.7",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"License :: OSI Approved :: GNU General Public License v3 (GPLv3)",
"Operating System :: OS Independent",
"Intended Audience :: Science/Research",
@@ -42,7 +42,7 @@
],
"tests": [
"pytest>=5.1.0",
"pytest-regressions==2.2.0",
"pytest-regressions>=2.4.2",
"pandas>=1.1.0",
"testbook",
"ipykernel"
38 changes: 27 additions & 11 deletions sportran/analysis.py
@@ -18,7 +18,7 @@
try:
import sportran as st
except ImportError:
raise ImportError('Cannot locate sportran.')

[GitHub Actions annotation on line 21 in sportran/analysis.py — check failure in tests (3.10) and tests (3.11): Cannot locate sportran.]
from sportran.utils import log

@@ -130,7 +130,7 @@
input_file_group.add_argument('--split', type=int, default=1,
help='Build a time series with n*m independent processes (n is the number of processes of the original timeseries, m is the number provided with --split). The length of the new time series will be [original length]/m. (optional)')

lammps_group = input_file_group.add_argument_group('LAMMPS format settings')
lammps_group = parser.add_argument_group('LAMMPS input file format settings')
lammps_group.add_argument('--run-keyword', type=str,
help='Keyword that identifies the run to be read: a specific comment line placed just before the run command (only for "lammps" format)')
lammps_group.add_argument('--structure', type=str,
@@ -211,6 +211,18 @@
return 0


def concatenate_if_not_none_with_labels(concat, labels=None):
out_arr = []
out_label = ''
if labels is None:
labels = ['' for i in concat]
for arr, label in zip(concat, labels):
if arr is not None:
out_arr.append(arr)
out_label += f' {label}'
return np.concatenate([out_arr], axis=1).transpose(), f'{out_label}\n'


def run_analysis(args):

inputfile = args.inputfile
@@ -472,8 +484,8 @@

if not no_text_out:
outfile_name = output + '.psd.dat'
outarray = np.c_[j.freqs_THz, j.psd, j.fpsd, j.logpsd, j.flogpsd]
outfile_header = 'freqs_THz psd fpsd logpsd flogpsd\n'
outarray, outfile_header = concatenate_if_not_none_with_labels(
[j.freqs_THz, j.psd, j.fpsd, j.logpsd, j.flogpsd], ['freqs_THz', 'psd', 'fpsd', 'logpsd', 'flogpsd'])
np.savetxt(outfile_name, outarray, header=outfile_header, fmt=fmt)
if j.MANY_CURRENTS:
outfile_name = output + '.cospectrum.dat'
@@ -483,15 +495,17 @@
np.savetxt(outfile_name, np.column_stack([outarray.real, outarray.imag]), fmt=fmt)

outfile_name = output + '.cospectrum.filt.dat'
outarray = np.c_[j.freqs_THz,
j.fcospectrum.reshape(
(j.fcospectrum.shape[0] * j.fcospectrum.shape[1], j.fcospectrum.shape[2])).transpose()]
np.savetxt(outfile_name, np.column_stack([outarray.real, outarray.imag]), fmt=fmt)
if j.fcospectrum is not None:
outarray = np.c_[j.freqs_THz,
j.fcospectrum.reshape((j.fcospectrum.shape[0] * j.fcospectrum.shape[1],
j.fcospectrum.shape[2])).transpose()]
np.savetxt(outfile_name, np.column_stack([outarray.real, outarray.imag]), fmt=fmt)

if resample:
outfile_name = output + '.resampled_psd.dat'
outarray = np.c_[jf.freqs_THz, jf.psd, jf.fpsd, jf.logpsd, jf.flogpsd]
outfile_header = 'freqs_THz psd fpsd logpsd flogpsd\n'
outarray, outfile_header = concatenate_if_not_none_with_labels(
[jf.freqs_THz, jf.psd, jf.fpsd, jf.logpsd, jf.flogpsd],
['freqs_THz', 'psd', 'fpsd', 'logpsd', 'flogpsd'])
np.savetxt(outfile_name, outarray, header=outfile_header, fmt=fmt)

outfile_name = output + '.cepstral.dat'
@@ -623,7 +637,8 @@
"""Write old binary format."""
opts = {'allow_pickle': False}
optsa = {'axis': 1}
outarray = np.c_[self.j_freqs_THz, self.j_fpsd, self.j_flogpsd, self.j_psd, self.j_logpsd]
outarray, _ = concatenate_if_not_none_with_labels(
[self.j_freqs_THz, self.j_fpsd, self.j_flogpsd, self.j_psd, self.j_logpsd])
np.save(output + '.psd.npy', outarray, **opts)

if self.j_cospectrum is not None:
@@ -634,7 +649,8 @@
outarray = np.c_[self.j_freqs_THz, self.j_fcospectrum.reshape(-1, self.j_fcospectrum.shape[-1]).transpose()]
np.save(output + '.cospectrum.filt.npy', outarray, **opts)

outarray = np.c_[self.jf_freqs_THz, self.jf_psd, self.jf_fpsd, self.jf_logpsd, self.jf_flogpsd]
outarray, _ = concatenate_if_not_none_with_labels(
[self.jf_freqs_THz, self.jf_psd, self.jf_fpsd, self.jf_logpsd, self.jf_flogpsd])
np.save(output + '.resampled_psd.npy', outarray, **opts)

outarray = np.c_[self.jf_cepf_logpsdK, self.jf_cepf_logpsdK_THEORY_std, self.jf_cepf_logtau,
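
For reference, a minimal sketch of how the new `concatenate_if_not_none_with_labels` helper behaves; the input arrays and labels below are illustrative only, not taken from the code above.

```python
import numpy as np

def concatenate_if_not_none_with_labels(concat, labels=None):
    # Same logic as the helper added in sportran/analysis.py above: drop the
    # None entries, keep the matching labels, and stack the rest column-wise.
    out_arr = []
    out_label = ''
    if labels is None:
        labels = ['' for i in concat]
    for arr, label in zip(concat, labels):
        if arr is not None:
            out_arr.append(arr)
            out_label += f' {label}'
    return np.concatenate([out_arr], axis=1).transpose(), f'{out_label}\n'

# Illustrative inputs: 'fpsd' is None here, so it is skipped in both the
# output array and the header string.
freqs = np.array([0.0, 1.0, 2.0])
psd = np.array([3.0, 2.0, 1.0])
outarray, header = concatenate_if_not_none_with_labels(
    [freqs, psd, None], ['freqs_THz', 'psd', 'fpsd'])
print(outarray.shape)   # (3, 2): one row per frequency, one column per kept array
print(header)           # ' freqs_THz psd' followed by a newline
```

In the writers above, this lets columns that are `None` (for example a missing `flogpsd`) be skipped instead of breaking the previous `np.c_[...]` concatenation.
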
78 changes: 58 additions & 20 deletions sportran/i_o/read_lammps_log.py
@@ -50,6 +50,7 @@
import numpy as np
from time import time
from sportran.utils import log
from io import BytesIO, StringIO


def is_string(string):
@@ -67,21 +68,51 @@ def is_vector_variable(string):
return bracket


def file_length(filename):
def _get_file_length(f):
i = -1
with open(filename) as f:
for i, l in enumerate(f, 1):
pass
for i, l in enumerate(f, 1):
pass
return i


def data_length(filename):
def file_length(file):
i = -1
if isinstance(file, bytes):
f = BytesIO(file)
i = _get_file_length(f)
f.close()
elif isinstance(file, str):
with open(file) as f:
i = _get_file_length(f)
else:
raise ValueError('Unsupported data type for file: {}'.format(type(file)))

return i


def _get_data_length(f):
i = 0
while is_string(f.readline().split()[0]): # skip text lines
pass
for i, l in enumerate(f, 2):
pass

return i


def data_length(file):
i = 0
with open(filename) as f:
while is_string(f.readline().split()[0]): # skip text lines
pass
for i, l in enumerate(f, 2):
pass

if isinstance(file, bytes):
f = BytesIO(file)
i = _get_data_length(f)
f.close()
elif isinstance(file, str):
with open(file) as f:
i = _get_data_length(f)
else:
raise ValueError('Unsupported data type for file: {}'.format(type(file)))

return i


@@ -119,7 +150,7 @@ class LAMMPSLogFile(object):

#############################################################################
Example script:
jfile = LAMMPSLogFile(filename, run_keyword='PRODUCTION RUN')
jfile = LAMMPSLogFile(data_file, run_keyword='PRODUCTION RUN')
jfile.read_datalines(NSTEPS=100, start_step=0, select_ckeys=['Step', 'Temp', 'flux'])
print(jfile.data)

@@ -129,16 +160,20 @@ class LAMMPSLogFile(object):
#############################################################################
"""

def __init__(self, filename, run_keyword=None, select_ckeys=None, **kwargs):
def __init__(self, data_file, run_keyword=None, select_ckeys=None, **kwargs):
"""
LAMMPSLogFile(filename, run_keyword, select_ckeys, **kwargs)
LAMMPSLogFile(data_file, run_keyword, select_ckeys, **kwargs)

**kwargs:
endrun_keyword [default: 'Loop time']
group_vectors [default: True]
GUI [default: False]
"""
self.filename = filename

if not isinstance(data_file, (bytes, str)):
raise ValueError('Unsupported data type for file: {}'.format(type(data_file)))

self.data_file = data_file
self.run_keyword = run_keyword
self.select_ckeys = select_ckeys
self.endrun_keyword = kwargs.get('endrun_keyword', 'Loop time')
@@ -152,14 +187,14 @@ def __init__(self, filename, run_keyword=None, select_ckeys=None, **kwargs):
global FloatProgress, display

self._open_file()
self.MAX_NSTEPS = file_length(self.filename)
self.MAX_NSTEPS = file_length(self.data_file)
self._read_ckeys(self.run_keyword, group_vectors)
self.ckey = None
return

def __repr__(self):
msg = 'TableFile:\n' + \
' filename: {}\n'.format(self.filename) + \
' data_file: {}\n'.format(self.data_file) + \
' all_ckeys: {}\n'.format(self.all_ckeys) + \
' select_ckeys: {}\n'.format(self.select_ckeys) + \
' used ckey: {}\n'.format(self.ckey) + \
@@ -169,10 +204,13 @@ def __repr__(self):

def _open_file(self):
"""Open the file."""
try:
self.file = open(self.filename, 'r')
except:
raise ValueError('File does not exist.')
if isinstance(self.data_file, bytes):
self.file = StringIO(BytesIO(self.data_file).read().decode('utf-8'))
elif isinstance(self.data_file, str):
try:
self.file = open(self.data_file, 'r')
except:
raise ValueError('File does not exist.')
return

def _read_ckeys(self, run_keyword, group_vectors=True):
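
A minimal usage sketch of the path-or-bytes input these changes add to `LAMMPSLogFile`; the log file name `log.lammps` is illustrative, and the read calls follow the example in the class docstring above.

```python
from sportran.i_o.read_lammps_log import LAMMPSLogFile

# Passing a path (str): the log file is opened from disk, as before.
jfile = LAMMPSLogFile('log.lammps', run_keyword='PRODUCTION RUN')

# Passing raw bytes: _open_file() wraps the content in an in-memory
# StringIO buffer, so no file on disk is needed.
with open('log.lammps', 'rb') as f:
    raw_log = f.read()
jfile_from_bytes = LAMMPSLogFile(raw_log, run_keyword='PRODUCTION RUN')

# Reading works the same way in both cases (example from the class docstring).
jfile.read_datalines(NSTEPS=100, start_step=0, select_ckeys=['Step', 'Temp', 'flux'])
print(jfile.data)
```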