

Correct way to use LFS? #541

Closed
petri-lipponen-suunto opened this issue Mar 16, 2021 · 2 comments

Comments

@petri-lipponen-suunto

Hello,

I'm using LFS v2.2.1 to write large data streams to a 1 Gb NAND device. I've gotten the basic stuff working surprisingly well (I only had experience with direct EEPROM access before), though I'm still struggling with some edge cases and haven't found the information I need in the documentation (or haven't understood it correctly). My SPI/NAND layer seems to work based on the few tests I've thrown at it, but I'm not 100% sure.

My basic pattern for storing the data is:

  1. lfs_file_opencfg the file with one lfs_attr that points to the last-modified time as uint32 UTC seconds
  2. every time I have new data, update the UTC uint32 variable and then call lfs_file_write with the data
  3. keep doing the above until the data ends (or lfs_file_write returns an error). When the data ends, I call lfs_file_sync followed by lfs_file_close.
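The steps above can be sketched roughly like this. This is a hedged sketch against the littlefs v2 API (lfs_file_opencfg with a struct lfs_file_config carrying custom attributes), assuming an already-mounted lfs_t; `next_sensor_chunk()`, `utc_now()`, and the attribute type `0x75` are hypothetical application-side names, not part of littlefs.

```c
#include <stdint.h>
#include "lfs.h"

#define ATTR_UTC 0x75  /* hypothetical user attribute type for UTC seconds */

/* Application-provided stubs (not part of littlefs). */
extern lfs_ssize_t next_sensor_chunk(uint8_t *buf, lfs_size_t size);
extern uint32_t utc_now(void);

static uint32_t last_modified_utc;

int record_stream(lfs_t *lfs, const char *path) {
    /* The attribute buffer is read by littlefs when the file is committed,
     * so updating last_modified_utc before each write is enough. */
    struct lfs_attr attrs[] = {
        { .type = ATTR_UTC,
          .buffer = &last_modified_utc,
          .size = sizeof(last_modified_utc) },
    };
    struct lfs_file_config fcfg = {
        .attrs = attrs,
        .attr_count = 1,
    };

    lfs_file_t file;
    int err = lfs_file_opencfg(lfs, &file, path,
                               LFS_O_WRONLY | LFS_O_CREAT | LFS_O_APPEND,
                               &fcfg);
    if (err) {
        return err;
    }

    uint8_t buf[256];
    lfs_ssize_t len;
    while ((len = next_sensor_chunk(buf, sizeof(buf))) > 0) {
        last_modified_utc = utc_now();
        lfs_ssize_t res = lfs_file_write(lfs, &file, buf, len);
        if (res < 0) {
            lfs_file_close(lfs, &file);
            return (int)res;
        }
    }

    /* Commit data and attribute, then close. */
    err = lfs_file_sync(lfs, &file);
    int cerr = lfs_file_close(lfs, &file);
    return err ? err : cerr;
}
```

Note that the attrs and fcfg storage must stay valid for the lifetime of the open file, since littlefs keeps pointers to them rather than copying.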

Now the issues I'm trying to solve:

  1. If I enumerate the folder where I'm storing the files while recording is ongoing, I get lfs_getattr error -61 (LFS_ERR_NOATTR). If I stop recording (lfs_file_sync followed by lfs_file_close), lfs_getattr returns the correct UTC time. Does this mean I need to call lfs_file_sync manually every now and then to keep the file attribute updated? How often should it be called, i.e. how heavy an operation are we talking about here?

  2. If I keep the data stream coming until the device is full, I get LFS_ERR_CORRUPT from lfs_file_write. The following lfs_fs_size call also returns -84 (LFS_ERR_CORRUPT), as does lfs_dir_open when trying to read the folder contents. My SPI/NAND layer does not return any error codes before or after this happens. I would assume there is a way to detect full storage that doesn't involve corrupting it? What is the correct way to detect a full filesystem in this situation? Is the corruption related to the fact that I'm not calling lfs_file_sync during writing?
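For what it's worth, littlefs defines a dedicated out-of-space error, LFS_ERR_NOSPC (-28), so LFS_ERR_CORRUPT on a full device does look like something else going wrong. One way to avoid hitting the limit at all is to estimate free space up front: lfs_fs_size returns the number of allocated blocks, which can be compared against the block_count given at mount time. A minimal sketch, assuming access to the mount-time struct lfs_config; `MARGIN_BLOCKS` is a made-up safety margin for metadata overhead:

```c
#include <stdbool.h>
#include "lfs.h"

#define MARGIN_BLOCKS 8  /* hypothetical headroom for metadata/wear leveling */

bool storage_nearly_full(lfs_t *lfs, const struct lfs_config *cfg) {
    /* lfs_fs_size returns the number of blocks currently in use,
     * or a negative error code. */
    lfs_ssize_t used = lfs_fs_size(lfs);
    if (used < 0) {
        return true;  /* can't read usage; stop writing to be safe */
    }
    return (lfs_size_t)used + MARGIN_BLOCKS >= cfg->block_count;
}
```

Checking this periodically (e.g. once per erase-block worth of data) and stopping the recording cleanly with lfs_file_sync/lfs_file_close before the estimate runs out would sidestep the failure mode entirely.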

Sorry for the many questions, and thank you for this excellent little FS!

@petri-lipponen-suunto
Author

I tried calling lfs_file_sync after every data update (about 100-200 bytes; the cache size is 256 bytes), but that caused the whole device to get stuck after about a minute. After a forced reset, the filesystem was empty for some reason. Most flushes took 5-7 ms, but a couple lasted over 50 ms, which caused dropped data from the sensor FIFO. So syncing on every update is out of the question...

@petri-lipponen-suunto
Author

I'm closing this issue since the flushing behavior / slowness was explained by the comments in issue #344 (comment).
