
hotfix: handle unexpected panic situations when fetching feeds #15

Merged
merged 5 commits into collie-reader:main on Feb 18, 2024

Conversation

taevel02
Contributor

Hi.

I've noticed an unexpected error when fetching feeds after my contribution. You may remember the issue I mentioned in a previous PR; I've now fixed it. Below is the issue I was referring to.

[Screenshot 2024-02-11 at 22 23 34]

The fix is as follows:

  • Improved error handling: use a match expression instead of unwrap() so that an appropriate error message is returned when an error occurs, and use the Result type throughout the function to handle success and failure explicitly.
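The pattern above can be sketched as follows. This is a minimal, self-contained illustration, not the actual producer.rs code: `parse_feed` and `fetch_items` are hypothetical stand-ins for the real fetching functions.

```rust
// Hypothetical stand-in for a fallible parsing step that previously
// would have been followed by .unwrap().
fn parse_feed(raw: &str) -> Result<Vec<String>, String> {
    if raw.is_empty() {
        return Err("empty response body".to_string());
    }
    Ok(raw.lines().map(|l| l.to_string()).collect())
}

// Instead of `parse_feed(raw).unwrap()`, which panics on Err,
// match on the Result and propagate a readable error message.
fn fetch_items(raw: &str) -> Result<Vec<String>, String> {
    match parse_feed(raw) {
        Ok(items) => Ok(items),
        Err(e) => Err(format!("failed to parse feed: {e}")),
    }
}

fn main() {
    assert!(fetch_items("a\nb").is_ok());
    assert_eq!(
        fetch_items("").unwrap_err(),
        "failed to parse feed: empty response body"
    );
}
```

With the Result propagated instead of unwrapped, a bad feed response surfaces as an error value the caller can report, rather than a panic that takes down the fetch.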

src-tauri/src/producer.rs (outdated, resolved)
src-tauri/src/producer.rs (outdated, resolved)
src-tauri/src/producer.rs (outdated, resolved)
src-tauri/src/worker.rs (resolved)
@taevel02
Contributor Author

As you pointed out, the feed_ids parameter passed to the get_most_recent_items function should only contain feeds with fetch_old_items set to false. This is an important change to minimize unnecessary database lookups and make the code more efficient.

In my initial implementation, I looked up all feed IDs, but through your review I realized that this approach was inefficient. I therefore changed the logic to look up recent items only for feeds with fetch_old_items set to false. This significantly improves performance and avoids unnecessary processing.

I've included this change in this commit, and I believe it improves the overall readability and efficiency of the code. I'll continue to monitor for any other impact this change may have, and make additional fixes if necessary.
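The filtering step described above can be sketched like this. The `Feed` struct and `ids_to_check` helper are hypothetical illustrations, not the project's actual types:

```rust
// Hypothetical sketch: collect only the IDs of feeds whose
// fetch_old_items flag is false, so get_most_recent_items
// does not have to query feeds that fetch everything anyway.
struct Feed {
    id: i32,
    fetch_old_items: bool,
}

fn ids_to_check(feeds: &[Feed]) -> Vec<i32> {
    feeds
        .iter()
        .filter(|f| !f.fetch_old_items) // skip feeds that fetch old items
        .map(|f| f.id)
        .collect()
}

fn main() {
    let feeds = vec![
        Feed { id: 1, fetch_old_items: false },
        Feed { id: 2, fetch_old_items: true },
        Feed { id: 3, fetch_old_items: false },
    ];
    // Only feeds 1 and 3 need a recent-items lookup.
    assert_eq!(ids_to_check(&feeds), vec![1, 3]);
}
```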

Thank you for your review, and I look forward to further feedback.

Comment on lines 58 to 61
if !filtered_items.is_empty() {
filtered_items.sort_by_key(|x| x.published_at);
inserted.extend(insert_new_items(db, feed, &filtered_items));
}
Member

What do you think about sorting the list at the end of the function?

for (feed, link, fetch_old_items) in pairs {
    ...

    let filtered_items = ... // Now, you can make this variable immutable.

    // Since the `insert_new_items` function returns only inserted items, you can omit the empty check.
    inserted.extend(insert_new_items(db, feed, &filtered_items));
}

// Sort here, once, after the loop:
inserted.sort_by_key(|x| x.published_at);
Ok(inserted)
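The reviewer's suggestion can be sketched as a runnable example. `Item` and `collect_sorted` are stand-ins for the real types; the point is that `extend` handles empty batches fine and one sort after the loop suffices:

```rust
// Minimal stand-in for an inserted feed item.
#[derive(Debug, PartialEq)]
struct Item {
    published_at: u32,
}

// Extend `inserted` per batch, then sort once at the end,
// mirroring the suggested restructuring.
fn collect_sorted(batches: Vec<Vec<Item>>) -> Vec<Item> {
    let mut inserted: Vec<Item> = Vec::new();
    for batch in batches {
        // `extend` on an empty batch is a no-op, so no is_empty check is needed.
        inserted.extend(batch);
    }
    inserted.sort_by_key(|x| x.published_at);
    inserted
}

fn main() {
    let out = collect_sorted(vec![
        vec![Item { published_at: 3 }],
        vec![],
        vec![Item { published_at: 1 }, Item { published_at: 2 }],
    ]);
    let keys: Vec<u32> = out.iter().map(|i| i.published_at).collect();
    assert_eq!(keys, vec![1, 2, 3]); // sorted once, after all inserts
}
```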

Contributor Author

I think it's a good idea, but if you sort inserted at the end, it won't be sorted by latest.

Member

Wait, do we need to sort the list at this point? The returned value inserted is only used for the notification (L27 of worker.rs), and the items stored in the database are sorted when they are read. Is there any reason to sort the filtered_items before inserting them into the database?

I wrote this logic originally, but I think it might be worth reconsidering now. Anyway, there's nothing wrong with the functionality, so let's merge it.

src-tauri/src/producer.rs (outdated, resolved)
@parksb parksb merged commit dd81686 into collie-reader:main Feb 18, 2024
3 checks passed