
Sharpness of the restored face #25

Open
andyderuyter opened this issue Sep 2, 2022 · 14 comments

Comments

@andyderuyter

andyderuyter commented Sep 2, 2022

Is there a way for the fixed face to be sharper in the results? As you can see in the fixed result, the transition to the sharp hair on the top of her head is pretty harsh and the overall sharpness of the face is greatly reduced compared to the sharpness of the input image.

Input:

[Image: SinCity_Mulan_full_body_Disney_princess_from_Mulan_character_be_727fc4b2-be79-44be-9b3c-f7cd8f1877e5]

Fixed with CodeFormer.
This is with 0.5 fidelity and with background upscale on:

[Image: SinCity_Mulan_full_body_Disney_princess_from_Mulan_character_be_727fc4b2-be79-44be-9b3c-f7cd8f1877e5]

Sharp area (top) going to unsharp area below (face fixed):

[Image: Screenshot at Sept 02 08-48-06]

Thanks for looking into this :)

@MarcusAdams

I can confirm. The result is less sharp in the restored area.

@sczhou
Owner

sczhou commented Sep 2, 2022

The input AI-created image has a large resolution of 1536x1024, which is beyond our model's output face size of 512. Face restoration models are originally designed for restoring real low-quality faces, which usually have a resolution much lower than 512, so most models fix both the input and output resolution at 512; our CodeFormer is the same.

This is why the result is less sharp when running inference on an image with a resolution much larger than 512.
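As a rough illustration of the point above (the helper and numbers below are hypothetical, not part of CodeFormer), the fixed 512x512 face input puts an upper bound on how much of the original face detail can survive the downscale-restore-upscale round trip:

```python
def effective_detail_ratio(face_w, face_h, model_size=512):
    """Rough fraction of a face crop's linear resolution that survives
    being resized down to the model's fixed input and back (1.0 = no loss)."""
    return min(1.0, model_size / max(face_w, face_h))

# A real low-quality face below 512 px loses nothing in the resize step,
# while a large AI-generated face is necessarily softened:
small_face = effective_detail_ratio(256, 256)    # 1.0
large_face = effective_detail_ratio(1536, 1024)  # 512/1536, i.e. ~0.33
```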

@andyderuyter
Author

Ok, so downscaling the image first to around 512px (max width) is an option then?

@andyderuyter
Author

I just tried with a resized image of the same picture. 512px wide. This is the result:

[Image: SinCity_Mulan_full_body_Disney_princess_from_Mulan_character_be_727fc4b2-be79-44be-9b3c-f7cd8f1877e5]

It remains unsharp in the restored face.
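For reference, the pre-downscaling step tried above could be sketched like this (the helper name is made up; in practice the resulting size would be passed to something like Pillow's Image.resize before running CodeFormer):

```python
def downscale_to_width(size, max_width=512):
    """Return (w, h) with the width capped at max_width, preserving the
    aspect ratio. Images already at or below max_width are left unchanged."""
    w, h = size
    if w <= max_width:
        return (w, h)
    return (max_width, round(h * max_width / w))

# e.g. a 1024x1536 portrait becomes 512x768 before restoration
target = downscale_to_width((1024, 1536))  # (512, 768)
```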

@kalkal11

kalkal11 commented Sep 2, 2022

The whole dataset was 512*512, so you'll only ever get an output that resolves a certain level of detail/sharpness as a result of that. I'm not sure if the devs ever plan on releasing a higher-resolution model, but that would potentially require substantially more VRAM. Bear in mind that we are using CodeFormer outside of its original intended purpose when using it for AI art; the optimisations made were a reaction to good community feedback on results. Would I want sharper results too? Sure, but you have to be realistic about the tools at hand at the same time.

@andyderuyter
Author

andyderuyter commented Sep 2, 2022

Don't know if it's possible, as I'm not fluent in Python... but how about some sharpening levels (kind of like the fidelity slider) that are applied after the face restoration (only on the restored part) and before that restored part is pasted back onto the image?

PS: I do appreciate the answers and feedback, thanks for that! :)
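For what it's worth, the sharpening step suggested above could look something like the unsharp-mask sketch below. This is a NumPy-only toy, not CodeFormer code; a real implementation would more likely use cv2.GaussianBlur on the restored face crop before pasting it back:

```python
import numpy as np

def box_blur(img, radius=1):
    """Naive box blur on a 2D float array with edge-replicated borders."""
    k = 2 * radius + 1
    pad = np.pad(img, radius, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += pad[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def unsharp_mask(img, amount=0.5, radius=1):
    """Sharpen by adding back the high-frequency residual (img - blur).
    `amount` plays the role of the suggested sharpening slider."""
    img = img.astype(float)
    residual = img - box_blur(img, radius)
    return np.clip(img + amount * residual, 0, 255)
```

Flat regions pass through unchanged, while edges get their local contrast boosted; `amount` could be exposed as a command-line option alongside the fidelity weight and applied only to the restored region.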

@sczhou
Owner

sczhou commented Sep 4, 2022

Hi all @andyderuyter @MarcusAdams @caacoe, I've added the face upsampling option '--face_upsample' for high-resolution AI-created faces. Please give it a try!
e.g.,
python inference_codeformer.py --w 0.7 --test_path inputs/user_upload --bg_upsampler realesrgan --face_upsample

@sczhou
Owner

sczhou commented Sep 4, 2022

The result of using --face_upsample

[Image: 0000-up]

@MarcusAdams

@sczhou, nice work. I look forward to trying it out! Thank you so much!

@kalkal11

kalkal11 commented Sep 4, 2022

@sczhou well that was certainly a 'hold my beer' moment. Thank you!

@andyderuyter
Author

@sczhou Thanks, this was much needed; the AI community will be thankful as well!

@MarcusAdams

MarcusAdams commented Sep 5, 2022

@sczhou I'm not seeing a difference. I tried --face_upsample with weight 1.0 and weight 7.0, with both --upscale 1 and --upscale 2, but I can't discern a difference between the two images. I even tried reducing the size of the images first. I wonder if the right code got checked in.
These are weight 0.7 with --upscale 2, first no --face_upsample, then with:
[Image: Small_CodeFormer_0 7_2x]
[Image: Small_CodeFormer_0 7_2x_upsampled]

@sczhou
Owner

sczhou commented Sep 5, 2022

> @sczhou I'm not seeing a difference. I tried --face_upsample with weight 1.0 and weight 7.0, with both --upscale 1 and --upscale 2, but I can't discern a difference between the two images. I even tried with reducing the size of the images first. I wonder if the right code got checked in. These are weight 0.7 with --upscale 2, first no --face_upsample, then with:

Hi, please make sure you use --face_upsample and --bg_upsampler realesrgan together in the command, since the face upsampler is initialized with the same RealESRGAN model used for the background.


Update:

  • --face_upsample can be used solely now.

@sczhou sczhou closed this as completed Oct 5, 2022
@sczhou sczhou reopened this Oct 9, 2022
@TechVillain

> --face_upsample can be used solely now.

@sczhou deserves a Nobel peace prize for this. It changes the human history.

Most AI art today is high resolution. Being able to upscale only the face without affecting the background resolution is crucial and critical.

It's December 2023 now. Why haven't the other players, namely GFPGAN, GPEN, RestoreFormer, come up with this brilliant idea?

Why is @sczhou the only person in the AI community offering this cutting-edge piece of tech?
