Only if the game engine supports it. (HD 1080p, 2K, 4K…)
I tried that, but it is a bit more complicated: the data is packed, and you also need to update the new dimensions in other places in the code.
Not to mention you would also need to redraw/upscale the walkable areas, z-planes,…
So what’s the likelihood of the filtering happening in real time as part of ScummVM or DOSBox, etc.?
Does it require a lot of cpu and gpu power?
I’m afraid this cannot be used as a filter. The whole game needs to have pre-calculated art; it would be very slow.
You could use the cloud for the computations (like “Siri” analyzes the spoken sentences) or do the computations in a “pre-rendering” phase (where the player has to wait). In both cases ScummVM could cache the scaled images and reuse them.
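A minimal sketch of what such a cache could look like, assuming a content-addressed file cache keyed on the raw frame bytes (the `upscale_fn` hook and cache directory name are hypothetical, standing in for the slow ESRGAN pass or a cloud call):

```python
import hashlib
import os

CACHE_DIR = "esrgan_cache"  # hypothetical cache location

def cached_upscale(frame_bytes, upscale_fn):
    """Return the upscaled image for this frame, computing it only once."""
    key = hashlib.sha1(frame_bytes).hexdigest()
    path = os.path.join(CACHE_DIR, key + ".png")
    if os.path.exists(path):
        # Cache hit: reuse the previously computed result.
        with open(path, "rb") as f:
            return f.read()
    os.makedirs(CACHE_DIR, exist_ok=True)
    result = upscale_fn(frame_bytes)  # slow ESRGAN pass (or a cloud request)
    with open(path, "wb") as f:
        f.write(result)
    return result
```

With something like this, only the first visit to a room would pay the full cost; every later visit would be a disk read.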
Btw, what kind of GPU did you use for the calculations and what’s the order of magnitude of duration? (I.e., are we talking half a second, 2 seconds, 20, 200, even more?)
I just started this blog about ESRGAN.
I will add lots of info and pictures today; duration, interpolations, new models etc.
Check this out:
Very detailed, thanks!
But I’m still a bit unclear on the time frame. You write that it takes ~3 minutes for 100 interpolations, but then you say that processing 110 images into 110,000 interpolations took you 3 days, not a year. So wouldn’t it be something more in the ballpark of 100 × 100 interpolations in ~3 minutes?
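For reference, my back-of-the-envelope math, assuming the ~3 minutes per 100 interpolations rate simply scales linearly (which may not be how the pipeline actually works):

```python
# Stated rate from the blog post (my linear-scaling assumption):
interps_per_batch = 100
minutes_per_batch = 3
total_interps = 110_000

total_minutes = total_interps / interps_per_batch * minutes_per_batch
total_days = total_minutes / 60 / 24
print(round(total_days, 1))  # -> 2.3
```

That lands around 2.3 days, which is at least in the same ballpark as the quoted 3 days.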
Could somebody apply ESRGAN upscaling to an Ultima 7 screenshot?
Would love to see how one of my favourite games ever would look with it.
It’s not per image… it’s the GPU repeatedly processing a certain task. Every pass makes the image clearer.
It’s like impulses.
I don’t think this image will bring any improvement. It is too boxy, made of cubical parts, and very simply colored. You can get 2K resolution, but with the same pixelated result.
Hmm, I see. Makes sense.
Too boxy & pixelated?
Perhaps if you train it using Minecraft images?
If I understand correctly it’s trained using blurry downscaled photographs by default, so that lends itself better to an MI2 than the cleaner look of Ultima 7.
Note that the pretrained models are trained under the MATLAB bicubic kernel. If the downsampled kernel is different from that, the results may have artifacts.
Knowing that, I tried bicubic downscaling to make it a little blurrier before processing:
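For anyone wanting to reproduce that preprocessing step, a small Pillow sketch (the file names in the usage comment are placeholders; the point is just to soften crisp pixel art with a bicubic downscale so it better matches what ESRGAN was trained on):

```python
from PIL import Image  # assumes Pillow is installed

def soften(img, factor=2):
    """Bicubic downscale to blur crisp pixel art before feeding it to ESRGAN."""
    return img.resize((img.width // factor, img.height // factor), Image.BICUBIC)

# Usage (placeholder file names):
# soften(Image.open("ultima7.png")).save("ultima7_bicubic.png")
```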
And libdepixelize for comparison:
That’s correct… I believe your B-spline curves and Voronoi approach would look good with adventure game backgrounds. You should try that; I can give you the original PNG images.
Ultima 7 is such a hard and specific image.
HQ4x is already a huge improvement there. Have you checked that out?
Ah, that’s an Exult setting, right? Will have to look into it, I have only played Exult with the more classic pixel-precise upscaling so far.
Thanks for the tip!
Yes, an Exult setting. I’d like to have the time to replay U7, especially part 2…
I stopped my last playthrough of U7 (part 1) after acquiring the three prisms and following the trail of Elizabeth and Abraham to Buccaneer’s Den. It’s pretty linear from that point onwards, the world mostly explored, all side quests finished. So for lack of novelty, I did not continue. Since part 2 is much more story-driven (and I still remember the most important plot points), I never really mustered the energy to even begin a replay. Same deal with many adventure games, really. I mean, even Memoria I played only once.