Apply basic transformations like rotation by 90° and mirroring when
sampling from the source textures. The original idea was to make this
part of img_tex.transform, but that didn't work: lots of code plays
tricks on the transform, so manipulating it is not necessarily
transparent, especially when width and height are switched. Instead, add
a new pre_transform field, which is strictly applied before the normal
transform.
This fixes most glitches involved in rotating the image.
Cropping and rotation are now oddly separated, even though they could
be done in the same step. I think this is not much of a problem, and it
has the advantage that changing panscan does not trigger FBO
reallocations (I think...).
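As a minimal sketch of the idea (illustrative only; the struct layout
and helpers below are assumptions, not the actual mpv code),
pre_transform holds the fixed 90°-step rotation/mirroring and is always
applied before the freely manipulated transform:

    // 2x2 matrix plus offset, the usual representation for
    // texture coordinate transforms.
    struct gl_transform {
        float m[2][2];
        float t[2];
    };

    struct img_tex {
        struct gl_transform pre_transform; // fixed rotation/mirroring only
        struct gl_transform transform;     // freely rewritten by later passes
        // ...
    };

    // Apply a transform to a coordinate pair.
    static void gl_transform_vec(struct gl_transform t, float *x, float *y)
    {
        float x0 = *x, y0 = *y;
        *x = t.m[0][0] * x0 + t.m[0][1] * y0 + t.t[0];
        *y = t.m[1][0] * x0 + t.m[1][1] * y0 + t.t[1];
    }

    // Code that rewrites .transform (cropping, panscan, ...) never has
    // to know about rotation, because pre_transform is applied first.
    static void sample_coord(struct img_tex *tex, float *x, float *y)
    {
        gl_transform_vec(tex->pre_transform, x, y);
        gl_transform_vec(tex->transform, x, y);
    }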
This commit refactors the 3DLUT loading mechanism to build the 3DLUT
against the original source characteristics of the file. This allows us,
among other things, to use a real BT.1886 profile for the source, and to
actually use perceptual mappings. It also reduces errors on
standard-gamut displays (where the previous 3DLUT target of BT.2020 was
unreasonably wide).
This also improves the overall accuracy of the 3DLUT by eliminating
rounding errors where possible, and allows for more accurate use of
LUT-based ICC profiles.
The current code is somewhat uglier than necessary, because the idea was
to get this commit into a working state first, and then maybe refactor
the profile loading mechanism in a later commit.
Fixes #2815.
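To illustrate the direction (a hypothetical sketch using LittleCMS;
the names and parameters here are assumptions, not the actual mpv code),
the source side of the conversion can be built from the video's own
white point, primaries, and transfer curve instead of a hard-coded
BT.2020 profile:

    #include <lcms2.h>

    // Hypothetical: build a source profile from the video's actual
    // characteristics. For an idealized zero-black-level display,
    // BT.1886 reduces to a pure power curve, which LittleCMS can
    // express with cmsBuildGamma().
    static cmsHPROFILE make_source_profile(const cmsCIExyY *white,
                                           const cmsCIExyYTRIPLE *prim,
                                           double gamma)
    {
        cmsToneCurve *curve = cmsBuildGamma(NULL, gamma);
        cmsToneCurve *curves[3] = {curve, curve, curve};
        cmsHPROFILE prof = cmsCreateRGBProfile(white, prim, curves);
        cmsFreeToneCurve(curve);
        return prof; // transformed against the display profile to fill the 3DLUT
    }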
This has been completely broken since commit 93546f0c. But even before
that, rotation handling did not make much sense. In particular, it
rotated the contents of the cropped image, instead of also adjusting the
crop rectangle. The result was that things like panscan or zooming did
not behave as expected with rotation applied.
The same is true for vertical flipping. Flipping is triggered by a
negative image stride. OpenGL does not support flipping the image on
upload, so it's done as part of the rendering. It can be triggered with
--vf=flip, but other filters and even decoders could set up a negative
stride to flip the image.
Fix these issues by applying transforms to texture coordinates properly,
and by making rotation and flipping part of these transforms.
This still doesn't work properly for separated scaling. The issue is
that we'd have to adjust how the passes are done. For now, pick a very
stupid solution: rotate the image to a FBO, and then scale from that.
This has the advantage that the scaling logic doesn't have to be
complicated for such a rare case. It could be improved later.
Prescaling is apparently still broken. I don't know if chroma
positioning works properly either. None of this should affect the case
with no rotation.
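A sketch of what the rotation part of such a texture-coordinate
transform can look like (illustrative; it reuses the gl_transform struct
from the sketch above, and the helper name is hypothetical):

    // Rotation in quarter turns; w/h are the extents of the coordinate
    // rectangle the transform operates on. cos/sin of multiples of 90°
    // are exact in float.
    static struct gl_transform rotation_transform(int quarter_turns,
                                                  float w, float h)
    {
        static const float cs[4][2] = {{1, 0}, {0, 1}, {-1, 0}, {0, -1}};
        int q = quarter_turns & 3;
        float c = cs[q][0], s = cs[q][1];
        struct gl_transform t = { .m = {{c, -s}, {s, c}} };
        // Translate the rotated rectangle back onto [0,w] x [0,h].
        if (q == 1) t.t[0] = h;
        if (q == 2) { t.t[0] = w; t.t[1] = h; }
        if (q == 3) t.t[1] = w;
        return t;
    }

    // Vertical flipping fits the same representation: y' = h - y, i.e.
    // m[1][1] = -1 with t[1] = h, composed with the rotation.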
Since prescale now literally only affects the luma plane (and the
filters are all designed for luma-only operation anyway), the option
has been renamed and the documentation updated to clarify this.
There was no real point in hard-coding these all over the place,
especially since the order was sort of arbitrary and confusing.
The previous approach was too naive, and could e.g. ruin playback if
scheduling switched between 1 and 2 vsyncs per frame.
This is a mistake introduced in commit 6ef06aa1: it accidentally changed
the license from dual GPL/LGPL to GPL only.
Define a macro to correct the coordinates used for lookup textures.
Cache the corrected coordinate for the 1D filter case, and use mix() to
minimize the performance impact.
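The macro amounts to remapping a [0,1] coordinate onto texel centers. A
sketch of how it could be emitted into the shader header from the C side
(the exact macro in mpv may differ):

    // An N-entry lookup texture has texel centers at (i + 0.5) / N, so
    // a coordinate in [0,1] must be squeezed into [0.5/N, 1 - 0.5/N]
    // before sampling; mix() does the remapping in a single operation.
    static const char lut_pos_macro[] =
        "#define LUT_POS(x, lut_size) "
        "mix(0.5 / (lut_size), 1.0 - 0.5 / (lut_size), (x))\n";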
The old name was stupid. Very stupid.
Implement NNEDI3, a neural network based deinterlacer.
The shader is reimplemented in GLSL and now supports both 8x4 and 8x6
sampling windows. This allows the shader to be licensed under LGPL2.1,
so that it can be used in mpv.
The current implementation supports uploading the NN weights (up to
51kb with the placebo setting) in two different ways: via a uniform
buffer object, or by hard-coding them into the shader source. UBOs
require OpenGL 3.1, which only guarantees 16kb per block. But 64kb
seems to be the default for recent cards/drivers (which nnedi3 is
targeting), so I think we're fine here (with the default nnedi3 setting,
the weights take 9kb). Hard-coding into the shader requires OpenGL 3.3,
for the intBitsToFloat() built-in function. This is necessary to
represent the weights precisely in GLSL. I tried several human-readable
floating point formats (with precision high enough for single precision
floats), but for some reason they did not work reliably: bad pixels
(with NaN values) could be produced with some weight sets.
We could also add support for uploading the weights as a texture, purely
for compatibility reasons (e.g. upscaling a still image with a low-end
graphics card). But as I tested, it's rather slow even with a 1D texture
(we would probably have to use a 2D texture due to dimension size
limits). Since there is always a better choice for NNEDI3 upscaling of
still images (the vapoursynth plugin), it's not implemented in this
commit. If it turns out to be in popular demand, it should be easy to
add later.
For those who want to optimize the performance a bit further, the
bottlenecks seem to be:
1. the overhead of uploading and accessing the weights (in particular,
the shader code is regenerated for each frame, though this happens on
the CPU);
2. dot() performance in the main loop;
3. exp() performance in the main loop; there are various fast
implementations using bit tricks (probably with the help of
intBitsToFloat()).
The code was tested with an NVIDIA card and driver (355.11), on Linux.
Closes #2230
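A sketch of the hard-coding path described above (the helper is
hypothetical, not the actual mpv code): emitting each weight as
intBitsToFloat(<bit pattern>) reproduces the float bit-exactly, avoiding
the decimal round-trip that caused the NaN artifacts:

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    // Serialize NN weights into GLSL source, bit-exactly.
    static void append_weights(FILE *out, const float *w, int n)
    {
        fprintf(out, "const float weights[%d] = float[](\n", n);
        for (int i = 0; i < n; i++) {
            uint32_t bits;
            memcpy(&bits, &w[i], sizeof(bits)); // safe type-pun
            fprintf(out, "    intBitsToFloat(%d)%s", (int32_t)bits,
                    i + 1 < n ? ",\n" : ");\n");
        }
    }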
Add the Super-xBR filter for image doubling, and the prescaling
framework to support it.
The shader code was ported from the MPDN extensions project, with
modifications to process luma only.
This commit is largely inspired by code from #2266, with
`gl_transform_trans()` authored by @haasn taken directly.
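Since a doubling prescaler changes the coordinate space of everything
downstream, a composition helper along the lines of gl_transform_trans()
is needed to chain the doubling step with whatever transform is already
attached to the texture. A sketch of such a composition (the real
function's exact semantics may differ):

    // Compose affine transforms: afterwards, applying *x is equivalent
    // to applying the old *x first, then t. Uses the gl_transform
    // struct from the earlier sketch.
    static void transform_compose(struct gl_transform t,
                                  struct gl_transform *x)
    {
        struct gl_transform r;
        for (int i = 0; i < 2; i++) {
            for (int j = 0; j < 2; j++)
                r.m[i][j] = t.m[i][0] * x->m[0][j] + t.m[i][1] * x->m[1][j];
            r.t[i] = t.m[i][0] * x->t[0] + t.m[i][1] * x->t[1] + t.t[i];
        }
        *x = r;
    }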
The source shader was removed after deband was introduced.
This turns the old scalers (inherited from MPlayer) into a pre-
processing step (after color conversion and before scaling). The code
for the "sharpen5" scaler is reused for this.
The main reason MPlayer implemented this as scalers was perhaps that
FBOs were too expensive, and making it a scaler allowed implementing it
in one pass. But unsharp masking is not really a scaler, and I would
guess the result is more like combining bilinear scaling with unsharp
masking.
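For reference, unsharp masking pushes each pixel away from a local blur:
out = in + (in - blur(in)) * strength. A GLSL-style sketch of the step
(illustrative only; not the actual sharpen5 code, and 'tex', 'pos', 'pt'
and 'strength' are assumed names):

    static const char unsharp_glsl[] =
        // Cheap 4-tap diagonal blur around the current position; 'pt'
        // is the size of one texel.
        "vec4 blur = (texture(tex, pos + pt * vec2(-1.0, -1.0))\n"
        "           + texture(tex, pos + pt * vec2( 1.0, -1.0))\n"
        "           + texture(tex, pos + pt * vec2(-1.0,  1.0))\n"
        "           + texture(tex, pos + pt * vec2( 1.0,  1.0))) / 4.0;\n"
        "vec4 color = texture(tex, pos);\n"
        // 'strength' is the user-controlled sharpening amount.
        "color += (color - blur) * strength;\n";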
At least one thing the current option code can do right.
This was redundant with forcing the value via vf_format, so the
vo_opengl sub-option was removed. This field is just a leftover.
The removal of source-shader is a side effect, since this effectively
replaces it - and the video-reading code has been significantly
restructured to make more sense and be more readable.
This means users no longer have to constantly download and maintain a
separate deband.glsl installation alongside mpv, which was the only real
use case for source-shader that we found anyway.
This is mostly to cut down somewhat on the amount of code bloat in
video.c, by moving out helper functions (including scaler kernels and
color management routines) to a separate file.
It would certainly be possible to move out more functions (e.g.
dithering or CMS code) with some extra effort/refactoring, but this is a
start.
Signed-off-by: wm4 <wm4@nowhere>
This is a bit redundant with the name of the directory itself, and not
in line with existing naming conventions.