| Commit message | Author | Age |
|
|
|
|
|
|
|
| |
Developed by John Hable for use in Uncharted 2. Also used by Frictional
Games in SOMA. Originally inspired by a filmic tone mapping algorithm
created by Kodak.
From http://frictionalgames.blogspot.de/2012/09/tech-feature-hdr-lightning.html
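For reference, a minimal C sketch of the curve with Hable's published constants (the shader version may differ in details):

    // Hable's filmic curve with the constants he published for Uncharted 2.
    static float hable(float x)
    {
        const float A = 0.15, B = 0.50, C = 0.10,
                    D = 0.20, E = 0.02, F = 0.30;
        return (x * (x * A + C * B) + D * E) / (x * (x * A + B) + D * F) - E / F;
    }

    // Normalize so the linear white point W (11.2 in Hable's slides)
    // maps exactly to 1.0.
    static float hable_tonemap(float x)
    {
        const float W = 11.2;
        return hable(x) / hable(W);
    }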
|
|
|
|
|
| |
This is the canonical name for the algorithm. I simply didn't know it
before.
|
|
|
|
|
|
| |
Position the window around the original window center on video size change
(when switching to the next file with a different resolution, for example)
instead of keeping the position of its top-left corner fixed.
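A sketch of the arithmetic (hypothetical helper, not the actual platform code):

    // Compute a new top-left corner so the window center stays fixed
    // when the size changes from (old_w, old_h) to (new_w, new_h).
    static void recenter(int *x, int *y, int old_w, int old_h,
                         int new_w, int new_h)
    {
        *x += (old_w - new_w) / 2;
        *y += (old_h - new_h) / 2;
    }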
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
We now have a video filter that uses the d3d11 video processor, so it
makes no sense to have one in the VO interop code. The VO uses it for
formats not directly supported by ANGLE (so the video data is converted
to a RGB texture, which ANGLE can take in).
Change this so that the video filter is automatically inserted if
needed. Move the code that maps RGB surfaces to its own interop backend.
Add a bunch of new image formats, which are used to enforce the new
constraints, and to automatically insert the filter only when needed.
The added vf mechanism to auto-insert the d3d11vpp filter is very dumb
and primitive, and will work only for this specific purpose. The format
negotiation mechanism in the filter chain is generally not very pretty,
and mostly broken as well. (libavfilter has a different mechanism, and
these mechanisms don't match well, so vf_lavfi uses some sort of hack.
It only works because hwaccel and non-hwaccel formats are strictly
separated.)
The RGB interop is now only used with older ANGLE versions. The only
reason I'm keeping it is because it's relatively isolated (uses only
existing mechanisms and adds no new concepts), and because I want to be
able to compare the behavior of the old code with the new one for
testing. It will be removed eventually.
If ANGLE has NV12 interop, P010 is now handled by converting to NV12
with the video processor, instead of converting it to RGB and using the
old mechanism to import that as a texture.
|
|
|
|
|
|
| |
This avoids a copy of the video image and lowers vsync jitter. Since
there are now two options to add to the window_attribs list, it has been
made dynamic.
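A sketch of how such a dynamically built, EGL_NONE-terminated attribute list can be assembled (plain C; the helper name is made up, and the real code may use mpv's own array utilities):

    #include <stdlib.h>
    #include <EGL/egl.h>

    // Append one key/value pair and keep the list EGL_NONE-terminated.
    static void add_window_attrib(EGLint **attribs, int *num,
                                  EGLint key, EGLint value)
    {
        *attribs = realloc(*attribs, (*num + 3) * sizeof(EGLint));
        (*attribs)[(*num)++] = key;
        (*attribs)[(*num)++] = value;
        (*attribs)[*num] = EGL_NONE;
    }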
|
|
|
|
|
|
|
|
|
|
| |
A lot of real-world shaders start off with comments explaining the usage
or license, generating lots of "empty" passes. This simple change allows
us to skip them, which silences the warning spam and prevents us from
having to store and copy around these empty passes.
It also adds a more useful failure check: Attempting to use a user
shader that doesn't define any passes at all.
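A sketch of the check (hypothetical structures, for illustration only):

    #include <stdbool.h>

    struct sh_pass { const char *body; };

    // Skip comment-only/empty passes; fail if nothing usable remains.
    static bool validate_passes(const struct sh_pass *passes, int num)
    {
        int used = 0;
        for (int n = 0; n < num; n++) {
            if (passes[n].body && passes[n].body[0])
                used++; // only passes with actual shader text count
        }
        return used > 0; // a user shader with no passes at all is an error
    }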
|
|
|
|
|
|
| |
This requires the GL_EXT_texture_norm16 extension and works in ANGLE.
A default precision had to be set for sampler3Ds, otherwise the shaders
would fail to compile.
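The fix amounts to one extra line in the generated shader header, roughly like this (sketch; the exact precision qualifier may differ):

    // GLSL ES defines no default precision for sampler3D, so shaders
    // using a 3D texture fail to compile without this declaration.
    static const char *es_header =
        "#version 300 es\n"
        "precision mediump float;\n"
        "precision mediump sampler3D;\n";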
|
|
|
|
|
| |
ES shaders do not allow implicit conversion from int to float, which
is the most annoying ES anti-feature ever.
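For example (shader text as C strings; the first line compiles on desktop GL but not on ES):

    static const char *bad_glsl  = "float w = 1;";   // error on GLES
    static const char *good_glsl = "float w = 1.0;"; // portable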
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Remove the opengl-hq option default that caused it not to autoselect
ANGLE (unlike --vo=opengl). For details, see commit d5df90a2.
Back then the intention was to use ANGLE by default, since it integrates
much nicer with the Windows compositor (instead of native OpenGL, which
tends to cause crazy glitches). On the other hand, many opengl-hq
capabilities are not available with older ANGLE builds, so it didn't
make any sense to autoselect ANGLE for it.
With the GL_EXT_texture_norm16 extension recently added to ANGLE, it has
essentially reached feature parity with desktop GL for the subset we are
using. (Even the integer texture hack for high bit depth input could be
dropped now.)
It (probably) still does not support nnedi3, due to the weird way the NN
coefficients are imported. Also, it uses half-floats instead of 16 bit
fixed-point textures for technical reasons, which implies about 5 bits
of precision loss. If anyone actually manages to distinguish the two
dithering texture formats in a double-blind test, I will fix it.
|
|
|
|
|
|
|
|
|
| |
This must be called if a texture shared between D3D devices is updated.
Often enough, the shared devices will be the same device, but ANGLE
forces using shared surfaces. I suppose there is no guarantee the driver
will do the expected thing. Internally, the driver could for example not
insert the required barriers before the shared texture is used.
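Schematically (plain D3D11 C API; a sketch of the call order, not the exact code):

    #define COBJMACROS
    #include <d3d11.h>

    // After writing to a texture that another device will read through
    // a shared handle, flush so the driver actually submits the work
    // (and any barriers it needs) before the other device samples it.
    static void update_shared_texture(ID3D11DeviceContext *ctx,
                                      ID3D11Resource *dst,
                                      ID3D11Resource *src)
    {
        ID3D11DeviceContext_CopyResource(ctx, dst, src);
        ID3D11DeviceContext_Flush(ctx);
    }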
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Fixes #320 (which is closed as 'not our problem' but eh)
Relevant xorg bug: https://bugs.freedesktop.org/show_bug.cgi?id=70931
For me this happened when (accidentally) trying to play an 8460x2812 jpg
file with mpv. Like the referenced bug, xvinfo reports "maximum XvImage
size: 8192 x 8192". So the returned XvImage is 8192x2812 and memory
corruption happens.
Only after handling this are BadShmSeg X11 errors shown.
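The defensive check is simple (sketch with a hypothetical helper):

    #include <stdbool.h>
    #include <X11/extensions/Xvlib.h>

    // XvCreateImage may silently clamp the image to the adaptor's
    // maximum (e.g. 8192x8192); writing the requested amount of data
    // into the smaller image corrupts memory.
    static bool xv_image_size_ok(const XvImage *im, int req_w, int req_h)
    {
        return im->width >= req_w && im->height >= req_h;
    }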
|
|
|
|
| |
See previous commit.
|
|
|
|
|
|
|
|
| |
Rename it to get out of OpenGL's namespace. The gl_ prefix is used by
other mpv functions, but no OpenGL ones.
The "slice" parameter was never actually used, and all callers passed 0
for it.
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
The main change is actually that we first copy to a "staging" memory
frame, and then upload it all at once. The old non-PBO code called
glTexSubImage2D for each OSD sub-bitmap.
The new non-PBO code path is a bit faster now if there are many small
sub-bitmaps (on Linux/nVidia). It's also a bit simpler, so this is a
win.
(Although I don't particularly appreciate the mixed normal/PBO texture
code.)
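Schematically (sketch; assumes a tightly packed RGBA staging buffer covering the dirty rectangle, with sub-bitmap coordinates relative to it):

    #include <string.h>
    #include <GL/gl.h>

    struct sub_bitmap { int x, y, w, h; const unsigned char *data; };

    static void upload_osd(unsigned char *staging, int stride,
                           const struct sub_bitmap *parts, int num,
                           int dst_x, int dst_y, int w, int h)
    {
        // First copy every sub-bitmap into the staging buffer...
        for (int n = 0; n < num; n++) {
            const struct sub_bitmap *p = &parts[n];
            for (int row = 0; row < p->h; row++) {
                memcpy(staging + (p->y + row) * stride + p->x * 4,
                       p->data + row * p->w * 4, p->w * 4);
            }
        }
        // ...then upload everything with a single call, instead of one
        // glTexSubImage2D per sub-bitmap.
        glTexSubImage2D(GL_TEXTURE_2D, 0, dst_x, dst_y, w, h,
                        GL_RGBA, GL_UNSIGNED_BYTE, staging);
    }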
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Some of these checks became pointless after dropping ES 2.0 support for
extended filtering.
GL_EXT_texture_rg is part of core in ES 3.0, and we already check for
this version, so testing for the extension is redundant.
GL_OES_texture_half_float_linear is also always available, at least as
far as our needs go.
The functionality we need from GL_EXT_color_buffer_half_float is always
available in ES 3.2, and we explicitly check for ES 3.2, so reject this
extension if the ES version is new enough.
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
For some reason, GLES has no glMapBuffer, only glMapBufferRange.
GLES 2 has no buffer mapping at all, and GL 2.1 does not always have
glMapBufferRange. On those, PBOs remain unsupported (there's no reason to
care about GL 2.1 without the extension).
This doesn't actually work on ANGLE, and I have no idea why. (There are
artifacts on OSD, as if parts of the OSD data weren't copied.) It works
on desktop OpenGL and at least 1 other ES 3 implementation. Don't enable
it on ANGLE, I guess.
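The mapping path common to GL 3.x and GLES 3 looks roughly like this (sketch):

    #include <string.h>
    #include <GLES3/gl3.h>

    // Fill a PBO via glMapBufferRange, which exists on both desktop
    // GL 3.0+ and GLES 3 (unlike glMapBuffer).
    static void pbo_upload(GLuint pbo, const void *data, size_t size)
    {
        glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pbo);
        void *p = glMapBufferRange(GL_PIXEL_UNPACK_BUFFER, 0, size,
                                   GL_MAP_WRITE_BIT |
                                   GL_MAP_INVALIDATE_BUFFER_BIT);
        if (p) {
            memcpy(p, data, size);
            glUnmapBuffer(GL_PIXEL_UNPACK_BUFFER);
        }
        glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);
    }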
|
| |
|
|
|
|
|
|
|
|
|
|
| |
Not sure how much can be gained with this, as we can't use it properly
yet. For now, this is used only before rendering, which probably does
overwhelmingly nothing.
In the future, this should be used after temporary passes, which could
possibly reduce memory usage and even memory bandwidth usage, depending
on the drivers.
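Usage is one call per FBO (sketch):

    #include <GLES3/gl3.h>

    // Tell the driver the attachment's current contents are dead, so it
    // can skip loading/storing them (mainly a win on tiled GPUs).
    static void invalidate_color_attachment(void)
    {
        const GLenum att[] = {GL_COLOR_ATTACHMENT0};
        glInvalidateFramebuffer(GL_FRAMEBUFFER, 1, att);
    }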
|
|
|
|
|
| |
Only log when actual extensions are loaded, never log anything about
builtins.
|
|
|
|
|
|
|
|
|
|
| |
Before this, the position of the window's top-left corner stayed the same
when the window was scaled.
Right now, VOCTRL_SET_UNFS_WINDOW_SIZE is called only by window-scale. This
change does not affect resizes made by the user (dragging a window edge).
Fixes #3164.
|
|
|
|
|
|
|
|
|
|
| |
Center the window on the original window center instead of the screen center
when the window has been resized because the requested window size exceeded
the size of the screen.
If the user moved the window, they probably did it for a reason, and they
probably don't want it to jump back to the center of the screen when
resizing it (with window-scale, for example).
|
|
|
|
|
|
|
|
| |
Properly update the stored client area size when the window is resized in
reinit_window_state due to the window size exceeding the size of the screen.
This was causing wrong behavior with window-scale: when the window size
became too big, the window was resized but the video was not.
|
|
|
|
|
|
|
|
|
|
|
| |
Following commit 84ccebd9, the internal helpers don't allow GL_RGB and
GL_RGBA as internal formats for FBO attachments anymore.
While OpenGL itself is perfectly fine with it, I don't see much of a
reason to bother, and mixing sized and unsized internal formats is
confusing anyway.
Just remove these formats.
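For reference, the difference between the two spellings (sketch):

    #include <GL/gl.h>

    static void alloc_fbo_texture(int w, int h)
    {
        // Unsized internal format (no longer accepted by the helpers):
        // glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, w, h, 0,
        //              GL_RGBA, GL_UNSIGNED_BYTE, NULL);
        // Sized equivalent:
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, w, h, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, NULL);
    }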
|
|
|
|
|
|
|
|
|
|
|
|
| |
ES 2.0 has this weird rule that the internal format is determined not by
the internalformat parameter, but by the combination of all texture
parameters. GL_OES_texture_half_float thus does not specify e.g. a
GL_RGBA16F format, but requires passing GL_RGBA as format and
GL_HALF_FLOAT_OES as type. We won't bother with this, since ES 2.0 is a
lost cause anyway.
This also removes the OpenGL error when the code is trying to create a
f16 FBO for testing whether FBOs work.
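For contrast (sketch; GL_HALF_FLOAT_OES is defined by GL_OES_texture_half_float):

    #include <GLES3/gl3.h>

    #ifndef GL_HALF_FLOAT_OES
    #define GL_HALF_FLOAT_OES 0x8D61
    #endif

    static void alloc_f16_texture(int w, int h, int es2)
    {
        if (es2) {
            // ES 2.0: format/type determine the storage; there is no
            // sized GL_RGBA16F internalformat to ask for.
            glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, w, h, 0,
                         GL_RGBA, GL_HALF_FLOAT_OES, NULL);
        } else {
            // Desktop GL / GLES 3: the sized internalformat decides.
            glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA16F, w, h, 0,
                         GL_RGBA, GL_HALF_FLOAT, NULL);
        }
    }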
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
gl_video_upload_image() can fail in the hardware decoding case. In this
case rendering continued "normally", which meant that pass_get_img_tex()
would kill the process with an assertion failure.
Fix this by allowing gl_video_upload_image() to fail, and exit rendering
early enough to skip code which requires an image to be present. (Maybe
this is still a bit too subtle, but better than before.)
Set an error flag, and render the blue screen we introduced for shader
errors. (For this purpose also move the rendering of it to final output,
to ensure it's visible at all.) The error flag is temporary, because the
associated failure might also be temporary, unlike shader compilation
errors.
|
|
|
|
|
|
|
|
| |
ANGLE doesn't handle this very strictly. But if they change this in the
future, it shouldn't brick us.
Not quite happy with this glsl_extensions field, but it is quite
unintrusive after all.
|
|
|
|
|
|
|
|
| |
No reason not to, and makes the following commit slightly simpler.
In fact, this makes the shaders more correct too. Normally, "#extension"
must come before any normal shader text, including the "precision"
directive. Not sure why this worked before. (Probably didn't.)
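The required ordering in the generated header, schematically (the extension name here is just an example):

    // #extension lines must come before all other shader text,
    // including the default precision declarations.
    static const char *shader_header =
        "#version 300 es\n"
        "#extension GL_OES_EGL_image_external_essl3 : enable\n"
        "precision mediump float;\n";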
|
|
|
|
|
|
| |
ANGLE was missing texture() overloads in the shader compiler for
GL_TEXTURE_EXTERNAL_OES textures. Support has been added upstream,
so we can use it now.
|
| |
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
With the new hooks mechanism, user shaders and such are actually loaded
before rendering starts, instead of being loaded during rendering. This
is used to cache them (instead of e.g. reparsing them every frame).
The cached state wasn't cleared correctly in some situations. Namely,
resizing didn't correctly enable/disable prescale hooks.
Reorganize how these reinitializations are handled. Get rid of
reinit_rendering(), whose meaning was pretty unclear. Call the required
functions to reset or recreate state directly wherever they are needed.
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
wscript builds hwdec_dxva2gldx.c if gl-dxinterop is enabled, while
video/dxva2.c depends on d3d-hwaccel. If d3d-hwaccel is disabled, then
hwdec_dxva2gldx.c will fail to link, because it uses
d3d9_surface_in_mp_image(), defined in dxva2.c.
Fix this by removing the use of this function. It has barely any value
at this point anyway. Just use the libavcodec documented way to get the
surface directly.
Fixes #3150.
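Per the FFmpeg documentation for AV_PIX_FMT_DXVA2_VLD, the surface pointer lives directly in the frame's fourth data plane (sketch):

    #include <d3d9.h>
    #include <libavutil/frame.h>

    // For dxva2 hwaccel frames, data[3] holds the IDirect3DSurface9.
    static IDirect3DSurface9 *get_d3d9_surface(const AVFrame *frame)
    {
        return (IDirect3DSurface9 *)frame->data[3];
    }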
|
|
|
|
| |
No reason to make it a special case.
|
|
|
|
|
|
|
|
|
|
| |
Use dynamic memory allocation, as the static allocation is starting to
get annoying.
Currently, SC_MAX_ENTRIES is essentially still a static upper limit on
the number of shaders. But in the future we could try a more clever cache
replacement strategy, which does not keep stale entries forever if the
maximum happens not to be reached.
|
| |
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
The new uniforms introduced by 362015c have exceeded the uniform limit
when using high-radius tscale. In addition, the SC limit of 32 entries
might be pushing it with user shaders.
Just make these values a bit bigger to delay the onset of this same failure
mode. Maybe in the future it should be reworked to grow dynamically?
Either way, we *can* always predict a static upper bound on the number
of uniforms and shader cache entries, it's just that we forgot to do so.
Fixes #3151
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
This makes it so that users with actual HDR displays can just set their
config to target-trc=st2084 and get native HDR output. This will look a
bit silly for SDR content (everything will be really bright), but for
lack of a better tone mapping situation (including reverse tone mapping)
this is the easiest thing to do for now.
Ideally the brightness metadata should be part of the colorspace struct
or something (with mpv always adapting where necessary), but it depends
on the TRC and not the primaries so it's a bit more complicated than
that.
|
|
|
|
|
|
| |
Since dumb mode is affected by tone mapping (which I'll call a feature,
not a bug), we need to copy over the configuration - in particular, the
defaults. (To prevent a render failure)
|
|
|
|
|
|
|
|
|
|
|
| |
Since HDR content is now auto-detected as such, we should probably do
something smarter in the "no configuration" case, such as outputting
gamma 2.2 instead.
This decision will affect the majority of users of stock configurations
who just play back appropriately tagged HDR files, so having a good
default behavior is important. "Output the HDR content as-is" is
definitely not likely to give the user a good result.
|
| |
|
| |
|
| |
|
|
|
|
|
|
| |
Seems sensible.
Untested.
|
|
|
|
|
|
|
|
|
| |
Make it dynamic and never remove entries from it.
For now, this is better than possibly creating dangling pointers all
over the place in the gl_user_shader struct.
Untested.
|
| |
|
| |
|
|
|
|
| |
?
|
|
|
|
|
|
| |
GLES shaders disallow implicit conversion from int to float.
This has been broken for quite a while.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
This is now a configurable option, with tunable parameters.
I got inspiration for these algorithms from Wikipedia. "simple" seems to
work pretty well, but not well enough to make it a reasonable default.
Some other notable candidates:
- Local functions (e.g. based on local contrast or gradient)
- Clamp with soft knee (linear up to a point)
- Mapping in CIE L*Ch. Map L smoothly, clamp C and h.
- Color appearance models
These will have to be implemented some other time.
Note that the parameter "peak_src" to pass_tone_map should, in
principle, be auto-detected from the SEI information of the source file
where available. This will also have to be implemented in a later
commit.
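For a flavor of what such a curve looks like, here is the classic Reinhard mapping with a source peak (illustration only; not necessarily what the "simple" option implements):

    // Maps [0, peak] to [0, 1], compressing highlights smoothly and
    // hitting exactly 1.0 at x == peak.
    static float reinhard(float x, float peak)
    {
        return x * (1.0f + x / (peak * peak)) / (1.0f + x);
    }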
|
|
|
|
|
|
|
|
|
|
|
|
| |
Due to the way color management in mpv worked historically, the subtitle
blending function was written to preserve the linearity of the input.
(In the past, the 3DLUT function required linear inputs)
Since the 3DLUT was refactored to accept the video color directly, the
re-linearization after blending is now virtually always redundant.
(Notably, it's also redundant when CMS is turned off, so this way of
writing the code stopped making sense a long time ago. It is a remnant
from before the pass_colormanage function was as flexible as it is now)
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Currently, this relies on the user manually entering their display
brightness (since we have no way to detect this at runtime or from ICC
metadata). The default value of 250 was picked by looking at ~10 reviews
on tftcentral.co.uk and realizing they all come with around 250 cd/m^2
out of the box. (In addition, ITU-R Rec. BT.2022 supports this)
Since there is no metadata in FFmpeg to indicate usage of this TRC, the
only way to actually play HDR content currently is to set
``--vf=format=gamma=st2084``. (It could be guessed based on SEI, but
this is not implemented yet)
Incidentally, since SEI is ignored, it's currently assumed that all
content is scaled to 10,000 cd/m^2 (and hard-clipped where out of
range). I don't see this assumption changing much, though.
As an unfortunate consequence of the fact that we don't know the display
brightness, mixed with the fact that LittleCMS' parametric tone curves
are not flexible enough to support PQ, we have to build the 3DLUT
against gamma 2.2 if it's used. This might be a good thing, though,
considering the PQ source space is probably not fantastic for
interpolation either way.
Partially addresses #2572.
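For reference, the ST 2084 (PQ) EOTF with its published constants; it maps the encoded signal to absolute luminance with a ceiling of 10,000 cd/m^2 (C sketch):

    #include <math.h>

    // Encoded PQ signal in [0,1] -> display luminance in cd/m^2.
    static double pq_eotf(double e)
    {
        const double m1 = 2610.0 / 16384.0;        // ~0.1593
        const double m2 = 2523.0 / 4096.0 * 128.0; // 78.84375
        const double c1 = 3424.0 / 4096.0;         // 0.8359375
        const double c2 = 2413.0 / 4096.0 * 32.0;  // 18.8515625
        const double c3 = 2392.0 / 4096.0 * 32.0;  // 18.6875
        double p = pow(e, 1.0 / m2);
        double num = fmax(p - c1, 0.0);
        return 10000.0 * pow(num / (c2 - c3 * p), 1.0 / m1);
    }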
|