| Commit message | Author | Age |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
The previous few commits changed sd_lavc.c's output to packed RGB sub-
images. In particular, this means all sub-bitmaps are part of a larger,
single bitmap. Change the vo_opengl OSD code such that it can make use
of this, and upload the pre-packed image, instead of packing and copying
them again.
This complicates the upload code a bit (4 code paths due to messy PBO
handling). The plan is to make sub-bitmaps always packed, but some more
work is required to reach this point. Eventually, libass images will be
packed as well. Since this implies a copy, this will make it easy to
refcount the result.
(This is all targeted towards vo_opengl. Other VOs, vo_xv, vo_x11, and
vo_wayland in particular, will become less efficient. Although at least
vo_vdpau and vo_direct3d could be switched to the new method as well.)
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Until now, subtitle renderers could export SUBBITMAP_INDEXED, an 8 bit
per pixel format with a palette. sd_lavc.c was the only renderer doing
this, and the result was converted to RGBA in every use case (except
maybe when the subtitles were hidden).
Change it so that sd_lavc.c converts to RGBA on its own. This simplifies
everything a bit, and the palette handling can be removed from the
common code.
This is also preparation for making subtitle images refcounted. The
"caching" in img_convert.c is a PITA in this respect, and needs to be
redone. So getting rid of some img_convert.c code is a positive side-
effect. Also related to refcounted subtitles is packing them into a
single mp_image. Fewer objects to refcount makes this easier, and the
same will be done for the libass format. The plan is to remove manual
packing entirely from the VOs which need single images.
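For illustration, the conversion itself is just a palette lookup per
pixel. A minimal sketch (the function name and the premultiplied RGBA
palette layout are assumptions, not the actual sd_lavc.c code):

    #include <stdint.h>
    #include <stddef.h>

    static void pal8_to_rgba(uint32_t *dst, ptrdiff_t dst_stride,
                             const uint8_t *src, ptrdiff_t src_stride,
                             int w, int h, const uint32_t palette[256])
    {
        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x++)
                dst[x] = palette[src[x]]; // one RGBA word per index
            dst = (uint32_t *)((char *)dst + dst_stride);
            src += src_stride;
        }
    }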
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Apply the padding internally to each input bitmap, instead of requiring
this for the semi-public API.
Right now, everything still uses packer_pack_from_subbitmaps() to fill
the input bitmap sizes, but that's going to change with the following
commit. Since bitmap_packer.in is mutated during packing anyway, it's
more convenient to add the padding automatically.
Also, guarantee that every sub-bitmap has a padding border around it.
Don't let the padding overlap. Add padding even on the containing
borders.
This is simpler, doesn't cost much in memory usage, and is convenient
for one of the following commits.
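A rough sketch of the scheme (hypothetical names, not the actual
bitmap_packer code): grow every input size by the padding before
packing, then shift the results so the containing border is padded as
well, which makes the borders line up without overlapping:

    for (int n = 0; n < pk->count; n++) {
        pk->in[n].x += pk->padding;   // in[] holds sizes and is
        pk->in[n].y += pk->padding;   // mutated during packing anyway
    }
    pack(pk);                         // hypothetical packing step
    for (int n = 0; n < pk->count; n++) {
        pk->result[n].x += pk->padding;   // top/left container border
        pk->result[n].y += pk->padding;
    }
    pk->w += pk->padding;             // bottom/right container border
    pk->h += pk->padding;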
|
|
|
|
|
|
| |
This is the ES equivalent to ARB_timer_query. It enables the performance
timers on ANGLE. All the added functions should be identical in
semantics to their desktop GL equivalents.
|
|
|
|
|
|
|
|
|
| |
The OpenGL 3.0+ and ES specs are quite clear on what values are
accepted for the attachment object name parameter. And there's no
overlap for the default framebuffer. Sigh.
Probably fixes Mesa raising an error in this case, and might fix #3251.
This is a regression from the previous vo_opengl change.
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Until now, we've used system-specific APIs (GLX, EGL, etc.) to retrieve
the depth of the default framebuffer. (We equate this with the display
depth, and use the determined depth for dithering.)
We can actually retrieve this value through standard GL API, and it
works everywhere (except GLES 2 of course). This simplifies everything a
great deal.
egl_helpers.c is empty now. But I expect that some EGL boilerplate will
be moved to it, so don't remove it yet.
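The standard query is roughly this (a sketch; note the attachment name
pitfall from the commit above - desktop GL wants GL_BACK_LEFT for the
default framebuffer, while GLES wants GL_BACK):

    GLint r = 0, g = 0, b = 0;
    glBindFramebuffer(GL_FRAMEBUFFER, 0);
    glGetFramebufferAttachmentParameteriv(GL_FRAMEBUFFER, GL_BACK_LEFT,
            GL_FRAMEBUFFER_ATTACHMENT_RED_SIZE, &r);
    glGetFramebufferAttachmentParameteriv(GL_FRAMEBUFFER, GL_BACK_LEFT,
            GL_FRAMEBUFFER_ATTACHMENT_GREEN_SIZE, &g);
    glGetFramebufferAttachmentParameteriv(GL_FRAMEBUFFER, GL_BACK_LEFT,
            GL_FRAMEBUFFER_ATTACHMENT_BLUE_SIZE, &b);
    // dither to the smallest of r/g/b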
|
|
|
|
|
|
|
|
| |
This was done under the assumption that ANGLE would somehow always use
RG in ES2 mode. But there's no basis for this. Even if ANGLE supports
NV12 textures with drivers that do not allow for texture_rg, this case
is too obscure to worry about. So do the robust and correct thing
instead, and disable this code if texture_rg is not available.
|
|
|
|
|
|
|
| |
Commit 74e3d11 resulted in the background overlay not getting destroyed
when mpv quits. Add back a piece of code that was removed in that commit
to restore correct functionality.
Fixes issue #3100
|
| |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
This is a common idiom used in MSDN docs and Raymond Chen's example
programs to get a HINSTANCE for the current module, regardless of
whether it's an .exe or a .dll. Using GetModuleHandle(NULL) for this is
technically incorrect, since it always gets a handle to the .exe, even
when the executing code (in libmpv) is running in a .dll. In this case,
using the wrong HINSTANCE could cause namespace issues with window
classes, since CreateWindowEx uses the HINSTANCE to search for the
matching window class name.
See:
https://blogs.msdn.microsoft.com/oldnewthing/20050418-59/?p=35873
https://blogs.msdn.microsoft.com/oldnewthing/20041025-00/?p=37483
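The idiom itself is tiny: take the address of the linker-provided
__ImageBase pseudo-variable, which sits at the load address of whatever
module contains the current code. As shown in the linked articles (a
sketch, not necessarily mpv's exact spelling):

    #include <windows.h>

    EXTERN_C IMAGE_DOS_HEADER __ImageBase;
    #define HINST_THISCOMPONENT ((HINSTANCE)&__ImageBase)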
|
|
|
|
|
| |
Maybe it partially helps with #2392 (on dual display setups). Either way, it
makes the code simpler.
|
|
|
|
|
|
|
|
|
|
|
| |
There were two mistakes:
- SDL's RGB565 is always equivalent to ffmpeg's RGB565 (both are packed
  16 bit native-endian integers in RGB=565 form) - this was wrongly
  reversed on big endian platforms.
- SDL's RGB888 doesn't actually mean RGB24, but XRGB8888 (i.e. a 32 bit
  packed integer with the top 8 bits unused).
Also, use RGB0 instead of RGBA when there is no alpha.
|
|
|
|
| |
Avoids some OpenGL implementations pinning it to 3.0.
|
|
|
|
|
|
| |
This is mainly for the nnedi3 user shader. With all of its NN weights
hardcoded into the shader source code, the shader file can be as large
as 300 kB.
|
|
|
|
| |
Not sure what/if I was thinking there.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Although D3D11 video decoding is unsupported on Windows 7, the
associated APIs almost work. Where they fail is texture creation, where
we try to create D3D11_BIND_DECODER surfaces. So specifically try to
detect this situation.
One issue is that once the hwdec interop is created, the damage is done,
and it can't use another backend (because currently only 1 hwdec backend
is supported). So that's where we prevent attempts to use it.
It still can fail when trying to use d3d11va-copy (since that doesn't
require an interop backend), but at that point we don't care anymore -
dxva2(-copy) is tried before that anyway.
|
|
|
|
| |
Leftovers.
|
|
|
|
|
| |
Move unmap() to the top of the function, and replace some duplicated
code with a call to it.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
User hooks can now use an extra WHEN expression to specify when the
shader should be run. For example, this can be used to only run a chroma
scaling shader `WHEN CHROMA.w LUMA.w <`.
There's a slight semantics change to user shaders: When trying to bind a
texture that does not exist, a shader will now be silently skipped
(similar to when the condition is false) instead of generating an error.
This allows shader stages to depend on an optional earlier stage without
having to copy/paste the same condition everywhere.
(In other words: there's an implicit condition on all of the bound
textures existing)
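As a sketch, a chroma-scaling pass might start with a header along
these lines (assuming the //!-style directives user shaders already
use; the condition is in reverse polish notation, so the pass runs only
when the chroma plane is narrower than the luma plane):

    //!HOOK CHROMA
    //!BIND HOOKED
    //!WHEN CHROMA.w LUMA.w <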
|
|
|
|
| |
WTF of the day.
|
|
|
|
|
|
|
|
|
|
| |
When using --hwdec=auto, systems that don't provide
D3D11_CREATE_DEVICE_VIDEO_SUPPORT, which probably includes all Windows
Vista and 7 systems, will print an error message. Reduce the log level
to verbose when probing and skip the error message entirely if d3d11.dll
is not present.
This commit is in a similar spirit to 991af7d.
|
|
|
|
|
| |
If ID3D11Device_QueryInterface fails, "multithread" will be set to
NULL, and the subsequent _Release call would crash with a null pointer
dereference.
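The fixed pattern is the usual COM one - check the HRESULT before
touching the interface pointer. A sketch (assuming the
ID3D10Multithread interface, which D3D11 devices expose):

    ID3D10Multithread *multithread = NULL;
    HRESULT hr = ID3D11Device_QueryInterface(device,
            &IID_ID3D10Multithread, (void **)&multithread);
    if (FAILED(hr))
        return;  // don't call _Release on a NULL pointer
    ID3D10Multithread_SetMultithreadProtected(multithread, TRUE);
    ID3D10Multithread_Release(multithread);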
|
|
|
|
|
|
|
|
| |
For clang, it's enough to put (void) around calls whose result we
intentionally ignore.
Since GCC does not seem to want to respect this decision, we are forced
to disable the warning globally.
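Illustration of the difference (do_thing is hypothetical):

    __attribute__((warn_unused_result)) int do_thing(void);

    void caller(void)
    {
        (void)do_thing(); // clang: warning suppressed; GCC: still
                          // warns, hence -Wno-unused-result globally
    }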
|
|
|
|
|
|
|
|
|
|
|
|
| |
The default behavior of vo_opengl has pretty much always been 'show the
source colors as-is, without caring to adapt it to the target device'.
This decision is mostly based on the fact that if we do anything else,
lots of people will complain.
With the rise of content like BT.2020, however, it turns out more people
complain about this content being very desaturated than people complain
about this content not matching VLC - so let's just map ultra-wide gamut
content back down to standard gamut by default.
|
|
|
|
|
|
| |
Instead of measuring the actual upload time, this measures the
time needed to render + map the texture via vdpau. These numbers are
still useful, since they're part of the critical path.
|
|
|
|
|
|
|
|
|
|
|
| |
This is plumbed through a new VOCTRL, VOCTRL_PERFORMANCE_DATA, and
exposed as properties render-time-last, render-time-avg etc.
All of these numbers are in microseconds, which gives a good precision
range when just outputting them via show-text. (Lua scripts can
obviously still do their own formatting etc.)
Signed-off-by: wm4 <wm4@nowhere>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
To avoid blocking the CPU, we use 8 timer query objects and rotate
through them, deferring any blocking until the last possible moment
(before we need
access to them on the next iteration through the ring buffer). I tested
it out on my machine and 4 query objects were enough to guarantee
block-free querying, but the extra margin shouldn't hurt.
Frame render times are just output at the end of each frame, via MP_DBG.
This might be improved in the future. (In particular, I want to expose
these numbers as properties so that users get some more visible feedback
about render times)
Currently, we measure pass_render_frame and pass_draw_to_screen
separately because the former might be called multiple times due to
interpolation. Doing it this way gives more faithful numbers. Same goes
for frame upload times.
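A minimal sketch of the ring (illustrative names, not the actual
vo_opengl code):

    #define NUM_QUERIES 8
    static GLuint queries[NUM_QUERIES];
    static int query_idx;       // next object to (re)use
    static int queries_used;    // objects started at least once

    static void frame_timer_start(void)
    {
        if (!queries[0])
            glGenQueries(NUM_QUERIES, queries);
        if (queries_used >= NUM_QUERIES) {
            // About to reuse the oldest query: fetch its result first.
            // This is the only call that can block, and only if the
            // GPU is more than NUM_QUERIES frames behind.
            GLuint64 ns;
            glGetQueryObjectui64v(queries[query_idx], GL_QUERY_RESULT, &ns);
            record_render_time(ns);     // hypothetical helper
        }
        glBeginQuery(GL_TIME_ELAPSED, queries[query_idx]);
    }

    static void frame_timer_stop(void)
    {
        glEndQuery(GL_TIME_ELAPSED);
        query_idx = (query_idx + 1) % NUM_QUERIES;
        if (queries_used < NUM_QUERIES)
            queries_used++;
    }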
|
|
|
|
|
|
|
|
|
|
| |
When ANGLE is using D3D11 and not running in DirectComposition mode,
DXGI will hook the video window's message loop and override Alt+Enter to
trigger a transition to exclusive fullscreen mode (which doesn't even
work with mpv's renderer for some reason.) This behaviour can be
disabled by getting a pointer to the IDXGIFactory associated with the
D3D11 device and calling MakeWindowAssociation with the appropriate
flags.
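A sketch of the dance (assuming "device" is the ID3D11Device obtained
from ANGLE and "hwnd" is the video window; error handling and Release
calls trimmed):

    IDXGIDevice *dxgi_device;
    IDXGIAdapter *adapter;
    IDXGIFactory *factory;

    ID3D11Device_QueryInterface(device, &IID_IDXGIDevice,
                                (void **)&dxgi_device);
    IDXGIDevice_GetAdapter(dxgi_device, &adapter);
    IDXGIAdapter_GetParent(adapter, &IID_IDXGIFactory,
                           (void **)&factory);

    // Keep DXGI away from the window's message loop
    IDXGIFactory_MakeWindowAssociation(factory, hwnd,
            DXGI_MWA_NO_ALT_ENTER | DXGI_MWA_NO_WINDOW_CHANGES);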
|
|
|
|
|
|
|
|
|
|
| |
Instead of implicitly resetting the options to defaults and then
applying the options, they're always applied on top of the current
options (in the same way adding new options to the CLI command line
will).
This does not apply to vo_opengl_cb, because that has an even worse mess
which I refuse to deal with.
|
|
|
|
|
| |
In theory we could probably make icc-profile-auto a vo_opengl-specific
option, but I won't bother with this. Logging an error is simpler.
|
|
|
|
|
| |
Changing the options can enable icc-profile-auto, in which case the
profile has to be retrieved again from the system.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Enable m_sub_options_copy() to copy nested sub-options, and also enable
it to create an option struct from defaults. We can get rid of most of
the crap in assign_options() now.
Calling handle_scaler_opt() to get a static allocation for the scaler
name is still needed. It's moved to reinit_scaler(), which seems to be a
better place for it. Without it, dangling pointers could be created when
options are changed. (And in fact, this fixes possible dangling pointers
for window.name.) In theory we could create a dynamic copy, but that
seemed even more messy.
Chance of regressions.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Commit 026b75e7 actually enabled changing icc options at runtime (via
vo_cmdline), but it didn't quite work. In particular, changing the icc-
profile option just kept the old profile, because it was cached
accordingly.
As part of this, change gl_lcms.opts from a struct to a pointer to a
struct. We properly copy it, instead of allowing possibly dangling
strings, like it was done in a working but unclean way before.
Also, reinit the whole rendering chain when the auto icc profile
changes, just like it's done when icc options are changed.
|
|
|
|
|
|
| |
Passing the bstr thing as a pointer makes no sense. Everywhere else
bstr structs are passed by value because they're so small. The only
exception is when they're supposed to receive a return value.
|
|
|
|
| |
It's never NULL.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Originally, video.c did not access any CMS things (other than lut3d
being set on it), but this has changed. In practice, almost all accesses
to it have moved to video.c. vo_opengl only created it, and set the auto
icc profile path.
Complete the move.
Some things wrt. option handling are a bit fishy. (But when is this not
the case.)
icc-profile-auto was not tested, but the distributed human CI will take
care of it.
|
|
|
|
|
| |
This was dumb. Also, lcms.h has actually no need to include video.h
besides this and csputils.h (makes it slightly less entangled).
|
|
|
|
| |
Well this was dumb.
|
|
|
|
|
|
|
|
| |
It gets printed on every alt+tab or desktop switch under mutter and
weston, and offers no useful information since it's handled by
destroying the previous entry.
Signed-off-by: Rostislav Pehlivanov <atomnuker@gmail.com>
|
|
|
|
|
|
|
|
|
|
|
|
| |
This commit will cause the wayland backend and vo to correctly report
the display frame rate. This didn't work as VOCTRL_GET_DISPLAY_FPS was
received way too early, before the window was created (and thus
current_output set).
The VO will now signal VO_EVENT_WIN_STATE after window initialization
and upon a resize.
Signed-off-by: Rostislav Pehlivanov <atomnuker@gmail.com>
|
|
|
|
|
|
|
|
|
| |
This algorithm works really well. Setting it is a much better
"out-of-the-box" experience than just clipping, which will always look
ugly.
In other words, with this default, users of mpv will just be able to
play HDR content without even realizing it's HDR (pretty much).
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Instead of doing HDR tone mapping on an ad-hoc basis inside
pass_colormanage, the reference peak of an image is now part of the
image params (alongside colorspace, gamma, etc.) and tone mapping is
done whenever peak_src != peak_dst.
To get sensible behavior when mixing HDR and SDR content and displays,
target-brightness is a generic filler for "the assumed brightness of SDR
content".
This gets rid of the weird display_scaled hack, sets the framework
for multiple HDR functions with different reference peaks, and allows
us to (in a future commit) autodetect the right source peak from
the HDR metadata.
(Apart from metadata, the source peak can also be controlled via
vf_format. For HDR content this adjusts the overall image brightness,
for SDR content it's like simulating a different exposure)
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
The wayland protocol exposes scaling done by the compositor to
compensate for small window sizes on small, high-DPI displays. If the
program ignores it, the compositor will ask the program to render at
1/N of the window size and will then upscale the program's surface by a
factor of N. The scaling algorithm seems to be bilinear, so the result
is quite obviously blurry.
This commit sets up callbacks to listen for the scaling factor of each
output and, on rescale events, notifies the compositor that the
surface's scale is what the compositor asked for and changes the
player's surface to the appropriate size, causing no scaling to be done
by the compositor.
Compositors not supporting this interface will ignore the callbacks and do
nothing, keeping program behaviour the same. For compositors supporting
and using this interface (mutter), this will fix the rendering to be pixel
precise as it should be.
Both the opengl wayland backend and the wayland VO have been fixed to
support this. Verified to not break on either weston or mutter.
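A sketch of the plumbing (illustrative names, not the actual
wayland_common.c code):

    static void output_handle_scale(void *data, struct wl_output *output,
                                    int32_t factor)
    {
        struct vo_wayland_state *wl = data; // hypothetical state struct
        wl->scaling = factor;
    }

    // ... and in the resize path:
    wl_surface_set_buffer_scale(wl->surface, wl->scaling);
    // then render into a buffer scaled by wl->scaling, so the
    // compositor composites it 1:1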
Signed-off-by: Rostislav Pehlivanov <atomnuker@gmail.com>
|
|
|
|
|
|
|
|
| |
Developed by John Hable for use in Uncharted 2. Also used by Frictional
Games in SOMA. Originally inspired by a filmic tone mapping algorithm
created by Kodak.
From http://frictionalgames.blogspot.de/2012/09/tech-feature-hdr-lightning.html
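For reference, the curve itself; the constants are Hable's published
shoulder/linear/toe parameters, and this is a sketch rather than mpv's
exact shader code:

    static float hable(float x)
    {
        const float A = 0.15f, B = 0.50f, C = 0.10f,
                    D = 0.20f, E = 0.02f, F = 0.30f;
        return (x * (A * x + C * B) + D * E) /
               (x * (A * x + B) + D * F) - E / F;
    }

    // applied as hable(v) / hable(W), with linear white point W = 11.2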
|
|
|
|
|
| |
This is the canonical name for the algorithm. I simply didn't know it
before.
|
|
|
|
|
|
| |
Position the window around the original window center on video size change
(when switching to the next file with different resolution, for example)
instead of keeping the position of its top-left corner fixed.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
We now have a video filter that uses the d3d11 video processor, so it
makes no sense to have one in the VO interop code. The VO uses it for
formats not directly supported by ANGLE (so the video data is converted
to a RGB texture, which ANGLE can take in).
Change this so that the video filter is automatically inserted if
needed. Move the code that maps RGB surfaces to its own interop backend.
Add a bunch of new image formats, which are used to enforce the new
constraints, and to automatically insert the filter only when needed.
The added vf mechanism to auto-insert the d3d11vpp filter is very dumb
and primitive, and will work only for this specific purpose. The format
negotiation mechanism in the filter chain is generally not very pretty,
and mostly broken as well. (libavfilter has a different mechanism, and
these mechanisms don't match well, so vf_lavfi uses some sort of hack.
It only works because hwaccel and non-hwaccel formats are strictly
separated.)
The RGB interop is now only used with older ANGLE versions. The only
reason I'm keeping it is because it's relatively isolated (uses only
existing mechanisms and adds no new concepts), and because I want to be
able to compare the behavior of the old code with the new one for
testing. It will be removed eventually.
If ANGLE has NV12 interop, P010 is now handled by converting to NV12
with the video processor, instead of converting it to RGB and using the
old mechanism to import that as a texture.
|
|
|
|
|
|
| |
This avoids a copy of the video image and lowers vsync jitter. Since
there are now two options to add to the window_attribs list, it has been
made dynamic.
|
|
|
|
|
|
|
|
|
|
| |
A lot of real-world shaders start off with comments explaining the usage
or license, generating lots of "empty" passes. This simple change
allows us to skip them, which silences the warning spam and prevents us
from
having to store and copy around these empty passes.
It also adds a more useful failure check: Attempting to use a user
shader that doesn't define any passes at all.
|
|
|
|
|
|
| |
This requires the GL_EXT_texture_norm16 extension and works in ANGLE.
A default precision had to be set for sampler3Ds, otherwise the shaders
would fail to compile.
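The missing declaration is presumably a single line at the top of the
fragment shader (mediump here is an assumption - GLSL ES just defines
no default precision for sampler3D, so any explicit one works):

    precision mediump sampler3D;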
|