| Commit message | Author | Age |
|
| |
Mostly a cosmetic change. Handling builtin and extension functions
separately makes more sense here too.
|
| |
Print the name of the scaler that gets disabled to the user, rather
than just its number.
|
| |
See previous commit. (The mixing case was never supported, so this has
equivalent functionality.)
This also implicitly fixes a bug: the old code allocated image data for
the cropped surface size only, which means the
video_surface_get_bits_y_cb_cr call could write beyond the allocated
image memory.
|
| |
For now, this affects taking screenshots only.
If the video mixer is actually needed, we want to go through the video
mixer for screenshots too - which is why this function simply always
used the video mixer, and then copied the surface to RAM as RGBA.
Add reading the surface as NV12 if possible. If the format is correct,
and no special video processing (like deinterlacing) is used, then it's
read directly, without going through the video mixer.
There's no particular reason for doing this, other than avoiding the
video mixer (as vo_opengl now can). Also, the next commit will make use
of this in vf_vdpaurb.c.
|
| |
GLES doesn't have this constant. It's not used on GLES.
I'm starting to think we need some better way to do this.
|
| |
The GL_ARB_timer_query extension, and thus the GL_TIME_ELAPSED constant,
don't exist for GLES.
For ES, the EXT_disjoint_timer_query extension is used, so take the
constant from that; otherwise, provide the constant manually.
See PR #3216, which introduced this error.
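A guard in this spirit could look like the following (a sketch; mpv's
actual preprocessor logic may differ):

    /* Take GL_TIME_ELAPSED from EXT_disjoint_timer_query on ES, else
     * define it manually (both names share the value 0x88BF). */
    #ifndef GL_TIME_ELAPSED
    #ifdef GL_TIME_ELAPSED_EXT
    #define GL_TIME_ELAPSED GL_TIME_ELAPSED_EXT
    #else
    #define GL_TIME_ELAPSED 0x88BF
    #endif
    #endif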
|
| |
The hw_subfmt field remained set, while it has to be unset for non-hwdec
formats.
|
| |
Instead of keeping them mapped for a while. While keeping the mapping
was perfectly OK, nothing speaks against reducing the time they're mapped.
|
| |
Until now, we've always converted vdpau video surfaces to RGB, and then
mapped the resulting RGB texture. Change this so that the surface is
mapped as NV12 plane textures.
The reason this wasn't done until now is that vdpau surfaces are mapped
in an "interlaced" way, as separate fields, even for progressive video.
This requires messy reinterleaving. It turns out that even though it's
an extra processing step, the result can be faster than going through
the video mixer for RGB conversion.
Other than the potential speed gain, doing this has multiple other
advantages. We can apply our own color conversion, which is important in
more complex cases. We can correctly apply debanding and potentially
other processing that requires chroma-specific or in-YUV handling.
If deinterlacing is enabled, this switches back to the old RGB
conversion method. Until we have at least a primitive deinterlacer in
vo_opengl, it will stay this way. The d3d11 and vaapi code paths are
similar. (Of course, these don't require any crazy field reinterleaving.)
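To illustrate the indexing (a conceptual CPU-side sketch only; mpv does
the actual reinterleaving on the GPU): progressive row y of a plane
comes from field (y & 1), row y / 2.

    #include <stdint.h>
    #include <stddef.h>
    #include <string.h>

    /* Re-interleave two separately mapped fields into one progressive
     * plane. field[0] holds the top field, field[1] the bottom field. */
    static void reinterleave_plane(uint8_t *dst, ptrdiff_t dst_stride,
                                   const uint8_t *field[2],
                                   ptrdiff_t field_stride,
                                   int width, int height)
    {
        for (int y = 0; y < height; y++) {
            const uint8_t *src = field[y & 1] + (y / 2) * field_stride;
            memcpy(dst + y * dst_stride, src, width);
        }
    }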
|
| |
Otherwise stale references will survive forever. Could leak hardware
video surfaces.
In particular, the mpv vdpau code crashed with an assertion when exiting
after toggling deinterlacing, because not all references were released.
|
| |
And remove the strange PACKER_MAX_WH define. This is more convenient for
users that don't care about limits, such as sd_lavc.c.
|
| |
Signed-off-by: wm4 <wm4@nowhere>
|
| |
1. This basically reverts commit de4c74e5a4a996e8ff431c8f33a32c4b580be203.
Even with CVDisplayLinkCreateWithActiveCGDisplays and
CVDisplayLinkSetCurrentCGDisplayFromOpenGLContext, we still have to
explicitly set the current display ID, otherwise it will just always
choose the display with the lowest refresh rate. Another weird thing:
we still have to set the display ID a second time with
CVDisplayLinkSetCurrentCGDisplay after the link was started, otherwise
the display period is 0 and the fallback will be used.
If we ever use the callback method for something useful, it's probably
better to use CVDisplayLinkCreateWithActiveCGDisplays, since we will need
to keep the display link around instead of releasing it at the end.
In that case we have to call CVDisplayLinkSetCurrentCGDisplay two times,
once before and once after CVDisplayLinkStart.
2. Add a windowDidChangeScreen delegate to update the display refresh
rate when mpv is moved to a different screen.
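A rough sketch of the call sequence described above (illustrative, not
mpv's actual code):

    #include <CoreVideo/CoreVideo.h>

    static CVReturn link_cb(CVDisplayLinkRef link, const CVTimeStamp *now,
                            const CVTimeStamp *out, CVOptionFlags flags_in,
                            CVOptionFlags *flags_out, void *ctx)
    {
        return kCVReturnSuccess; /* currently unused */
    }

    static void start_link(CGDirectDisplayID display)
    {
        CVDisplayLinkRef link;
        CVDisplayLinkCreateWithActiveCGDisplays(&link);
        CVDisplayLinkSetCurrentCGDisplay(link, display);
        CVDisplayLinkSetOutputCallback(link, link_cb, NULL);
        CVDisplayLinkStart(link);
        /* Must be set again after starting, or the actual output
         * period stays 0 and the fallback is used. */
        CVDisplayLinkSetCurrentCGDisplay(link, display);
    }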
|
| |
We have two problems here.
1. CVDisplayLinkGetActualOutputVideoRefreshPeriod, as the name suggests,
returns a frame period and not a refresh rate. Using this as screen_fps
just leads to a slideshow. Why didn't this break video playback on OS X
completely? The answer to this leads us to the second problem.
2. It seems that CVDisplayLinkGetActualOutputVideoRefreshPeriod always
returns 0 if used without CVDisplayLinkSetOutputCallback, and hence we
always fell back to CVDisplayLinkGetNominalOutputVideoRefreshPeriod.
Adding a callback to the CVDisplayLink solves this problem. The callback
function doesn't do anything at the moment, but could possibly be used
in the future.
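The corrected logic could be sketched like this (illustrative only):

    #include <CoreVideo/CoreVideo.h>

    /* The "actual" value is a period in seconds, so the refresh rate is
     * its reciprocal. It stays 0 until an output callback has been set
     * and the link started; fall back to the nominal rate otherwise. */
    static double get_screen_fps(CVDisplayLinkRef link)
    {
        double period = CVDisplayLinkGetActualOutputVideoRefreshPeriod(link);
        if (period > 0)
            return 1.0 / period;
        CVTime t = CVDisplayLinkGetNominalOutputVideoRefreshPeriod(link);
        if (!(t.flags & kCVTimeIsIndefinite) && t.timeValue > 0)
            return (double)t.timeScale / t.timeValue;
        return 0; /* unknown; caller picks a default */
    }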
|
| |
This can also remove all the stuff for lazily attaching the texture. It
doesn't matter if the dxinterop backend changes the bound framebuffer
during a VOCTRL, since the renderer does not rely on the GL state being
preserved.
|
| |
Most of the functionality already exists for the sake of vo_opengl_cb.
We only have to use it.
This will be used by dxinterop in the following commit.
|
| |
The previous few commits changed sd_lavc.c's output to packed RGB sub-
images. In particular, this means all sub-bitmaps are part of a larger,
single bitmap. Change the vo_opengl OSD code such that it can make use
of this, and upload the pre-packed image, instead of packing and copying
the sub-bitmaps again.
This complicates the upload code a bit (4 code paths due to messy PBO
handling). The plan is to make sub-bitmaps always packed, but some more
work is required to reach this point. The plan is to pack libass images
as well. Since this implies a copy, it will make it easy to refcount
the result.
(This is all targeted towards vo_opengl. Other VOs, vo_xv, vo_x11, and
vo_wayland in particular, will become less efficient. Although at least
vo_vdpau and vo_direct3d could be switched to the new method as well.)
|
| |
Until now, subtitle renderers could export SUBBITMAP_INDEXED, an
8-bit-per-pixel format with palette. sd_lavc.c was the only renderer
doing this, and the result was converted to RGBA in every use case
(except maybe when the subtitles were hidden).
Change it so that sd_lavc.c converts to RGBA on its own. This simplifies
everything a bit, and the palette handling can be removed from the
common code.
This is also preparation for making subtitle images refcounted. The
"caching" in img_convert.c is a PITA in this respect, and needs to be
redone. So getting rid of some img_convert.c code is a positive side
effect. Also related to refcounted subtitles is packing them into a
single mp_image. Fewer objects to refcount is easier, and the same will
be done for the libass format. The plan is to remove manual packing
entirely from the VOs which need single images.
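The conversion itself is simple; a sketch of the kind of 8-bit-palette
to RGBA expansion sd_lavc.c now does itself (names are illustrative):

    #include <stdint.h>
    #include <stddef.h>

    /* Expand an 8 bpp paletted image to 32-bit RGBA by palette lookup.
     * palette[] holds 256 pre-converted RGBA entries. */
    static void pal8_to_rgba(uint32_t *dst, ptrdiff_t dst_stride,
                             const uint8_t *src, ptrdiff_t src_stride,
                             const uint32_t palette[256], int w, int h)
    {
        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x++)
                dst[x] = palette[src[x]];
            dst = (uint32_t *)((char *)dst + dst_stride);
            src += src_stride;
        }
    }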
|
| |
Apply the padding internally to each input bitmap, instead of requiring
this for the semi-public API.
Right now, everything still uses packer_pack_from_subbitmaps() to fill
the input bitmap sizes, but that's going to change with the following
commit. Since bitmap_packer.in is mutated during packing anyway, it's
more convenient to add the padding automatically.
Also, guarantee that every sub-bitmap has a padding border around it.
Don't let the padding overlap. Add padding even on the containing
borders.
This is simpler, doesn't cost much in memory usage, and is convenient
for one of the following commits.
|
| |
This is now stored within the AVFrame/mp_image.
|
| |
Reduces interference with pass-through case, where this is not needed.
|
| |
Instead, warn.
|
| |
This is the ES equivalent to ARB_timer_query. It enables the performance
timers on ANGLE. All the added functions should be identical in
semantics to their desktop GL equivalents.
|
| |
The OpenGL 3.0+ and ES specs are quite clear on what values are
accepted for the attachment object name parameter. And there's no
overlap between them for the default framebuffer. Sigh.
Probably fixes Mesa raising an error in this case, and might fix #3251.
Regression caused by the previous vo_opengl change.
|
| |
Until now, we've used system-specific APIs (GLX, EGL, etc.) to retrieve
the depth of the default framebuffer. (We equate this with the display
depth, and use the determined depth for dithering.)
We can actually retrieve this value through standard GL API, and it
works everywhere (except GLES 2, of course). This simplifies everything
a great deal.
egl_helpers.c is empty now. But I expect that some EGL boilerplate will
be moved to it, so don't remove it yet.
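The standard query looks roughly like this (a sketch; assumes the GL
3.0+ entry points are loaded by the windowing glue):

    /* Per-channel bit depth of the default framebuffer, usable to pick
     * a dither depth. Note that for the default framebuffer the
     * attachment name is GL_BACK_LEFT on desktop GL (GL_BACK on GLES 3),
     * not GL_COLOR_ATTACHMENT0. */
    static void get_default_fb_depth(GLint *r, GLint *g, GLint *b)
    {
        glBindFramebuffer(GL_FRAMEBUFFER, 0);
        glGetFramebufferAttachmentParameteriv(GL_FRAMEBUFFER, GL_BACK_LEFT,
                GL_FRAMEBUFFER_ATTACHMENT_RED_SIZE, r);
        glGetFramebufferAttachmentParameteriv(GL_FRAMEBUFFER, GL_BACK_LEFT,
                GL_FRAMEBUFFER_ATTACHMENT_GREEN_SIZE, g);
        glGetFramebufferAttachmentParameteriv(GL_FRAMEBUFFER, GL_BACK_LEFT,
                GL_FRAMEBUFFER_ATTACHMENT_BLUE_SIZE, b);
    }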
|
| |
This was somehow done under the assumption that ANGLE would always use
RG in ES2 mode. But there's no basis for this. Even if ANGLE supports
NV12 textures with drivers that do not allow for texture_rg, this case
is too obscure to worry about. So do the robust and correct thing
instead, and disable this code if texture_rg is not available.
|
| |
Commit 74e3d11 resulted in the background overlay not getting destroyed
when mpv quits. Add back a piece of code that was removed in that commit
to restore correct functionality.
Fixes issue #3100.
|
| |
This is a common idiom used in MSDN docs and Raymond Chen's example
programs to get a HINSTANCE for the current module, regardless of
whether it's an .exe or a .dll. Using GetModuleHandle(NULL) for this is
technically incorrect, since it always gets a handle to the .exe, even
when the executing code (in libmpv) is running in a .dll. In this case,
using the wrong HINSTANCE could cause namespace issues with window
classes, since CreateWindowEx uses the HINSTANCE to search for the
matching window class name.
See:
https://blogs.msdn.microsoft.com/oldnewthing/20050418-59/?p=35873
https://blogs.msdn.microsoft.com/oldnewthing/20041025-00/?p=37483
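For reference, the idiom as it appears in those examples: the
linker-provided pseudo-variable __ImageBase marks the base address of
the current module (.exe or .dll), which is exactly what an HINSTANCE is.

    #include <windows.h>

    EXTERN_C IMAGE_DOS_HEADER __ImageBase;
    #define HINST_THISCOMPONENT ((HINSTANCE)&__ImageBase)

    /* Usage sketch: pass it wherever the current module's handle is
     * expected, e.g. in WNDCLASSEX.hInstance before RegisterClassEx,
     * and as the hInstance argument of CreateWindowEx. */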
|
| |
Maybe it partially helps with #2392 (on dual display setups). Either way, it
makes the code simpler.
|
| |
There were two mistakes:
- SDL's RGB565 is always equivalent to ffmpeg's RGB565 (both are packed
16-bit native-endian integers in RGB=565 form) - this was wrongly
reversed on big-endian platforms.
- SDL's RGB888 doesn't actually mean RGB24, but XRGB8888 (i.e. a 32-bit
packed integer with the top 8 bits unused).
Also, use RGB0, not RGBA, when there is no alpha.
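A sketch of the corrected mapping (both sides are packed native-endian
integers, so the same enum pair is right on little- and big-endian
machines; mpv's actual table covers more formats):

    #include <SDL2/SDL.h>
    #include <libavutil/pixfmt.h>

    static enum AVPixelFormat sdl_to_av_format(Uint32 sdl_fmt)
    {
        switch (sdl_fmt) {
        case SDL_PIXELFORMAT_RGB565:
            return AV_PIX_FMT_RGB565;   /* 16-bit 5-6-5, native-endian */
        case SDL_PIXELFORMAT_RGB888:
            return AV_PIX_FMT_0RGB32;   /* XRGB8888, top 8 bits unused */
        default:
            return AV_PIX_FMT_NONE;
        }
    }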
|
| |
Avoids some OpenGL implementations pinning it to 3.0.
|
| |
This is mainly for the nnedi3 user shader. With all its NN weights
hardcoded into the shader source code, the shader file can be as
large as 300 kB.
|
| |
Not sure what/if I was thinking there.
|
| |
Although D3D11 video decoding is unsupported on Windows 7, the
associated APIs almost work. Where they fail is texture creation, where
we try to create D3D11_BIND_DECODER surfaces. So specifically try to
detect this situation.
One issue is that once the hwdec interop is created, the damage is done,
and it can't use another backend (because currently only 1 hwdec backend
is supported). So that's where we prevent attempts to use it.
It can still fail when trying to use d3d11va-copy (since that doesn't
require an interop backend), but at that point we don't care anymore -
dxva2(-copy) is tried before that anyway.
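The detection could be sketched as a trial texture creation
(illustrative; the real probe's parameters may differ):

    #define COBJMACROS
    #include <stdbool.h>
    #include <d3d11.h>

    /* Try to create a small NV12 texture with D3D11_BIND_DECODER;
     * failure means D3D11 video decoding is unusable (e.g. Windows 7),
     * even though device creation itself succeeded. */
    static bool d3d11_decoding_usable(ID3D11Device *dev)
    {
        D3D11_TEXTURE2D_DESC desc = {
            .Width = 64, .Height = 64,
            .MipLevels = 1, .ArraySize = 1,
            .Format = DXGI_FORMAT_NV12,
            .SampleDesc = { .Count = 1 },
            .Usage = D3D11_USAGE_DEFAULT,
            .BindFlags = D3D11_BIND_DECODER,
        };
        ID3D11Texture2D *tex = NULL;
        if (FAILED(ID3D11Device_CreateTexture2D(dev, &desc, NULL, &tex)))
            return false;
        ID3D11Texture2D_Release(tex);
        return true;
    }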
|
| |
Leftovers.
|
| |
Move unmap() to the top of the function, and replace some duplicated
code with a call to it.
|
| |
User hooks can now use an extra WHEN expression to specify when the
shader should be run. For example, this can be used to only run a chroma
scaling shader `WHEN CHROMA.w LUMA.w <`.
There's a slight semantics change to user shaders: When trying to bind a
texture that does not exist, a shader will now be silently skipped
(similar to when the condition is false) instead of generating an error.
This allows shader stages to depend on an optional earlier stage without
having to copy/paste the same condition everywhere.
(In other words: there's an implicit condition that all bound textures
exist.)
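A hypothetical user shader using this (hook point, bindings, and body
are illustrative; the WHEN expression is the RPN form from above):

    //!HOOK CHROMA
    //!BIND HOOKED
    //!BIND LUMA
    //!WHEN CHROMA.w LUMA.w <
    vec4 hook() {
        // Runs only when the chroma plane is narrower than luma,
        // i.e. the video is actually chroma-subsampled.
        return HOOKED_tex(HOOKED_pos);
    }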
|
| |
WTF of the day.
|
| |
When using --hwdec=auto, systems that don't provide
D3D11_CREATE_DEVICE_VIDEO_SUPPORT, which probably includes all Windows
Vista and 7 systems, will print an error message. Reduce the log level
to verbose when probing and skip the error message entirely if d3d11.dll
is not present.
This commit is in a similar spirit to 991af7d.
|
| |
If ID3D11Device_QueryInterface fails, "multithread" will be set to NULL.
The _Release would just make it crash with a null pointer deref.
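A sketch of the guarded pattern (using the ID3D10Multithread interface,
which D3D11 devices implement; treat the details as illustrative):

    #define COBJMACROS
    #include <d3d10.h>
    #include <d3d11.h>

    static void set_multithread_protected(ID3D11Device *dev)
    {
        ID3D10Multithread *mt = NULL;
        HRESULT hr = ID3D11Device_QueryInterface(dev, &IID_ID3D10Multithread,
                                                 (void **)&mt);
        if (FAILED(hr) || !mt)
            return; /* never call Release on a NULL interface pointer */
        ID3D10Multithread_SetMultithreadProtected(mt, TRUE);
        ID3D10Multithread_Release(mt);
    }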
|
| |
For clang, it's enough to just put (void) around usages whose result we
intentionally ignore.
Since GCC does not seem to want to respect this decision, we are forced
to disable the warning globally.
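To illustrate both halves of that (write() is a typical
warn_unused_result offender in glibc):

    #include <unistd.h>

    /* clang: the (void) cast silences -Wunused-result. GCC ignores the
     * cast for warn_unused_result functions, so for GCC the warning has
     * to be disabled globally instead (e.g. -Wno-unused-result). */
    static void best_effort_write(int fd, const void *buf, size_t len)
    {
        (void)write(fd, buf, len); /* result intentionally ignored */
    }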
|
| |
The default behavior of vo_opengl has pretty much always been 'show the
source colors as-is, without caring to adapt it to the target device'.
This decision is mostly based on the fact that if we do anything else,
lots of people will complain.
With the rise of content like BT.2020, however, it turns out more people
complain about this content being very desaturated than people complain
about this content not matching VLC - so let's just map ultra-wide gamut
content back down to standard gamut by default.
|
| |
Instead of measuring the actual upload time, this instead measures the
time needed to render + map the texture via vdpau. These numbers are
still useful, since they're part of the critical path.
|
| |
This is plumbed through a new VOCTRL, VOCTRL_PERFORMANCE_DATA, and
exposed as properties render-time-last, render-time-avg etc.
All of these numbers are in microseconds, which gives a good precision
range when just outputting them via show-text. (Lua scripts can
obviously still do their own formatting etc.)
Signed-off-by: wm4 <wm4@nowhere>
|
| |
To avoid blocking the CPU, we use 8 time objects and rotate through
them, only blocking until the last possible moment (before we need
access to them on the next iteration through the ring buffer). I tested
it out on my machine and 4 query objects were enough to guarantee
block-free querying, but the extra margin shouldn't hurt.
Frame render times are just output at the end of each frame, via MP_DBG.
This might be improved in the future. (In particular, I want to expose
these numbers as properties so that users get some more visible feedback
about render times)
Currently, we measure pass_render_frame and pass_draw_to_screen
separately because the former might be called multiple times due to
interpolation. Doing it this way gives more faithful numbers. Same goes
for frame upload times.
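A minimal sketch of the rotating-query idea (desktop GL /
ARB_timer_query names; the loader include is an assumption):

    #include <GL/glew.h> /* or whichever loader provides GL 3.3 symbols */

    #define NUM_QUERIES 8

    static GLuint queries[NUM_QUERIES];
    static int query_idx;
    static int issued;

    static void timers_init(void)
    {
        glGenQueries(NUM_QUERIES, queries);
    }

    static GLuint64 timed_pass(void (*render)(void))
    {
        GLuint64 ns = 0;
        /* Before reusing a slot, fetch its old result. That query is
         * NUM_QUERIES frames old by now, so this rarely blocks. */
        if (issued >= NUM_QUERIES)
            glGetQueryObjectui64v(queries[query_idx], GL_QUERY_RESULT, &ns);
        glBeginQuery(GL_TIME_ELAPSED, queries[query_idx]);
        render();
        glEndQuery(GL_TIME_ELAPSED);
        query_idx = (query_idx + 1) % NUM_QUERIES;
        issued++;
        return ns; /* time (in ns) of the pass N frames ago; 0 if none yet */
    }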
|
| |
This was made unnecessary by a02d77b, since the only way the function
could fail was by failing to add a reference to the DirectX DLLs.
|