| Commit message | Author | Age |
| |
|
|
|
|
|
|
|
|
|
|
|
| |
Now that we know in advance whether an implementation should support a
specific format, we have more flexibility when determining which format
to use.
In particular, we can drop the roundabout ES logic.
I'm not sure if actually trying to create the FBO for probing still has
any value. But it might, so leave it for now.
|
|
|
|
|
|
|
|
| |
Even if everything else is available, the need for first class arrays
breaks it. In theory we could fix this since we don't strictly need
them, but I guess it's not worth bothering.
Also give the misnamed have_mix variable a slightly better name.
|
| |

| |
This merges all knowledge about texture formats into a central table.
Most of the work done here is actually identifying exactly which formats
are supported by OpenGL(ES) under which circumstances, and keeping this
information in the format table in a somewhat declarative way. (Although
only to the extent needed by mpv.) In particular, ES and float formats
are a horrible mess.
Again, this is a big refactor that might cause regressions on "obscure"
configurations.
|
|
|
|
| |
It'll fail with an assertion in the interpolation code otherwise.
|
|
|
|
| |
Helpful for debugging and such.
|
|
|
|
|
| |
This fixes a recent regression that caused dumb-mode to be used with
integer textures, which on ANGLE leads to nearest scaling.
|
| |
|
|
|
|
|
|
|
|
| |
This uses the normal autoprobing rules like "auto", but rejects anything
that isn't flagged as copying data back to system memory.
The chunk in command.c was dead code, so remove it instead of updating
it.
|
|
|
|
|
|
|
|
|
| |
MSDN documents this as "Introduced in Windows 8.1.". I assume on Windows
7 this field will simply be ignored. Too bad for Windows 7 users.
Also, I'm not using D3D11_VIDEO_PROCESSOR_NOMINAL_RANGE_16_235 and
D3D11_VIDEO_PROCESSOR_NOMINAL_RANGE_0_255, because these are apparently
completely missing from the MinGW headers. (Such a damn pain.)
|
|
|
|
|
|
|
| |
Older ANGLE builds don't export this.
This change is really only for convenience, and I might revert it at
some later point.
|
|
|
|
|
|
| |
We don't have any reason to disable either. Both are loaded dynamically
at runtime anyway. There is also no reason why dxva2 would disappear
from libavcodec any time soon.
|
|
|
|
|
|
|
|
|
|
|
|
| |
ANGLE is _really_ annoying to build. (Requires special toolchain and a
recent MSVC version.) This results in various issues with people
having trouble building mpv against ANGLE (apparently linking it
against a prebuilt binary doesn't count, or using binaries from
potentially untrusted sources is not wanted).
Dynamically loading ANGLE is going to be a huge convenience. This commit
implements this, with special focus on keeping it source compatible to
a normal build with ANGLE linked at build-time.
|
|
|
|
|
|
|
|
|
|
| |
In theory this was needed for the previous commit (but wasn't in
practice, since for hwdec the LUMINANCE_ALPHA mangling is not applied
anymore, and ANGLE uses RG textures in absence of GL_ARB_texture_rg for
whatever crazy reasons).
In practice this caused funky colors on OSX with the uyvy422 format,
which is also fixed in this commit.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
This uses EGL_ANGLE_stream_producer_d3d_texture_nv12 and related
extensions to map the D3D textures coming from the hardware decoder
directly in GL.
In theory this would be trivial to achieve, but unfortunately ANGLE does
not have a mechanism to "import" D3D textures as GL textures. Instead,
an awkward mechanism via EGL_KHR_stream was implemented, which involves
at least 5 extensions and a lot of glue code. (Even worse than VAAPI EGL
interop, and very far from the simplicity you get on OSX.)
The ANGLE mechanism so far supports only the NV12 texture format, which
means 10 bit won't work. It also does not work in ES3 mode yet. For
these reasons, the "old" ID3D11VideoProcessor code is kept and used as a
fallback.
|
|
|
|
|
|
| |
It forces es2 mode on ANGLE. Only useful for testing. Since the normal
"angle" backend already falls back to es2 if es3 does not work, this new
backend always fails when autoprobed.
|
|
|
|
|
| |
"p" is used for the private context everywhere in the source file, but
renaming it also requires renaming some local variables.
|

| |
Rename gl_hwdec_driver.map_image to map_frame, and let it fill out a
struct gl_hwdec_frame describing the exact texture layout. This gives
more flexibility to what the hwdec interop can export. In particular, it
can export strange component orders/permutations and textures with
padded size. (The latter originating from cropped video.)
The way gl_hwdec_frame works is in the spirit of the rest of the
vo_opengl video processing code, which tends to put as much information
in immediate state (as part of the dataflow), instead of declaring it
globally. To some degree this duplicates the texplane and img_tex
structs, but until we somehow unify those, it's better to give the hwdec
state its own struct. The fact that changing the hwdec struct would
require changes and testing on at least 4 platform/GPU combinations
makes duplicating it almost a requirement to avoid pain later.
Make gl_hwdec_driver.reinit set the new image format and remove the
gl_hwdec.converted_imgfmt field.
Likewise, gl_hwdec.gl_texture_target is replaced with
gl_hwdec_plane.gl_target.
Split out a init_image_desc function from init_format. The latter is not
called in the hwdec case at all anymore. Setting up most of struct
texplane is also completely separate in the hwdec and normal cases.
video.c does not check whether the hwdec "mapped" image format is
supported. This should not really happen anyway, and if it does, the
hwdec interop backend must fail at creation time, so this is not an
issue.
|
| |

| |
The main change is with video/hwdec.h. mp_hwdec_info is made opaque (and
renamed to mp_hwdec_devices). Its accessors are mainly thread-safe (or
documented where not), which makes the whole thing saner and cleaner. In
particular, thread-safety rules become less subtle and more obvious.
The new internal API makes it easier to support multiple OpenGL interop
backends. (Although this is not done yet, and it's not clear whether it
ever will.)
This also removes all the API-specific fields from mp_hwdec_ctx and
replaces them with a "ctx" field. For d3d in particular, we drop the
mp_d3d_ctx struct completely, and pass the interfaces directly.
Remove the emulation checks from vaapi.c and vdpau.c; they are
pointless, and the checks that matter are done on the VO layer.
The d3d hardware decoders might slightly change behavior: dxva2-copy
will not use the VO device anymore if the VO supports proper interop.
This pretty much assumes that in such cases the VO will not use any
form of exclusive mode, which makes using the VO device in copy mode
unnecessary.
This is a big refactor. Some things may be untested and could be broken.
|
|
|
|
|
| |
This was never really used anyway. Removing it for the sake of the
following commit.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
When we receive the wl_shell_surface::configure event, it makes sense
to respect the aspect ratio of the video in windowed mode, but in
fullscreen it forces compositing and wastes resources (until atomic
modesetting is available everywhere and we can stop having
desynchronised planes).
Weston mitigates a resolution mismatch by creating black surfaces and
compositing them around the fullscreen surface, placed at the middle,
while GNOME puts it at the top-left and leaves the rest of the desktop
composited below, both of them producing a subpar experience.
Fixes #3021, #2657.
|
|
|
|
|
|
|
|
|
|
|
| |
Add --taskbar-progress command line option and property which controls taskbar
progress indication rendering in Windows 7+. This option is on by default and
can be toggled during playback.
This option does not affect the creation process of ITaskbarList3. When the
option is turned off the progress bar is just hidden with TBPF_NOPROGRESS.
Closes #2535
|
|
|
|
|
|
|
|
|
|
|
| |
The X11 error handler is global, and not per-display. If another Xlib
user exists in the process, they can conflict. In theory, it might
happen that e.g. another library sets an error handler (overwriting the
mpv one), and some time after mpv closes its display, restores the error
handler to mpv's one. To mitigate this, check if the error log instance
is actually set, instead of possibly crashing.
The change in vo_x11_uninit() is mostly cosmetic.
|
|
|
|
| |
Also add missing documentation for fs-only, and correct the default.
|
|
|
|
|
|
|
| |
In order to honor the differences between OpenGL and Direct3D coordinate
systems, ANGLE uses a full FBO copy merely to flip the final frame
vertically. This can be avoided with the EGL_ANGLE_surface_orientation
extension.
|
|
|
|
|
|
|
|
|
|
|
|
| |
I hope that this does what we expect it does: destroy the EGLDisplay
specific to our HDC. (Some implementations will terminate all EGL
contexts in the whole process.)
eglReleaseThread() merely calls eglMakeCurrent(0, 0, 0, 0), which is
not enough.
This commit also fixes the problem fixed with the previous commit,
but I think both changes are needed to make our API usage clean.
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
If ANGLE was probed before (but rejected), the ANGLE API can remain
"initialized", and eglGetCurrentDisplay() will return a non-NULL
EGLDisplay. If a native GL context is then used, the ANGLE/EGL API will
(apparently) keep working alongside the native OpenGL API. Since GL
objects are just numbers, they'll simply fail to interact, and OpenGL
will get invalid textures. For some reason this will result in black
textures.
With VAAPI-EGL, something similar could happen in theory, but didn't in
practice.
|
|
|
|
|
|
|
|
| |
Introduce hwdec-current and hwdec-interop properties.
Deprecate hwdec-detected, which never made a lot of sense, and which is
replaced by the new properties. hwdec-active also becomes useless, as
hwdec-current is a superset, so it's deprecated too (for now).
|
|
|
|
|
|
|
|
| |
Cache misses are a normal and expected part of the operation of a cache.
It doesn't really make sense to show a user-visible warning for them.
To work around this, just skip trying to open the cache if it doesn't
exist yet.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
First of all, black point compensation is now on by default. This is
really rather harmless and only improves the result (where "improvement"
means "less black clipping").
Second, this adds an option to limit the ICC profile's contrast, which
helps for untagged matrix profiles that are implicitly black scaled even
in colorimetric intent. (Note that this relies on BPC being enabled to
work properly, which is why the two changes are tied together.)
Third, this uses the LittleCMS built-in black point estimator instead of
relying on the presence of accurate A2B tables. This also checks tags
and does some amount of noise elimination.
If the option is unspecified and the profile is missing black point
information, print a warning instructing the user to set the option, and
fall back to 1000 otherwise.
|
|
|
|
| |
Fixes hardware decoding of most mpeg2 things.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
The vdpau_mixer could fail to be recreated properly if preemption
occurred at some point before playback initialization (like when using
--hwdec-preload and the opengl-cb API).
Normally, the vdpau_mixer was supposed to be marked invalid when the
components using it detect a preemption, e.g. in hwdec_vdpau.c. This one
didn't mark the vdpau_mixer as invalid if preemption was detected in
reinit(), only in map_image().
It's cleaner to detect preemption directly in the vdpau_mixer, which
ensures it's always recreated correctly.
|
|
|
|
|
|
|
|
|
|
| |
The "fs-only" choice sets the _NET_WM_BYPASS_COMPOSITOR to 1 if the
window is fullscreened, and 0 otherwise. (0 is specified to be the
implicit default - i.e. no change is requested in windowed mode.)
In particular, change the default to "fs-only".
Fixes #2582.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Including initguid.h at the top of a file that uses references to GUIDs
causes the GUIDs to be declared globally with __declspec(selectany). The
'selectany' attribute tells the linker to consolidate multiple
definitions of each GUID, which would be great except that, in Cygwin
and MinGW GCC 6.1, this method of linking makes the GUIDs conflict with
the ones declared in libuuid.a.
Since initguid.h obsoletes libuuid.a in modern compilers that support
__declspec(selectany), add initguid.h to all files that use GUIDs and
remove libuuid.a from the build.
Fixes #3097
|
|
|
|
|
|
| |
Should fix #3076 (partially).
Signed-off-by: wm4 <wm4@nowhere>
|
|
|
|
|
|
| |
Fit the whole window or just the client area according to the value of the --fit-border option.
Fixes #2935.
|
|
|
|
|
|
|
| |
Subtracting 1 is not necessary, because the .right and .bottom values of
the RECT struct describe a point one pixel outside of the rectangle.
Fixes #2935. (2.b)
|
|
|
|
| |
Fixes #3092.
|
|
|
|
| |
Slight simplification, IMHO.
|
|
|
|
|
|
|
|
| |
In particular, this moves the depth test to common code.
Should be functionally equivalent, except that for DXVA2, the
IDirectXVideoDecoderService_GetDecoderRenderTargets API is potentially
called more often.
|
|
|
|
| |
Gets rid of some silliness, and might be useful in the future.
|
|
|
|
| |
Legacy desktop GL only symbols. Broken by the previous commit.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
This gives us 16 bit fixed-point integer texture formats, including
ability to sample from them with linear filtering, and using them as FBO
attachments.
The integer texture format path is still there for the sake of ANGLE,
which does not support GL_EXT_texture_norm16 yet.
The change to pass_dither() is needed, because the code path using
GL_R16 for the dither texture relies on glTexImage2D being able to
convert from GL_FLOAT to GL_R16. GLES does not allow this. This could be
trivially fixed by doing the conversion ourselves, but I'm too lazy to
do this now.
|
|
|
|
|
| |
This shouldn't make much of a difference, but should make the following
commit simpler.
|
|
|
|
|
| |
This should be ok. eglBindTexImage() just associates the texture, and
does not make a copy (not even a conceptual one).
|
|
|
|
|
| |
Not much we can do about it. If there are many complaints, a mechanism to
automatically disable interop in such cases could be added.
|
|
|
|
|
|
|
|
|
| |
Basically this gets rid of the need for the accessors in d3d11va.h, and
the code can be cleaned up a little bit.
Note that libavcodec only defines a ID3D11VideoDecoderOutputView pointer
in the last plane pointers, but it tolerates/passes through the other
plane pointers we set.
|
|
|
|
|
|
| |
We want to prefer d3d11va over dxva2 in all cases. But since dxva2's
copyback is currently more efficient than d3d11va's, d3d11va-copy should
come last.
|