path: root/video/out/opengl/hwdec_vaegl.c
* vo_opengl: hwdec: reset hw_subfmt field (wm4, 2016-07-15)
In theory, an mp_image_params with hw_subfmt set to non-0 while imgfmt is not a hwaccel format is invalid. (It worked fine because nothing checks this yet.)
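A minimal sketch of the invariant, assuming mpv's usual names (the helper itself is hypothetical; IMGFMT_IS_HWACCEL is taken to be the existing hwaccel-format check):

    // Hypothetical helper: hw_subfmt only means something for hwaccel
    // formats, so reset it whenever imgfmt is a plain software format.
    static void fixup_hw_subfmt(struct mp_image_params *p)
    {
        if (!IMGFMT_IS_HWACCEL(p->imgfmt))
            p->hw_subfmt = 0;
    }
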
* video: change hw_subfmt meaning (wm4, 2016-07-15)
The hw_subfmt field roughly corresponds to the field AVHWFramesContext.sw_format in ffmpeg. The ffmpeg one is of the type AVPixelFormat (instead of the underlying hardware format), so it's a good idea to switch to this too, as preparation.

Now the hw_subfmt field is an mp_imgfmt instead of an opaque/API-specific number. VDPAU and Direct3D11 already used mp_imgfmt, but Videotoolbox and VAAPI had to be switched.

One somewhat user-visible change is that the verbose log will now always show the hw_subfmt as an image format, instead of as a nonsensical number.

(In the end it would be good if we could switch to AVHWFramesContext completely, but the upstream API is incomplete and doesn't cover Direct3D11 and Videotoolbox.)
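A sketch of the kind of translation this implies on the VAAPI side, assuming mpv's image-format constants from video/img_format.h; the function name is hypothetical and the table deliberately tiny:

    #include <stdint.h>
    #include <va/va.h>

    // Map a driver-reported VAAPI fourcc to an mp_imgfmt for hw_subfmt.
    static int va_fourcc_to_mp_imgfmt(uint32_t fourcc)
    {
        switch (fourcc) {
        case VA_FOURCC_NV12: return IMGFMT_NV12;
        case VA_FOURCC_YV12: return IMGFMT_420P; // chroma planes swapped, see below
        default:             return 0;           // unknown: leave hw_subfmt unset
        }
    }
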
* vo_opengl: refactor how hwdec interop exports textures (wm4, 2016-05-10)
Rename gl_hwdec_driver.map_image to map_frame, and let it fill out a struct gl_hwdec_frame describing the exact texture layout. This gives more flexibility to what the hwdec interop can export. In particular, it can export strange component orders/permutations and textures with padded size. (The latter originates from cropped video.)

The way gl_hwdec_frame works is in the spirit of the rest of the vo_opengl video processing code, which tends to put as much information as possible into immediate state (as part of the dataflow), instead of declaring it globally. To some degree this duplicates the texplane and img_tex structs, but until we somehow unify those, it's better to give the hwdec state its own struct. The fact that changing the hwdec struct would require changes and testing on at least 4 platform/GPU combinations makes duplicating it almost a requirement to avoid pain later.

Make gl_hwdec_driver.reinit set the new image format and remove the gl_hwdec.converted_imgfmt field. Likewise, gl_hwdec.gl_texture_target is replaced with gl_hwdec_plane.gl_target.

Split out an init_image_desc function from init_format. The latter is not called in the hwdec case at all anymore. Setting up most of struct texplane is also completely separate in the hwdec and normal cases.

video.c does not check whether the hwdec "mapped" image format is supported. This should not really happen anyway, and if it does, the hwdec interop backend must fail at creation time, so this is not an issue.
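A hedged sketch of the shape this description implies; the real structs live in mpv's vo_opengl code and may differ in detail:

    #include <GL/gl.h>

    struct gl_hwdec_plane {
        GLuint gl_texture;  // texture exported by the interop backend
        GLenum gl_target;   // replaces the global gl_hwdec.gl_texture_target
        int tex_w, tex_h;   // padded texture size (may exceed the visible video)
    };

    struct gl_hwdec_frame {
        struct gl_hwdec_plane planes[4]; // per-frame immediate state
    };
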
* video: refactor how VO exports hwdec device handles (wm4, 2016-05-09)
The main change is in video/hwdec.h. mp_hwdec_info is made opaque (and renamed to mp_hwdec_devices). Its accessors are mainly thread-safe (or documented where not), which makes the whole thing saner and cleaner. In particular, thread-safety rules become less subtle and more obvious.

The new internal API makes it easier to support multiple OpenGL interop backends. (Although this is not done yet, and it's not clear whether it ever will be.)

This also removes all the API-specific fields from mp_hwdec_ctx and replaces them with a "ctx" field. For d3d in particular, we drop the mp_d3d_ctx struct completely, and pass the interfaces directly.

Remove the emulation checks from vaapi.c and vdpau.c; they are pointless, and the checks that matter are done on the VO layer.

The d3d hardware decoders might slightly change behavior: dxva2-copy will not use the VO device anymore if the VO supports proper interop. This pretty much assumes that in such cases the VO will not use any form of exclusive mode, which makes using the VO device in copy mode unnecessary.

This is a big refactor. Some things may be untested and could be broken.
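A sketch of what the opaque, accessor-based design looks like; the names here are illustrative rather than the exact mpv API:

    struct mp_hwdec_devices;  // opaque; internals hidden behind accessors

    struct mp_hwdec_ctx {
        const char *driver_name;
        void *ctx;  // single generic field replacing the API-specific ones
    };

    // Accessors are (mainly) thread-safe, so callers need not know the
    // locking rules of the owning VO:
    void hwdec_devices_add(struct mp_hwdec_devices *devs,
                           struct mp_hwdec_ctx *dev);
    struct mp_hwdec_ctx *hwdec_devices_get_first(struct mp_hwdec_devices *devs);
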
* vo_opengl: EGL: fix hwdec probing (wm4, 2016-05-05)
If ANGLE was probed before (but rejected), the ANGLE API can remain "initialized", and eglGetCurrentDisplay() will return a non-NULL EGLDisplay. If a native GL context is then used, the ANGLE/EGL API will (apparently) keep working alongside the native OpenGL API. Since GL objects are just numbers, they'll simply fail to interact, and OpenGL will get invalid textures. For some reason this results in black textures.

With VAAPI-EGL, something similar could happen in theory, but didn't in practice.
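The hazard in miniature, using only standard EGL; the point is that this check alone must not gate interop selection:

    #include <EGL/egl.h>

    static int egl_seems_active(void)
    {
        // After a rejected ANGLE probe this can still return a live
        // EGLDisplay even though rendering happens on a native GL
        // context - a non-NULL result does not mean the display
        // belongs to the context actually in use.
        return eglGetCurrentDisplay() != EGL_NO_DISPLAY;
    }
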
* vaapi: determine surface format in decoder, not in renderer (wm4, 2016-04-11)
Until now, we have made the assumption that a driver will use only one hardware surface format. The format is dictated by the driver (you don't create surfaces with a specific format - you just pass an rt_format and get a surface that will be in a specific driver-chosen format).

In particular, the renderer created a dummy surface to probe the format, and hoped the decoder would produce the same format. Due to a driver bug, this required a workaround to actually get the same format as the driver did.

Change this so that the format is determined in the decoder. The format is then passed down as hw_subfmt, which allows the renderer to configure itself with the correct format. If the hardware surface changes its format midstream, the renderer can be reconfigured using the normal mechanisms.

This calls va_surface_init_subformat() each time the decoder returns a surface. Since libavcodec/AVFrame has no concept of sub-formats, this is unavoidable. It creates and destroys a derived VAImage, but this shouldn't have any bad performance effects (at least I didn't notice any measurable effects).

Note that vaDeriveImage() failures are silently ignored, as some drivers (the vdpau wrapper) support neither vaDeriveImage nor EGL interop. In addition, we still probe whether we can map an image in the EGL interop code. This is important, as it's the only way to determine whether EGL interop is supported at all. With respect to the driver bug mentioned above, it doesn't matter which format the test surface has.

In vf_vavpp, also remove the rt_format guessing business. I think the existing logic was a bit meaningless anyway. It's not even a given that vavpp produces the same rt_format for output.
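A hedged sketch of the probing step described above, with error handling trimmed; this is the standard libva way to learn a surface's driver-chosen format:

    #include <stdint.h>
    #include <va/va.h>

    static uint32_t probe_surface_fourcc(VADisplay dpy, VASurfaceID surface)
    {
        VAImage img;
        if (vaDeriveImage(dpy, surface, &img) != VA_STATUS_SUCCESS)
            return 0;  // e.g. the vdpau wrapper: silently treat as unknown
        uint32_t fourcc = img.format.fourcc;
        vaDestroyImage(dpy, img.image_id);
        return fourcc;
    }
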
* vo_opengl: hwdec: use IDs for API, and log which backend is used (wm4, 2016-02-01)
Since there can be multiple backends for a single API (vaapi can use GLX or EGL), not logging the exact backend name is annoying. So add it. At the same time, there is no need to duplicate the name as used by the --hwdec options, so replace it with the numeric hwdec API ID.
* vo_opengl: vaapi: don't expect EGL exts. to be in common ext. string (wm4, 2016-01-22)
* vo_opengl: vaapi: reorganize platform entrypoints as table (wm4, 2016-01-21)
* vo_opengl: add KMS/DRM VAAPI hardware decoding interop (wm4, 2016-01-20)
Just requires glueing it together with Bloat Super Glue (tm).
* vaapi: replace VA_STR_FOURCC (wm4, 2016-01-11)
* vo_opengl: hwdec_vaegl: change license to LGPL 2.1 (wm4, 2016-01-07)
This file claims to be based on the "MPlayer VA-API patch", but this is untrue. Only some glue code was copied from hwdec_vaglx.c, and this glue code was never in MPlayer or the MPlayer VA-API patch in any form; it was instead part of the mpv-original way we do hardware decoding OpenGL interop. The EGL interop method didn't exist at the time the MPlayer VA-API patch was created either.
* vo_opengl: never load vaapi GLX interop by default (wm4, 2015-11-09)
Causes more harm than it helps. Will eventually be removed.

Also rename the "reject_emulated" field to "probing" - this is more appropriate now.
* vo_opengl: vaapi: fix compilation failure on older systems (wm4, 2015-10-23)
Older systems have certain EGL extension definitions missing. We redefine them to keep the build system simpler, and because it's trivial. But we forgot to define the EGL_LINUX_DMA_BUF_EXT identifier. (I hope it's the only missing one.)
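The fix is the usual inline fallback; 0x3270 is the value from the EGL_EXT_image_dma_buf_import extension specification:

    #ifndef EGL_LINUX_DMA_BUF_EXT
    #define EGL_LINUX_DMA_BUF_EXT 0x3270
    #endif
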
* vo_opengl: remove leftover variable from vaglx in vaegl (Emmanuel Gil Peyrot, 2015-10-02)
This was preventing compilation on systems without X11 headers.
* vo_opengl: vaapi: add Wayland support (wm4, 2015-09-27)
Pretty trivial with the new EGL interop.

Fixes #478.
* vaapi: remove dependency on X11 (wm4, 2015-09-27)
There are at least two ways of using VAAPI without X11 (Wayland, DRM). Remove the X11 requirement from the decoder part and the EGL interop. This will be used by a following commit, which adds Wayland support.

The worst part of this is the decoder, which includes a bad hack for using the decoder without any VO interop (also known as "vaapi-copy" mode). Separate the X11 parts so that they're self-contained. For the EGL interop code we do something similar (it's kept slightly simpler, because it essentially only has to translate between our silly MPGetNativeDisplay abstraction and the vaGetDisplay...() call).
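A hedged sketch of that translation; the HAVE_* macros are illustrative build-config switches, and "native" stands for whatever the MPGetNativeDisplay abstraction hands back on the current platform:

    #include <va/va.h>
    #if HAVE_WAYLAND
    #include <va/va_wayland.h>
    static VADisplay native_to_va_display(void *native)
    {
        return vaGetDisplayWl(native);          // native: struct wl_display *
    }
    #elif HAVE_X11
    #include <va/va_x11.h>
    static VADisplay native_to_va_display(void *native)
    {
        return vaGetDisplay(native);            // native: Display *
    }
    #else
    #include <va/va_drm.h>
    static VADisplay native_to_va_display(void *native)
    {
        return vaGetDisplayDRM(*(int *)native); // native: pointer to a DRM fd
    }
    #endif
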
* vo_opengl: vaapi: provide symbols for missing extensions (wm4, 2015-09-27)
We could also just check at build time, but since it's not much, we just redefine them inline if not present.
* vo_opengl: vaapi: redo how EGL extensions are loaded (wm4, 2015-09-27)
It looks like my hope that we can unconditionally include EGL headers in the OpenGL code is not coming true, because OSX does not support EGL at all. So I prefer loading the VAAPI EGL/GL specific extensions manually, because it's less of a mess.

Partially reverts commit d47dff3f.
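A sketch of the manual loading this describes, shown with the standard eglext.h typedefs for brevity (mpv defines its own equivalents precisely so it does not need that header everywhere):

    #include <EGL/egl.h>
    #include <EGL/eglext.h>

    static PFNEGLCREATEIMAGEKHRPROC  p_eglCreateImageKHR;
    static PFNEGLDESTROYIMAGEKHRPROC p_eglDestroyImageKHR;

    // Resolve the extension entrypoints at runtime instead of linking
    // against prototypes that may not exist on every platform.
    static int load_egl_interop_functions(void)
    {
        p_eglCreateImageKHR = (PFNEGLCREATEIMAGEKHRPROC)
            eglGetProcAddress("eglCreateImageKHR");
        p_eglDestroyImageKHR = (PFNEGLDESTROYIMAGEKHRPROC)
            eglGetProcAddress("eglDestroyImageKHR");
        return p_eglCreateImageKHR && p_eglDestroyImageKHR;
    }
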
* vo_opengl: vaapi: probe the surface format (wm4, 2015-09-26)
Probe the surface format, and check whether it's really something we support. This also does a complete check whether the EGL interop works at all (the only way to find this out is actually running this code).

Also, support YV12. Under some circumstances, vaapi (with Intel drivers) can be made to use this format. Unfortunately, the Intel drivers show some very weird behavior, which is hopefully a bug. insane_hack() provides a very evil workaround (see comments).

A proper solution might be passing the hw format as part of mp_image_params, but as long as hw surfaces appear to be able to change the format on the fly, attempting this is probably not worth the extra complexity and likely fragility. The hack allows us to pretend that there is sane behavior for now.
* vo_opengl: vaapi: undo vaAcquireBufferHandle() correctly on error (wm4, 2015-09-25)
Checking and resetting the VAImage.buf field is nonsense, even if it happened to work out in the normal case. buf is actually freed when vaDestroyImage() is called (not quite intuitive), and we need an extra field to know whether vaReleaseBufferHandle() has to be called.
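A hedged sketch of the corrected unwind; the extra flag is the field the commit alludes to, and VAImage.buf must never be touched after vaDestroyImage():

    #include <stdbool.h>
    #include <va/va.h>

    static void map_and_cleanup(VADisplay dpy, VAImage *image)
    {
        VABufferInfo info = { .mem_type = VA_SURFACE_ATTRIB_MEM_TYPE_DRM_PRIME };
        bool acquired =
            vaAcquireBufferHandle(dpy, image->buf, &info) == VA_STATUS_SUCCESS;

        // ... on success, hand info.handle to the EGL import code; any
        // failure simply falls through to the cleanup below ...

        if (acquired)
            vaReleaseBufferHandle(dpy, image->buf);
        vaDestroyImage(dpy, image->image_id); // this also frees image->buf
    }
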
* vo_opengl: vaapi: handle YV12 correctly (wm4, 2015-09-25)
This specific FourCC has its planes swapped compared to FFmpeg yuv420p.
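In code terms (a sketch with a hypothetical 3-entry plane array): VA_FOURCC_YV12 stores V before U, the reverse of yuv420p, so the chroma entries get swapped:

    #include <stdint.h>
    #include <va/va.h>

    static void fixup_yv12_planes(uint32_t fourcc, unsigned planes[3])
    {
        if (fourcc == VA_FOURCC_YV12) {
            unsigned tmp = planes[1]; // YV12: slot 1 holds V
            planes[1] = planes[2];    // move U into slot 1 (yuv420p order)
            planes[2] = tmp;          // move V into slot 2
        }
    }
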
* vo_opengl: vaapi: document DRM fourcc upstream defines (wm4, 2015-09-25)
Add the upstream symbolic names as comments. Normally, these should be defined in libdrm's drm_fourcc.h header. But DRM_FORMAT_R8 and DRM_FORMAT_GR88 are not defined anywhere, except in the kernel userland headers of Linux 4.3 (!). We don't want mpv to depend on bleeding-edge Linux kernel headers, so this will have to do.

Also, just for completeness, add fourccs for the 3- and 4-channel formats. I didn't manage to test them, though.
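A sketch of the resulting fallback defines; the values follow drm_fourcc.h's fourcc_code('R','8',' ',' ') scheme, so they match the kernel headers where those exist:

    #ifndef DRM_FORMAT_R8
    #define DRM_FORMAT_R8   0x20203852  // fourcc_code('R', '8', ' ', ' ')
    #endif
    #ifndef DRM_FORMAT_GR88
    #define DRM_FORMAT_GR88 0x38385247  // fourcc_code('G', 'R', '8', '8')
    #endif
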
* vo_opengl: vaapi: use dummy image to determine plane layout (wm4, 2015-09-25)
Reduces the amount of hardcoded assumptions about the layout drastically. (Now adding yuv420 support would just be a matter of adjusting an if - if you ignore the other problems, such as determining the hw format early enough in the first place.)
* vo_opengl: vaapi: remove unnecessary loop (wm4, 2015-09-25)
Not sure what I was thinking.
* vo_opengl: vaapi: fix cleanup (wm4, 2015-09-25)
Don't call eglDestroyImageKHR() on the same ID possibly more than once. Clear the image reference on termination, or we would leak up to 1 image per VO recreation.
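A hedged sketch of the fixed cleanup, with a hypothetical per-interop state struct; it assumes the extension entrypoint was resolved as in the loading sketch above:

    #include <EGL/egl.h>
    #include <EGL/eglext.h>

    extern PFNEGLDESTROYIMAGEKHRPROC p_eglDestroyImageKHR;

    struct priv { EGLDisplay display; EGLImageKHR images[4]; };

    static void unmap_images(struct priv *p)
    {
        for (int n = 0; n < 4; n++) {
            // Destroy each EGLImage at most once, and clear the reference
            // so a later pass can neither double-free nor leak it.
            if (p->images[n])
                p_eglDestroyImageKHR(p->display, p->images[n]);
            p->images[n] = 0;
        }
    }
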
* vo_opengl: support new VAAPI EGL interop (wm4, 2015-09-25)
Should work much better than the old GLX interop code. Requires Mesa 11, and explicitly selecting the X11 EGL backend with:

    --vo=opengl:backend=x11egl

Should it turn out that the new interop works well, we will try to autodetect EGL by default.

This code still uses some bad assumptions, like expecting surfaces to be in NV12. (This is probably ok, because virtually all HW will use this format. But we should at least check this on init or so, instead of failing to render an image if our assumption doesn't hold up.)

This repo was a lot of help: https://github.com/gbeauchesne/ffvademo

The kodi code was also helpful (the magic FourCCs it uses for EGL_LINUX_DRM_FOURCC_EXT are documented nowhere, and EGL_IMAGE_INTERNAL_FORMAT_EXT as used in ffvademo does not actually exist).

(This is the 3rd VAAPI GL interop that was implemented in this player.)
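A hedged sketch of the heart of the interop, one plane's worth: wrap a dma-buf handle obtained via vaAcquireBufferHandle() in an EGLImage and attach it to a GL texture. The attribute names come from EGL_EXT_image_dma_buf_import; the entrypoints are assumed to be resolved via eglGetProcAddress() as in the loading sketch above:

    #include <EGL/egl.h>
    #include <EGL/eglext.h>
    #include <GLES2/gl2.h>
    #include <GLES2/gl2ext.h>
    #include <stdint.h>
    #include <va/va.h>

    extern PFNEGLCREATEIMAGEKHRPROC p_eglCreateImageKHR;
    extern PFNGLEGLIMAGETARGETTEXTURE2DOESPROC p_glEGLImageTargetTexture2DOES;

    static EGLImageKHR import_plane(EGLDisplay display, GLuint texture,
                                    int w, int h, uint32_t drm_fourcc,
                                    const VABufferInfo *buf, const VAImage *image)
    {
        EGLint attribs[] = {
            EGL_WIDTH, w,
            EGL_HEIGHT, h,
            EGL_LINUX_DRM_FOURCC_EXT, (EGLint)drm_fourcc, // e.g. R8 for the NV12 luma plane
            EGL_DMA_BUF_PLANE0_FD_EXT, (EGLint)buf->handle,
            EGL_DMA_BUF_PLANE0_OFFSET_EXT, (EGLint)image->offsets[0],
            EGL_DMA_BUF_PLANE0_PITCH_EXT, (EGLint)image->pitches[0],
            EGL_NONE
        };
        EGLImageKHR img = p_eglCreateImageKHR(display, EGL_NO_CONTEXT,
                                              EGL_LINUX_DMA_BUF_EXT, NULL, attribs);
        glBindTexture(GL_TEXTURE_2D, texture);
        p_glEGLImageTargetTexture2DOES(GL_TEXTURE_2D, img);
        return img;
    }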