| Commit message | Author | Age |
| |
Notes:
- Unfortunately, the only way I could find to talk to EGL from within DRM
  involves linking with GBM (Mesa's Generic Buffer Management). Because of
  this, I'm pretty sure it won't work with the proprietary NVidia drivers,
  but then again, last time I checked NVidia didn't offer proper screen
  resolution for VT. (A rough sketch of this GBM/EGL setup follows these
  notes.)
- VT switching doesn't seem to work at all. It's worth mentioning that
  using vo_drm before the introduction of the VT switcher had an anomaly
  where the user could switch to another VT and input text into it, while
  the video played on top of that VT. However, that isn't the case with
  drm_egl: I can't switch to another VT during playback like this. This
  makes me think that it's a limitation coming either from my firmware or
  from EGL/KMS itself rather than a bug in my code. Nonetheless, I still
  left the (untestable) VT switching code in place, in case it's useful to
  someone else.
- The mode_id, connector_id and device_path should be configurable for
  power users and people who wish to watch videos on a nonprimary screen.
  Unfortunately, I didn't see anything that would allow OpenGL backends to
  register their own set of options. At the same time, adding them to the
  global namespace is pointless.
- A few dozen lines could be shared with vo_drm (setting up VT switching,
  most of the code behind page flipping). I don't have any strong opinion
  on this.
- Sometimes I get minor visual glitches. I'm not sure if there's a race
  condition of some sort, an uninitialized variable (doubtful), or if it's
  a buggy driver. (I'm using integrated Intel HD Graphics 4400 with Mesa.)
- .config and .control are very minimal.
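As a rough illustration of the GBM dependency mentioned in the first note,
here is a minimal sketch (not the actual vo_drm_egl code; device path
selection, KMS mode setup and error handling are omitted) of bringing up
EGL on a DRM device through GBM:

    #include <fcntl.h>
    #include <gbm.h>
    #include <EGL/egl.h>

    /* Sketch: create an EGL context/surface on top of a DRM device via GBM.
     * Real code must also pick a connector/CRTC/mode through KMS and
     * page-flip the GBM front buffer after each eglSwapBuffers(). */
    static int egl_on_drm(const char *path, int w, int h)
    {
        int fd = open(path, O_RDWR | O_CLOEXEC);   /* e.g. /dev/dri/card0 */
        struct gbm_device *gbm = gbm_create_device(fd);
        struct gbm_surface *gbm_surf =
            gbm_surface_create(gbm, w, h, GBM_FORMAT_XRGB8888,
                               GBM_BO_USE_SCANOUT | GBM_BO_USE_RENDERING);

        EGLDisplay dpy = eglGetDisplay((EGLNativeDisplayType)gbm);
        eglInitialize(dpy, NULL, NULL);
        eglBindAPI(EGL_OPENGL_API);

        EGLint num;
        EGLConfig config;
        const EGLint attrs[] = {EGL_RENDERABLE_TYPE, EGL_OPENGL_BIT, EGL_NONE};
        eglChooseConfig(dpy, attrs, &config, 1, &num);

        EGLContext ctx = eglCreateContext(dpy, config, EGL_NO_CONTEXT, NULL);
        EGLSurface surf = eglCreateWindowSurface(dpy, config,
                                 (EGLNativeWindowType)gbm_surf, NULL);
        return eglMakeCurrent(dpy, surf, surf, ctx) ? 0 : -1;
    }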
Signed-off-by: wm4 <wm4@nowhere>
|
| |
This was once done when old versions of libva without VPP had to be
supported. Some parts of it had already been removed earlier.
|
| |
0.34 and 0.35 don't have the buffer API, such as vaAcquireBufferHandle.
This is only needed for the EGL interop, but why bother staying
compatible with such old versions (0.36 was released over a year ago)?
We can also drop some minor compatibility ifdeffery.
|
| |
This AVPacket field was a hack against the fact that the duration field
was merely an int (too small for things like subtitle durations). Newer
libavcodec drops this field and makes duration 64-bit.
|
| |
VideoToolbox is preferred. Now that FFmpeg has released 2.8, there's no
reason to support VDA anymore. In fact, we had a bug that made VDA
unusable with older FFmpeg versions in some newer mpv releases.
VideoToolbox is supported even on slightly older OSX versions, and if
it isn't, you can still run mpv without hw decoding.
|
| |
Pretty trivial with the new EGL interop.
Fixes #478.
|
| |
Move the ugliness from x11egl.c to common.c, so that the ugliness
doesn't have to be duplicated in wayland.c.
|
| |
There are at least 2 ways of using VAAPI without X11 (Wayland, DRM).
Remove the X11 requirement from the decoder part and the EGL interop.
This will be used by a following commit, which adds Wayland support.
The worst part of this is the decoder, which includes a bad hack for
using it without any VO interop (also known as "vaapi-copy" mode).
Separate the X11 parts so that they're self-contained. For the EGL
interop code we do something similar (it's kept slightly simpler,
because it essentially only has to translate between our silly
MPGetNativeDisplay abstraction and the vaGetDisplay...() call).
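For illustration, that translation boils down to picking the right
vaGetDisplay*() variant for whatever native display the windowing backend
provides. This is only a sketch with a made-up struct, not mpv's actual
MPGetNativeDisplay code:

    #include <va/va_x11.h>      /* vaGetDisplay()    */
    #include <va/va_wayland.h>  /* vaGetDisplayWl()  */
    #include <va/va_drm.h>      /* vaGetDisplayDRM() */

    /* Hypothetical container for whatever the active backend handed us. */
    struct native_display {
        Display *x11;           /* X11 backend */
        struct wl_display *wl;  /* Wayland backend */
        int drm_fd;             /* DRM backend, -1 if unused */
    };

    static VADisplay get_va_display(const struct native_display *nd)
    {
        if (nd->x11)
            return vaGetDisplay(nd->x11);
        if (nd->wl)
            return vaGetDisplayWl(nd->wl);
        if (nd->drm_fd >= 0)
            return vaGetDisplayDRM(nd->drm_fd);
        return NULL;
    }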
|
| |
It looks like my hope that we can unconditionally include EGL headers in
the OpenGL code is not coming true, because OSX does not support EGL at
all. So I prefer loading the VAAPI EGL/GL specific extensions manually,
because it's less of a mess. Partially reverts commit d47dff3f.
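"Loading manually" here amounts to resolving a handful of entry points with
eglGetProcAddress() in the VAAPI-specific code, instead of pulling the EGL
headers into the generic GL loader. A minimal sketch (the exact set of
functions mpv resolves may differ):

    #include <stdbool.h>
    #include <EGL/egl.h>
    #include <EGL/eglext.h>
    #include <GLES2/gl2ext.h>

    static PFNEGLCREATEIMAGEKHRPROC CreateImageKHR;
    static PFNEGLDESTROYIMAGEKHRPROC DestroyImageKHR;
    static PFNGLEGLIMAGETARGETTEXTURE2DOESPROC EGLImageTargetTexture2DOES;

    /* Resolve only what the VAAPI interop needs. */
    static bool load_egl_interop_functions(void)
    {
        CreateImageKHR = (PFNEGLCREATEIMAGEKHRPROC)
            eglGetProcAddress("eglCreateImageKHR");
        DestroyImageKHR = (PFNEGLDESTROYIMAGEKHRPROC)
            eglGetProcAddress("eglDestroyImageKHR");
        EGLImageTargetTexture2DOES = (PFNGLEGLIMAGETARGETTEXTURE2DOESPROC)
            eglGetProcAddress("glEGLImageTargetTexture2DOES");
        return CreateImageKHR && DestroyImageKHR && EGLImageTargetTexture2DOES;
    }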
|
| |
Fixes #2344.
|
| |
This makes it much faster if the surface is really mapped from GPU
memory. It's slightly slower than the system memcpy if used on system
memory. We can't reliably determine which type of memory it's in, so
we use the GPU memcpy in all cases.
Fixes #2317.
|
| |
Make the GPU memcpy from the dxva2 code generally useful to other parts
of the player.
We need to check at configure time whether SSE intrinsics work at all.
(At least in this form, they won't work on clang, for example. They
also won't work on non-x86.)
Introduce a mp_image_copy_gpu(), and make the dxva2 code use it. Do some
awkward stuff to share the existing code used by mp_image_copy(). I'm
hoping that FFmpeg will sooner or later provide a function like this, so
we can remove most of this again. (There is a patch, but it's stuck in
limbo since forever.)
All this is used by the following commit.
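For reference, the core of such a "GPU memcpy" is an SSE4.1 streaming-load
loop roughly like the sketch below. This is not the actual
mp_image_copy_gpu() code; alignment handling and tail bytes are omitted,
and size is assumed to be a multiple of 64 with 16-byte-aligned buffers:

    #include <stddef.h>
    #include <smmintrin.h>   /* SSE4.1: _mm_stream_load_si128 */

    /* Copy from uncacheable write-combining (USWC) GPU memory using streaming
     * loads, which avoid the heavy penalty of ordinary cached reads from such
     * memory. */
    static void gpu_memcpy_sse41(void *dst, const void *src, size_t size)
    {
        __m128i *d = dst;
        const __m128i *s = src;
        for (size_t i = 0; i < size / 16; i += 4) {
            __m128i a = _mm_stream_load_si128((__m128i *)(s + i + 0));
            __m128i b = _mm_stream_load_si128((__m128i *)(s + i + 1));
            __m128i c = _mm_stream_load_si128((__m128i *)(s + i + 2));
            __m128i e = _mm_stream_load_si128((__m128i *)(s + i + 3));
            _mm_store_si128(d + i + 0, a);
            _mm_store_si128(d + i + 1, b);
            _mm_store_si128(d + i + 2, c);
            _mm_store_si128(d + i + 3, e);
        }
        _mm_mfence();   /* order the copy against later normal accesses */
    }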
|
| |
Broken by commit d47dff3f. If something is going to include EGL.h,
header_fixes.h has to know. This definitely affected vo_rpi, and
probably affects wayland builds (with x11egl disabled) as well.
|
| |
Should work much better than the old GLX interop code. Requires Mesa 11,
and explicitly selecting the X11 EGL backend with:
--vo=opengl:backend=x11egl
Should it turn out that the new interop works well, we will try to
autodetect EGL by default.
This code still uses some bad assumptions, like expecting surfaces to be
in NV12. (This is probably ok, because virtually all HW will use this
format. But we should at least check this on init or so, instead of
failing to render an image if our assumption doesn't hold up.)
This repo was a lot of help: https://github.com/gbeauchesne/ffvademo
The kodi code was also helpful (the magic FourCCs it uses for
EGL_LINUX_DRM_FOURCC_EXT are documented nowhere, and
EGL_IMAGE_INTERNAL_FORMAT_EXT as used in ffvademo does
not actually exist).
(This is the 3rd VAAPI GL interop that was implemented in this player.)
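Not the actual interop code, but the rough shape of it (NV12 assumption
included) looks like this sketch; the buffer export via
vaDeriveImage()/vaAcquireBufferHandle() and all error handling are omitted:

    #include <stdint.h>
    #include <EGL/egl.h>
    #include <EGL/eglext.h>
    #include <GLES2/gl2.h>
    #include <GLES2/gl2ext.h>

    /* Extension entry points, assumed to be resolved via eglGetProcAddress(). */
    static PFNEGLCREATEIMAGEKHRPROC CreateImageKHR;
    static PFNGLEGLIMAGETARGETTEXTURE2DOESPROC EGLImageTargetTexture2DOES;

    /* Wrap one plane of an NV12 surface, already exported as a dma-buf, into
     * a GL texture. fourcc is DRM_FORMAT_R8 for the Y plane and
     * DRM_FORMAT_GR88 for the interleaved UV plane; fd/offset/pitch come from
     * the exported VA buffer (not shown). */
    static GLuint import_plane(EGLDisplay dpy, int fd, uint32_t fourcc,
                               int w, int h, int offset, int pitch)
    {
        const EGLint attribs[] = {
            EGL_WIDTH, w,
            EGL_HEIGHT, h,
            EGL_LINUX_DRM_FOURCC_EXT, (EGLint)fourcc,
            EGL_DMA_BUF_PLANE0_FD_EXT, fd,
            EGL_DMA_BUF_PLANE0_OFFSET_EXT, offset,
            EGL_DMA_BUF_PLANE0_PITCH_EXT, pitch,
            EGL_NONE
        };
        EGLImageKHR image = CreateImageKHR(dpy, EGL_NO_CONTEXT,
                                           EGL_LINUX_DMA_BUF_EXT, NULL, attribs);
        GLuint tex;
        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_2D, tex);
        EGLImageTargetTexture2DOES(GL_TEXTURE_2D, image);
        return tex;   /* the fragment shader then samples Y/UV and converts */
    }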
|
| |
The VAAPI EGL interop code will need access to the X11 Display. While
GLX could return it from the current GLX context, EGL has no such
mechanism. (At least no standard one supported by all implementations.)
So mpv makes up such a mechanism.
For internal purposes, this is a rather awkward solution, but it's
needed for libmpv anyway.
|
| |
These extensions use a bunch of EGL types, so we need to include the EGL
headers in common.h to use our GL function loader with this.
In the future, we should probably require presence of the EGL headers to
reduce the hacks. This might not be so simple, at least on OSX, so for
now this has to do.
|
| |
Originally, this was disabled, because it was useless and I was afraid
it could possibly break autodetection and proper fallback.
|
| |
It's possible to build vf_vapoursynth with either the Python or Lua
backend (or both or none). The check for the vapoursynth core itself was
hidden away and couldn't be disabled, which would link mpv with
vapoursynth even if all backends were disabled.
Rearrange the checks so that the core will be disabled if no backend is
found (or both are disabled). This duplicates the check for
vapoursynth.pc, but since it's trivial, this is not that bad.
|
| |
These were normalized and are saner now. We want to use the new fields,
and also get rid of the deprecation warnings, so use them. There's no
release yet which uses these, so some ifdeffery is unfortunately needed.
|
| |
Some users still use this filter, so it was going to be kept. But I
overlooked that libavfilter provides the same filter. Remove the
redundant wrapper from mpv. Something like --af=lavfi=bs2b should work
and give exactly the same results.
|
| |
We no longer consider any of these filters useful. Some have
replacements in libavfilter (usable through af_lavfi).
af_center, af_extrastereo, af_karaoke, af_sinesuppress, af_sub,
af_surround, af_sweep: pretty simple and useless filters which probably
nobody ever wants.
af_ladspa: has a replacement in libavfilter.
af_hrtf: the algorithm doesn't work properly on most sources, and the
implementation was buggy and complicated. (The filter was inherited from
MPlayer, but even in mpv times we had to apply fixes for major
issues with added noise.) There is a ladspa filter if you still want to
use it.
af_export: I'm not even sure what this is supposed to do. Possibly it
was meant for GUIs rendering audio visualizations, but it couldn't
really work well. For example, the size of the audio depended on the
samplerate (fixed number of samples only), and it couldn't retrieve the
complete audio, only fragments. If this is really needed for GUIs, mpv
should add native visualization, or a proper API for it.
|
| |
This reverts commit 1b7883a3e57a39edfd39732efde5cd02417e3ebd.
|
| |
Even though the rpi check fails, it'll define "HAVE_RPI 1" in config.h.
Why? Who knows...
|
| |
Slightly faster than using the dispmanx mess (perhaps largely due to
the rather stupid, unoptimized C-only ASS->RGBA blending code).
Do this by reusing vo_opengl's subtitle renderer, and vo_opengl's RPI
backend.
|
| |
This works similar to the existing .rar support, but uses libarchive.
libarchive supports a number of formats, including zip and (most of)
rar.
Unfortunately, seeking does not work too well. Most libarchive readers
do not support seeking, so it's emulated by skipping data until the
target position. On backwards seek, the file is reopened. This works
fine on a local machine (and if the file is not too large), but will
not perform well over a network connection.
This is disabled by default for now. One reason is that we try
libarchive on every file we open, before trying libavformat, and I'm not
sure if I trust libarchive that much yet. Another reason is that this
breaks multivolume rar support. While libarchive supports seeking in
rar, and (probably) supports multivolume archives, our libarchive
integration (probably) does not. I don't care about multivolume rar, but
vocal users do.
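The seek emulation described above amounts to something like this sketch
(simplified; reopen_archive() is a hypothetical helper standing in for
mpv's actual stream reopen logic):

    #include <sys/types.h>
    #include <stdint.h>
    #include <archive.h>

    /* Hypothetical helper that reopens the archive/stream from the start. */
    struct archive *reopen_archive(const char *path);

    /* Emulate seeking on a non-seekable libarchive reader: reopen on
     * backwards seeks, then read and discard until the target is reached. */
    static int emulated_seek(struct archive **a, const char *path,
                             int64_t *pos, int64_t target)
    {
        char buf[4096];
        if (target < *pos) {
            archive_read_free(*a);
            *a = reopen_archive(path);
            *pos = 0;
        }
        while (*pos < target) {
            size_t chunk = sizeof(buf);
            if ((int64_t)chunk > target - *pos)
                chunk = target - *pos;
            ssize_t r = archive_read_data(*a, buf, chunk);
            if (r <= 0)
                return -1;   /* error or EOF before reaching the target */
            *pos += r;
        }
        return 0;
    }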
|
| |
While the "old" libavcodec vdpau API is not deprecated (only the very-
old API is), it's still relatively complicated code that badly
duplicates the much simpler newer vdpau code. It exists only for the
sake of older FFmpeg releases; get rid of it.
|
| |
These share some code (frequencies.c at least), so linking was failing.
Fix by making pvr depend on tv.
|
| |
This contains code for both the VT and the VDA case, and thus must be
built if at least one of them is enabled.
|
| |
VDA is being deprecated in OS X 10.11, so this is needed to keep hwdec
working. The code needs libavcodec support, which was added recently (to
FFmpeg git; Libav doesn't support it).
Signed-off-by: Stefano Pigozzi <stefano.pigozzi@gmail.com>
|
| |
The previous code did not retrigger a relink when version.h changed,
since it didn't use a waf task.
|
| |
Drop libva versions below 0.34.0. These are ancient, so I don't care.
Drop the vo_vaapi deinterlacer as well. With 0.34.0, VPP is always
available, and deinterlacing is done with vf_vavpp.
The vaCreateSurfaces() function changes its signature - actually it did
in 0.34.0 or so, and <va/va_compat.h> defined a macro that kept the old
signature usable.
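For reference, the change (as I understand the libva headers; this is a
sketch, not mpv code) means surface allocation now looks roughly like this,
with the old six-argument form mapped onto it by the compat macro:

    #include <va/va.h>

    /* libva >= 0.34.0: RT format first, plus an optional attribute list. */
    static VAStatus alloc_yuv420_surfaces(VADisplay dpy, unsigned w, unsigned h,
                                          VASurfaceID *ids, unsigned count)
    {
        return vaCreateSurfaces(dpy, VA_RT_FORMAT_YUV420, w, h,
                                ids, count, NULL, 0);
        /* Old form (pre-0.34.0), kept alive by <va/va_compat.h>:
         *   vaCreateSurfaces(dpy, width, height, format,
         *                    num_surfaces, surfaces);
         */
    }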
|
| |
The individual library versions are pretty useless. This will actually
tell us at least the git hash or git tag of the FFmpeg build.
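Assuming this uses libavutil's av_version_info() (I haven't checked which
call the commit actually adds), getting such a string is as simple as:

    #include <stdio.h>
    #include <libavutil/avutil.h>

    int main(void)
    {
        /* Prints e.g. "n2.8" for a release tag or "git-..." for a git build. */
        printf("FFmpeg version: %s\n", av_version_info());
        return 0;
    }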
|
| |
Until now, it only used the hash from the previous configure run,
instead of trying to get the latest hash. The "old" build system did
this correctly - we just have to use the existing logic in version.sh.
Since waf supports separate build dirs, extend version.sh with an
argument for setting the path of version.h.
|
| |
Yet another of these dozens of hwaccel changes. This time, libavcodec
provides utility functions, which initialize the vdpau decoder and map
codec profiles. So a lot of work the API user had to do falls away.
This will also give us support for high-bit-depth profiles, and possibly
HEVC once libavcodec supports it.
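The libavcodec helper in question is presumably av_vdpau_bind_context();
a sketch of its use follows (mpv's real hwdec_vdpau code does considerably
more):

    #include <libavcodec/avcodec.h>
    #include <libavcodec/vdpau.h>

    /* Hand libavcodec the VDPAU device and its get_proc_address entry point,
     * so it can create the VdpDecoder and map codec profiles itself. */
    static int setup_vdpau(AVCodecContext *avctx, VdpDevice device,
                           VdpGetProcAddress *get_proc_address)
    {
        return av_vdpau_bind_context(avctx, device, get_proc_address, 0);
    }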
|
| |
The hardware always decodes to nv12, so using this image format causes
lower CPU usage than uyvy (which we are currently using, since Apple
examples and other free software use that). The reduction in CPU usage
can add up to quite a bit, especially for 4k or high-fps video.
This needs an accompanying commit in libavcodec.
|
| |
This unbreaks compiling the command line player and libmpv at the same
time. The problem was that doing so silently disabled the OSX
application thing - but the command line player cannot use the
vo_opengl Cocoa backend without it.
The OSX application code is basically dead in libmpv, but it's not
that much code anyway.
If you want an mpv binary that does not create an OSX application
singleton (and a menu etc.), you must disable cocoa
completely, as cocoa can't be used anyway in this case.
|
| |
Signed-off-by: wm4 <wm4@nowhere>
|
| |
Use texture-from-pixmap instead of vaapi's "native" GLX support.
Apparently the latter is unused by other projects. Possibly it's broken
due to that, and to Intel's inability to provide anything non-broken in
relation to video.
The new code basically uses the X11 output method on an in-memory pixmap,
and maps this pixmap as texture using standard GLX mechanisms. This
requires a lot of X11 and GLX boilerplate, so the code grows. (I don't
know why libva's GLX interop doesn't just do the same under the hood,
instead of bothering the world with their broken/unmaintained "old"
method, whatever it did. I suspect that Intel programmers are just
genuine sadists.)
This change was suggested in issue #1765.
The old GLX support is removed, as it's redundant and broken anyway.
One remaining issue is that the first vaPutSurface() call fails with an
unknown error. It returns -1, which is pretty strange, because vaapi
error codes are normally positive. It happened with the old GLX code
too, but does not happen with vo_vaapi. I couldn't find out why.
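Schematically (a simplified sketch, not the actual code; FBConfig selection
with GLX_BIND_TO_TEXTURE_RGB_EXT and all cleanup are omitted), the
texture-from-pixmap path looks like this:

    #include <va/va_x11.h>
    #include <GL/glx.h>
    #include <GL/glxext.h>

    /* Extension entry point, assumed to be resolved via glXGetProcAddressARB(). */
    static PFNGLXBINDTEXIMAGEEXTPROC BindTexImageEXT;

    /* Render a VA surface into an X pixmap with plain vaPutSurface(), then
     * bind that pixmap to the currently bound GL texture. */
    static GLXPixmap map_surface(Display *dpy, VADisplay va_dpy,
                                 VASurfaceID surface, GLXFBConfig fbc,
                                 int w, int h)
    {
        Pixmap pixmap = XCreatePixmap(dpy, DefaultRootWindow(dpy), w, h, 24);
        const int attribs[] = {
            GLX_TEXTURE_TARGET_EXT, GLX_TEXTURE_2D_EXT,
            GLX_TEXTURE_FORMAT_EXT, GLX_TEXTURE_FORMAT_RGB_EXT,
            None
        };
        GLXPixmap glx_pixmap = glXCreatePixmap(dpy, fbc, pixmap, attribs);

        vaPutSurface(va_dpy, surface, pixmap, 0, 0, w, h, 0, 0, w, h,
                     NULL, 0, VA_FRAME_PICTURE);
        BindTexImageEXT(dpy, glx_pixmap, GLX_FRONT_EXT, NULL);
        return glx_pixmap;
    }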
|
| |
It was already accidentally used unconditionally by command.c.
Apparently this worked well for us, so don't change anything about that,
but should it be unavailable, fail at configure time instead of compile
time.
|
| |
This requires FFmpeg git master for accelerated hardware decoding.
Keep in mind that FFmpeg must be compiled with --enable-mmal. Libav
will also work.
Most things work. Screenshots don't work with accelerated/opaque
decoding (except using full window screenshot mode). Subtitles are
very slow - even simple but huge overlays can cause frame drops.
This always uses fullscreen mode. It uses dispmanx and mmal directly,
and there are no window managers or anything on this level.
vo_opengl also kind of works, but is pretty useless and slow. It can't
use opaque hardware decoding (copy back can be used by forcing the
option --vd=lavc:h264_mmal). Keep in mind that the dispmanx backend
is preferred over the X11 ones in case you're trying on X11; but X11
is even more useless on RPI.
This doesn't correctly reject extended h264 profiles and thus doesn't
fall back to software decoding. The hw supports only up to the high
profile, and will e.g. return garbage for Hi10P video.
This sets a precedent of enabling hw decoding by default, but only
if RPI support is compiled in (which hopefully will not be the case
on desktop Linux platforms). While it's more or less required to use
hw decoding on the weak RPI, it causes more problems than it solves
on real platforms (Linux has the Intel GPU problem, OSX still has
some cases with broken decoding). So I can live with this compromise
of having different defaults depending on the platform.
Raspberry Pi 2 is required. This wasn't tested on the original RPI,
though at least decoding itself seems to work (but full playback was
not tested).
|
| |
Why did this exist in the first place? Other than being completely
useless, this even caused some regressions in the past. For example,
there was the case of a laptop exposing its accelerometer as a joystick
device, which led to extremely fun things due to axis movement being
mapped to seeking by default.
I suppose those who really want to use their joystick to control a media
player (???) can configure it as mouse device or so.
|
| |
It's much easier to configure remotes as X11 input devices.
|
| |
We've been preferring the libavcodec mp3 decoder for half a year now.
There is likely no benefit at all in using the libmpg123 one. It's just
a maintenance burden, and tricks users into thinking it's a required
dependency.
|
| |
There's no reason to do fine-grained checks for libraries which must
always be present. It also reduces the number of extra dependencies.
|