| Commit message | Author | Age |
|
|
|
| |
Signed-off-by: wm4 <wm4@nowhere>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
This requires FFmpeg git master for accelerated hardware decoding.
Keep in mind that FFmpeg must be compiled with --enable-mmal. Libav
will also work.
Most things work. Screenshots don't work with accelerated/opaque
decoding (except using full window screenshot mode). Subtitles are
very slow - even simple but huge overlays can cause frame drops.
This always uses fullscreen mode. It uses dispmanx and mmal directly,
and there are no window managers or anything on this level.
vo_opengl also kind of works, but is pretty useless and slow. It can't
use opaque hardware decoding (copy back can be used by forcing the
option --vd=lavc:h264_mmal). Keep in mind that the dispmanx backend
is preferred over the X11 ones in case you're trying on X11; but X11
is even more useless on RPI.
This doesn't correctly reject extended h264 profiles and thus doesn't
fall back to software decoding. The hw supports only up to the high
profile, and will e.g. return garbage for Hi10P video.
This sets a precedent of enabling hw decoding by default, but only
if RPI support is compiled (which hopefully won't be the case on
desktop Linux platforms). While it's more or less required to use
hw decoding on the weak RPI, it causes more problems than it solves
on real platforms (Linux has the Intel GPU problem, OSX still has
some cases with broken decoding.) So I can live with this compromise
of having different defaults depending on the platform.
Raspberry Pi 2 is required. This wasn't tested on the original RPI,
though at least decoding itself seems to work (but full playback was
not tested).
|
|
|
|
|
|
| |
Codecs for hardware acceleration are not blacklisted, but whitelisted.
Also, if this message is printed, the codec might not have any hardware
acceleration support in the first place.
|
|
|
|
| |
This was requested. Apparently some find the old message confusing.
|
|
|
|
|
|
|
|
|
|
|
|
| |
Instead of "vaapi", simply by changing the probe order.
"vaapi" uses the GLX GL interop, which has causing us more problems than
it solved.
Unfortunately this leads also to copying if "--hwdec=auto --vo=vaapi" is
used, even though GLX is not involved in this case - but I don't care
enough to make the probe logic cleverer just for this. You can still get
the zero-copy path with --hwdec=vaapi.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Omitted a simple but devastating check. Fixed the relevant commits
now.
This reverts commit 8d24e9d9b8ad1b5d82139980eca148dc0f4a1eab.
diff --git a/video/out/gl_video.c b/video/out/gl_video.c
index 9c8a643..f1ea03e 100644
--- a/video/out/gl_video.c
+++ b/video/out/gl_video.c
@@ -1034,9 +1034,9 @@ static void compile_shaders(struct gl_video *p)
shader_def_opt(&header_conv, "USE_CONV_GAMMA", use_conv_gamma);
shader_def_opt(&header_conv, "USE_CONST_LUMA", use_const_luma);
shader_def_opt(&header_conv, "USE_LINEAR_LIGHT_BT1886",
- gamma_fun == MP_CSP_TRC_BT_1886);
+ use_linear_light && gamma_fun == MP_CSP_TRC_BT_1886);
shader_def_opt(&header_conv, "USE_LINEAR_LIGHT_SRGB",
- gamma_fun == MP_CSP_TRC_SRGB);
+ use_linear_light && gamma_fun == MP_CSP_TRC_SRGB);
shader_def_opt(&header_conv, "USE_SIGMOID", use_sigmoid);
if (p->opts.alpha_mode > 0 && p->has_alpha && p->plane_count > 3)
shader_def(&header_conv, "USE_ALPHA_PLANE", "3");
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Breaks vo_opengl by default. I'm not able to fix this myself, because I
have no clue about the overcomplicated color management logic. Also,
while this is apparently caused by commit fbacd5, the following commits
all depend on it, so revert them too.
This reverts the following commits:
e141caa97dade07f4d7e0d6c208bcd3493e712ed
653b0dd5295453d9661f673b4ebd02c5ceacf645
729c8b3f641e633474be612e66388c131a1b5c92
fbacd5de31de964f7cd562304ab1c9b4a0d76015
Fixes #1636.
|
|
|
|
|
| |
We now actually use the TRC tagging information lavc provides us with,
instead of always manually guessing.
|
|
|
|
|
|
|
| |
Remove coded_width and coded_height. This was originally added in commit
fd7dde40, when BITMAPINFOHEADER was killed. The separate fields became
redundant in commit e68f4be1. Remove them (nothing passed to the
decoders actually changes with _this_ commit).
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
A recent behavior change in libavcodec's h264 decoder keeps at least 1
surface even after avcodec_flush_buffers() has been called. We used to
flush the decoder in order to make sure all surfaces are free'd, so that
the hw decoder can be safely uninitialized. This doesn't work anymore.
Fix it by closing the AVCodecContext before the hw decoder is
uninitialized. This is actually simpler and more robust. It seems to be
well-supported too.
Fixes invalid read accesses with vaapi-copy and dxva2-copy. These
destroyed the hwdec API fully on uninit, and could not deal with
surfaces surviving the decoder.
Probably fixes #1587.
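A minimal sketch of the new teardown order; the hw-state struct and the
hwdec_uninit() helper are assumptions, not mpv's actual names:
    #include <libavcodec/avcodec.h>

    struct hwdec;                          // mpv-side hw decoder state (assumed)
    void hwdec_uninit(struct hwdec *hw);   // hypothetical helper

    static void uninit_decoder(AVCodecContext *avctx, struct hwdec *hw)
    {
        // Closing the codec first releases every frame/surface libavcodec
        // still references, so the hw decoder can then be torn down safely.
        avcodec_close(avctx);
        hwdec_uninit(hw);
    }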
|
|
|
|
|
|
|
|
| |
The intention is that we can test vo_opengl with high bit depth PNGs
better. This throws libswscale completely out of the loop, which before
was needed in order to convert from big endian to little endian.
Also apply a minimal cleanup to fmt-conversion.c (unrelated).
|
|
|
|
| |
Also, don't use av_log() for mpv output.
|
|
|
|
| |
The ctx->pic check must uninitialize the decoder.
|
|
|
|
| |
Fixes #1337.
|
|
|
|
|
| |
This is "better", although nobody seems to know how this API is supposed
to work at all.
|
|
|
|
| |
Fixes #1307.
|
|
|
|
|
|
| |
This way, no surfaces are in use when uninitializing the hw decoders,
which might help with -copy hw decoders (normal hw decoding is not
affected).
|
|
|
|
|
| |
Shamelessly stolen from ffmpeg. It probably doesn't work - you can debug
it yourself.
|
|
|
|
|
| |
The private context wasn't free'd when codec init failed. Restructure
the code so that it can't happen.
|
|
|
|
|
|
| |
This was once central, but now it's almost unused. Only vf_divtc still
uses it for extremely weird and incomprehensible reasons. The use in
stream.c is trivial. Replace these, and remove mpbswap.h.
|
|
|
|
|
|
|
|
|
|
| |
MPlayer traditionally did this because it made sense: the most important
formats (avi, asf/wmv) used Microsoft formats, and many important
decoders (win32 binary codecs) also did. But the world has changed, and
I've always wanted to get rid of this thing from the codebase.
demux_mkv.c internally still uses it, because, guess what, Matroska has
a VfW muxing mode, which uses these data structures natively.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
This inserts an automatic conversion filter if a Matroska file is marked
as 3D (StereoMode element). The basic idea is similar to video rotation
and colorspace handling: the 3D mode is added as a property to the video
params. Depending on this property, a video filter can be inserted.
As of this commit, extending mp_image_params is actually completely
unnecessary - but the idea is that it will make it easier to integrate
with VOs supporting stereo 3D mogrification. Although vo_opengl does
support some stereo rendering, it didn't support the mode my sample file
used, so I'll leave that part for later.
Note that most mappings from Matroska mode to vf_stereo3d mode are
probably wrong, and some are missing.
Assuming that Matroska modes, vf_stereo3d input modes, and vf_stereo3d
output modes are all the same might be an oversimplification - we'll see.
See issue #1045.
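Purely to illustrate the shape of the mapping (this is not mpv's actual
table, and only two example entries are shown; values and names may be
wrong, as noted above):
    // Maps a Matroska StereoMode number to a vf_stereo3d format name.
    static const char *const mkv_stereo_to_vf[] = {
        [0] = "mono",   // example entry
        [1] = "sbsl",   // example entry: side by side, left eye first
        /* ... remaining StereoMode values ... */
    };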
|
|
|
|
|
|
|
|
|
| |
bstr.c doesn't really deserve its own directory, and compat had just
a few files, most of which may as well be in osdep. There isn't really
any justification for these extra directories, so get rid of them.
The compat/libav.h was empty - just delete it. We changed our approach
to API compatibility, and will likely not need it anymore.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
So talking to a certain Intel dev, it sounded like modern VA-API drivers
are reasonably thread-safe. But apparently that is not the case. Not at
all. So add approximate locking around all vaapi API calls.
The problem appeared once we moved decoding and display to different
threads. That means the "vaapi-copy" mode was unaffected, but decoding
with vo_vaapi or vo_opengl led to random crashes.
Untested on real Intel hardware. With the vdpau emulation, it seems to
work fine - but actually it worked fine even before this commit, because
vdpau was written and designed not by morons, but competent people
(vdpau is guaranteed to be fully thread-safe).
There is some probability that this commit doesn't fix things entirely.
One problem is that locking might not be complete. For one, libavcodec
_also_ accesses vaapi, so we have to rely on our own guesses how and
when lavc uses vaapi (since we disable multithreading when doing hw
decoding, our guess should be relatively good, but it's still a lavc
implementation detail). One other reason that this commit might not
help is Intel's amazing potential to fuck up anything that is good and
holy.
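Roughly what "approximate locking around all vaapi API calls" boils down
to - a sketch with an illustrative wrapper, not mpv's actual code:
    #include <pthread.h>
    #include <va/va.h>

    static pthread_mutex_t va_lock = PTHREAD_MUTEX_INITIALIZER;

    // Every direct VA-API call goes through a wrapper like this one.
    static VAStatus locked_vaSyncSurface(VADisplay dpy, VASurfaceID surface)
    {
        pthread_mutex_lock(&va_lock);
        VAStatus res = vaSyncSurface(dpy, surface);
        pthread_mutex_unlock(&va_lock);
        return res;
    }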
|
|
|
|
| |
This makes a certain corner case simpler at a later point.
|
|
|
|
|
|
|
|
|
| |
Completely useless, and could accidentally be enabled by cycling
framedrop modes. Just get rid of it.
But still allow triggering the old code with --vd-lavc-framedrop, in
case someone asks for it. If nobody does, this new option will be
removed eventually.
|
|
|
|
|
|
|
|
| |
Use OPT_KEYVALUELIST() for all places where AVOptions are directly set
from mpv command line options. This allows escaping values, better
diagnostics (also no more "pal"), and somehow reduces code size.
Remove the old crappy option parser (av_opts.c).
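What applying such a key/value list amounts to in terms of the public
AVOptions API (a sketch; the kv array layout and error handling are
assumptions):
    #include <libavutil/opt.h>

    // kv: NULL-terminated list of alternating keys and values.
    static void set_avopts(void *avobj, char **kv)
    {
        for (int i = 0; kv && kv[i] && kv[i + 1]; i += 2) {
            if (av_opt_set(avobj, kv[i], kv[i + 1], AV_OPT_SEARCH_CHILDREN) < 0) {
                // warn about the rejected option here (better diagnostics)
            }
        }
    }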
|
|
|
|
|
|
|
| |
This adds support for reading color primaries information from lavc, categorized
into BT.601-525, BT.601-625, BT.709 and BT.2020; and passes it on to the
vo. In vo_opengl, we always generate the 3dlut against the wider BT.2020
and transform our source into this colorspace in the shader.
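A rough sketch of the categorization, going from libavcodec's
AVColorPrimaries tags to the four categories above (the mpv-side
representation is reduced to a string here purely for illustration):
    #include <libavcodec/avcodec.h>

    static const char *categorize_primaries(const AVCodecContext *avctx)
    {
        switch (avctx->color_primaries) {
        case AVCOL_PRI_BT709:      return "BT.709";
        case AVCOL_PRI_BT2020:     return "BT.2020";
        case AVCOL_PRI_BT470BG:    return "BT.601-625";
        case AVCOL_PRI_SMPTE170M:
        case AVCOL_PRI_SMPTE240M:  return "BT.601-525";
        default:                   return "unknown";   // untagged source
        }
    }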
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Until now, failure to allocate image data resulted in a crash (i.e.
abort() was called). This was intentional, because it's pretty silly to
degrade playback, and in almost all situations, the OOM will probably
kill you anyway. (And then there's the standard Linux overcommit
behavior, which also will kill you at some point.)
But I changed my opinion, so here we go. This change does not affect
_all_ memory allocations, just image data. Now in most failure cases,
the output will just be skipped. For video filters, this coincidentally
means that failure is treated as EOF (because the playback core assumes
EOF if nothing comes out of the video filter chain). In other
situations, output might be in some way degraded, like skipping frames,
not scaling OSD, and such.
Functions whose return values changed semantics (see the sketch after this list):
mp_image_alloc
mp_image_new_copy
mp_image_new_ref
mp_image_make_writeable
mp_image_setrefp
mp_image_to_av_frame_and_unref
mp_image_from_av_frame
mp_image_new_external_ref
mp_image_new_custom_ref
mp_image_pool_make_writeable
mp_image_pool_get
mp_image_pool_new_copy
mp_vdpau_mixed_frame_create
vf_alloc_out_image
vf_make_out_image_writeable
glGetWindowScreenshot
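A minimal sketch of what the changed semantics mean for a caller of one
of the functions above, inside some function that produces an image
(imgfmt/w/h stand in for the caller's actual values):
    struct mp_image *img = mp_image_alloc(imgfmt, w, h);
    if (!img)
        return NULL;   // allocation failed: skip this output / signal EOF upstream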
|
|
|
|
|
|
|
| |
This means use of the min/max fields can be dropped for the flag option
type, which makes some things slightly easier. I'm also not sure if the
client API handled the case of flag not being 0 or 1 correctly, and this
change gets rid of this concern.
|
| |
|
|
|
|
|
| |
Removes specifics from options.h and options.c, and puts everything into
vd_lavc.c.
|
|
|
|
|
|
|
| |
While I'm not very fond of "const", it's important for declarations
(it decides whether a symbol is emitted in a read-only or read/write
section). Fix all these cases, so we have writeable global data only
when we really need it.
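A small illustration of why it matters for definitions:
    struct entry { const char *name; int id; };

    // With const, the table can be placed in a read-only section (.rodata);
    // without it, the same data ends up in writable memory (.data).
    static const struct entry table_ro[] = { {"a", 1}, {"b", 2} };
    static struct entry       table_rw[] = { {"a", 1}, {"b", 2} };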
|
|
|
|
| |
Set the bitrate of dec_video if it is available in avcodec.
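Roughly what this amounts to (the dec_video field name is an assumption):
    if (avctx->bit_rate > 0)
        d_video->bitrate = avctx->bit_rate;   // bitrate as reported by libavcodec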
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
mpv supports two hardware decoding APIs on Linux: vdpau and vaapi. Each
of these has emulation wrappers. The wrappers are usually slower and
have fewer features than their native counterparts. In particular the libva
vdpau driver is practically unmaintained.
Check the vendor string and print a warning if emulation is detected.
Checking vendor strings is a very stupid thing to do, but I find the
thought of people using an emulated API for no reason worse.
Also, make --hwdec=auto never use an API that is detected as emulated.
This doesn't work quite right yet, because once one API is loaded,
vo_opengl doesn't unload it, so no hardware decoding will be used if the
first probed API (usually vdpau) is rejected. But good enough.
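A minimal sketch of the vendor-string check for the VA-API side, assuming
an already-open VADisplay; the substring that identifies the vdpau-backed
wrapper driver is a guess:
    #include <string.h>
    #include <va/va.h>

    static int is_emulated_vaapi(VADisplay dpy)
    {
        const char *vendor = vaQueryVendorString(dpy);
        return vendor && strstr(vendor, "VDPAU") != NULL;
    }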
|
| |
|
|
|
|
|
| |
Hwaccel 1.2 populates only the third data field and assumes
that the AVCodecContext is available to the dealloc function.
|
| |
|
|
|
|
|
| |
Now the rotation hint is propagated everywhere. It just isn't used
anywhere yet.
|
|
|
|
|
| |
Needed in theory. I don't know if there are even any real-world files
which change the profile mid-stream.
|
| |
|
|
|
|
|
|
|
|
|
|
| |
Instead of doing it on every seek (libavcodec calls get_format on every
seek), reinitialize the decoder only if the video resolution changes.
Note that this may be relatively naive, since we e.g. (or: in
particular) don't check for profile changes. But it's not worse than the
state before the get_format change, and at least it paints over the
current vaapi breakage (issue #646).
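A sketch of the idea inside the get_format callback; the private context
struct and the reinit helper are assumptions, not mpv's actual code:
    #include <libavcodec/avcodec.h>

    struct lavc_ctx { int hw_w, hw_h; };                          // hypothetical state
    void reinit_hw_decoder(struct lavc_ctx *ctx, int w, int h);   // hypothetical helper

    static enum AVPixelFormat get_format_cb(AVCodecContext *avctx,
                                            const enum AVPixelFormat *fmt)
    {
        struct lavc_ctx *ctx = avctx->opaque;
        // Only reinitialize the hw decoder when the coded size actually changed.
        if (avctx->width != ctx->hw_w || avctx->height != ctx->hw_h) {
            reinit_hw_decoder(ctx, avctx->width, avctx->height);
            ctx->hw_w = avctx->width;
            ctx->hw_h = avctx->height;
        }
        return fmt[0];   // simplified: pick the hw format from fmt[] as before
    }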
|
|
|
|
|
|
|
| |
All this code was needed for compatibility with very old libavcodec
versions only (such as Libav 9).
Includes some now-possible simplifications too.
|
|
|
|
|
|
|
|
| |
This "sometimes" crashed when seeking. The fault apparently lies in
libavcodec: the decoder returns an unreferenced frame! This is
completely insane, but somehow I'm apparently still expected to
work around this. As a reaction, I will drop Libav 9 support in the
next commit. (While this commit will go into release/0.3.)
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Apparently the "right" place to initialize the hardware decoder is in
the libavcodec get_format callback.
This doesn't change vda.c and vdpau_old.c, because I don't have OSX, and
vdpau_old.c is probably going to be removed soon (if Libav ever manages
to release Libav 10). So for now the init_decoder callback added with
this commit is optional.
This also means vdpau.c and vaapi.c don't have to manage and check the
image parameters anymore.
This change is probably needed for when libavcodec's VDA support gets a
new iteration of its API.
|
|
|
|
|
|
|
|
|
|
|
| |
Like with the previous commit, this is probably not needed, but it's
unclear whether that really is the case. Most likely, it used to be
needed by some demuxer, and now the only demuxer left that could
_possibly_ trigger this is demux_mkv.c.
Note that mjpeg is the only decoder that reads the extra_huff option,
and nothing in libavformat actually sets the option. So maybe it's
fundamentally not needed anymore.
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
This case can't happen with the normal realvideo codepath in
demux_mkv.c, because the code would error out if the extradata is too
small, and everything would be broken anyway in the case where the vd_lavc.c
condition is actually triggered.
It still might happen with VfW-muxed realvideo in Matroska, though.
Basically, I'm hoping this doesn't matter anyway, and that the vd_lavc.c
code was for other old demuxers, like demux_avi or demux_rm. Following
the commit history, it's not really clear for what demuxer this code
was added.
|
| |
|
| |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Set the flag CODEC_FLAG_OUTPUT_CORRUPT by default. Note that there is
also CODEC_FLAG2_SHOW_ALL, which is older, but this seems to be ffmpeg
only.
Note that whether you want this enabled depends on the user. Some might
prefer that only good frames are output, while others want the decoder
to try as hard as possible to output _anything_. Since mplayer/mpv is
rather the kind of player that tries hard instead of being "clever", set
the new default to override libavcodec's default.
A nice way to test this is switching video tracks. Since mpv doesn't
wait for the next key frame, it'll start feeding the decoder with a
packet from the middle of the stream.
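The change itself is essentially one line; the flag name is the one from
that era of libavcodec (newer releases spell it AV_CODEC_FLAG_OUTPUT_CORRUPT):
    avctx->flags |= CODEC_FLAG_OUTPUT_CORRUPT;   // output frames even if damaged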
|