Only do this when the number of threads is autodetected; using more
than 16 threads is still not recommended. (libavcodec prints a
warning.)
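A minimal sketch of the idea; the helper, the MAX_AUTO_THREADS
constant, and its placement are assumptions, not mpv's actual
vd_lavc.c code:

    #include <libavcodec/avcodec.h>
    #include <libavutil/cpu.h>

    // Assumed cap: libavcodec warns when more than 16 threads are used.
    #define MAX_AUTO_THREADS 16

    static void set_thread_count(AVCodecContext *avctx, int requested)
    {
        if (requested == 0) {
            // Autodetected case: derive the count from the CPU count,
            // but clamp it to stay within the recommended range.
            int n = av_cpu_count();
            avctx->thread_count = n > MAX_AUTO_THREADS ? MAX_AUTO_THREADS : n;
        } else {
            // Explicit user setting: pass it through unchanged.
            avctx->thread_count = requested;
        }
    }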
|
This reverts commit fb8d15836695e883355c5ec6ff8463e7bbf39461.
Reallocating the FBOs on every resize is very slow. It affects resizing
the window, as well as changing the video size itself with e.g.
panscan. Since the original change was made in response to a single
user complaint, while the change itself caused a lot of complaints, we
decided to just revert it.
|
In particular, get rid of the EUSERBROKEN message.
|
Instead of calling it "future frames" and adding or subtracting 1 from
it, always call it "requested frames". This simplifies things a bit.
MPContext.next_frames had 2 added to it, mainly to ensure a minimum
size of 2. Drop that and assume VO_MAX_REQ_FRAMES is at least 2;
together with the other changes, VO_MAX_REQ_FRAMES can then be the
exact size of the array.
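A minimal sketch of the resulting invariant; the value 10 and these
exact declarations are assumptions, not mpv's real vo.h:

    struct mp_image;

    // Assumed value of the constant defined in vo.h.
    #define VO_MAX_REQ_FRAMES 10

    // The queue can now be exactly this size: no +1/-1 adjustments and
    // no extra padding to guarantee a minimum.
    struct mp_image *next_frames[VO_MAX_REQ_FRAMES];

    // The player core assumes the current frame plus one more always fit.
    _Static_assert(VO_MAX_REQ_FRAMES >= 2, "VO_MAX_REQ_FRAMES must be >= 2");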
|
This was requested multiple times by users, and it's not hard to
implement and/or maintain.
|
This was supposed to have been changed back when oversample was
reintroduced in 3007250. Fixes #2155.
|
This was requested by someone.
All code was written by myself; some minor changes are by two
contributors who agreed to the general LGPL relicensing. One line of
code is by someone unknown who possibly wasn't asked (setting the
"display_fps" variable); it can be reasonably ignored, as it makes up
only 0.1% of the file.
|
I still have no idea why this is needed; maybe some weird off-by-one
in some shitty driver? Either way, the difference for a working setup
shouldn't be too major. The most noticeable effect would be somewhat
worse performance when resizing the video with the mouse during
playback with interpolation enabled. That's a specific enough side
effect for me not to care much about it.
Fixes #1814.
|
Generates too much discussion and confusion.
Fixes #2051.
|
There are some situations where redrawing is requested, but the current
frame was deleted. This can happen e.g. when switching hw decoding
mid-stream.
Separate uploading from drawing, and fix the condition.
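A hedged sketch of the separated logic; the struct fields and helper
functions below are illustrative, not gl_video.c's actual internals:

    #include <stdbool.h>

    struct mp_image;

    // Illustrative state; the real fields in gl_video.c differ.
    struct gl_video {
        struct mp_image *pending_image; // new frame not yet uploaded
        bool have_uploaded_image;       // something was uploaded earlier
    };

    void upload_image(struct gl_video *p, struct mp_image *mpi); // hypothetical
    void draw_video(struct gl_video *p);                         // hypothetical
    void clear_screen(struct gl_video *p);                       // hypothetical

    static void redraw(struct gl_video *p)
    {
        // Upload only when there is a pending frame; a redraw request
        // after the current frame was deleted (e.g. hw decoding switched
        // mid-stream) must not touch a freed image.
        if (p->pending_image) {
            upload_image(p, p->pending_image);
            p->pending_image = NULL;
            p->have_uploaded_image = true;
        }
        // Drawing is now separate from uploading: render whatever was
        // uploaded last, or clear if nothing is available.
        if (p->have_uploaded_image)
            draw_video(p);
        else
            clear_screen(p);
    }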
|
Just avoid some code duplication. Also, gl_video_set_options() having a
queue size output parameter is weird at best. While I don't appreciate
that this commit suddenly requires gl_video.c to deal with vo.c directly
in a special case, it's simply the best place to put this function.
|
That was 2 too many.
Also fix a documentation comment.
|
The VO will be provided with future frames even if the format changes
mid-stream. This caused a crash if these frames were actually used
(i.e. interpolation mode was enabled).
Fixes a crash when deinterlacing is toggled during playback and the
deinterlacer changes the stream format (as can happen e.g. if the
decoder outputs nv12, which in turn happens with hw decoding).
(On a side note, future frames are always non-NULL. Also, the current
frame is of course always in the correct format.)
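A hedged sketch of the kind of check this needs; mp_image_params_equal()
is mpv's real helper, the surrounding function is illustrative:

    #include "video/mp_image.h" // mpv-internal header

    static int filter_future_frames(struct mp_image *current,
                                    struct mp_image **future, int num_future)
    {
        int n = 0;
        for (int i = 0; i < num_future; i++) {
            // Drop queued frames whose format no longer matches the
            // current frame; using them for interpolation would crash.
            if (mp_image_params_equal(&future[i]->params, &current->params))
                future[n++] = future[i];
        }
        return n; // number of still-usable future frames
    }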
|
Appears to be required by some hardware. Whatever.
|
vaQueryImageFormats() returns a randomly ordered list, so we shouldn't
assume that the first format on the list which works is the best. This
effectively switches to nv12 instead of yuv420p on some drivers.
We handle this by reusing va_to_imgfmt[] and ordering it by preference.
We hardcode that GPUs prefer nv12 over yuv420p. In theory we could do
complicated probing (allocate a dummy surface, use vaDeriveImage on it,
then retrieve the FourCC), but all the things that could break this
assumption in the future (like 10 bit or 4:4:4) are not supported yet,
so this is fine.
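A sketch of preference-ordered probing, using the real libva entry
points; the table contents and the function itself are illustrative,
not mpv's actual va_to_imgfmt[]:

    #include <stdint.h>
    #include <stdlib.h>
    #include <va/va.h>

    // Ordered by preference: GPUs generally prefer nv12 over yuv420p.
    static const uint32_t preferred_fourccs[] = {
        VA_FOURCC_NV12,
        VA_FOURCC_YV12,
    };

    static uint32_t pick_image_format(VADisplay dpy)
    {
        int num = vaMaxNumImageFormats(dpy);
        VAImageFormat *fmts = calloc(num, sizeof(*fmts));
        if (!fmts || vaQueryImageFormats(dpy, fmts, &num) != VA_STATUS_SUCCESS)
            num = 0;
        uint32_t found = 0;
        // Walk our preference list, not the driver's randomly ordered one.
        size_t nprefs = sizeof(preferred_fourccs) / sizeof(preferred_fourccs[0]);
        for (size_t p = 0; p < nprefs && !found; p++) {
            for (int i = 0; i < num; i++) {
                if (fmts[i].fourcc == preferred_fourccs[p]) {
                    found = preferred_fourccs[p];
                    break;
                }
            }
        }
        free(fmts);
        return found; // 0 if no preferred format is available
    }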
|
This reverts commit d660e67be9cc7d79d81e0c09c2720ea6d0a35e3a.
Fixes #2123.
|
Fixes problems with --vo=opengl:interpolation. The issue here is that
vo_opengl retains more surfaces than were preallocated for the decoder.
Until now, we just explicitly failed to decode frames for which no
additional surfaces were available. Since modern drivers are usually
fine with surfaces not being "registered" before the decoder is
created, just allow allocating additional surfaces when needed.
(We could probably also recreate the HW decoder, since the HW decoder
should be stateless. But let's try to avoid raising the overall
complexity of the code.)
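A sketch under the assumption of a simple pool wrapper; all names here
are illustrative, not mpv's actual surface-pool API:

    #include <stdbool.h>
    #include <va/va.h>

    // Illustrative pool; mpv's real bookkeeping differs.
    struct surface_pool {
        VASurfaceID surfaces[64];
        bool in_use[64];
        int num_surfaces;
    };

    // Assumed helper that creates one more surface and adds it to the
    // pool; hypothetical.
    VASurfaceID alloc_and_register_surface(struct surface_pool *pool);

    static VASurfaceID get_surface(struct surface_pool *pool)
    {
        for (int i = 0; i < pool->num_surfaces; i++) {
            if (!pool->in_use[i]) {
                pool->in_use[i] = true;
                return pool->surfaces[i];
            }
        }
        // Previously: fail here and drop back to software decoding.
        // Modern drivers accept surfaces created after the decoder
        // exists, so grow the pool on demand instead.
        return alloc_and_register_surface(pool);
    }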
|
After recent changes, there is no reason why gl_video_set_image() should
exist anymore. So merge it back into gl_video_upload_image().
|
The interlaced frame test needs to be aware that the input mpi might be
NULL; this happens at the end of a stream, when the input frames have
all been submitted but frames still need to be drained from the
decoder.
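A minimal sketch of the NULL-safe test; MP_IMGFIELD_INTERLACED is mpv's
interlaced-frame flag, the wrapper function is illustrative:

    #include <stdbool.h>
    #include "video/mp_image.h" // mpv-internal header

    static bool frame_is_interlaced(struct mp_image *mpi)
    {
        // mpi can be NULL at EOF, when all input frames have been
        // submitted but queued output is still being drained.
        return mpi && (mpi->fields & MP_IMGFIELD_INTERLACED);
    }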
|
Outputting the detected OpenGL features was useless and redundant with
the extension loading output.
Also, remove MPGL_CAP_3D_TEX from OpenGL(ES) 3.0. This block didn't
include the glTexImage3D function, so that was pointless and couldn't
have worked. The OpenGL 2.1 block does it correctly.
|
VDPAU has explicit support for rotating surfaces, and as this is far
less expensive than using the normal rotation filter (which would
require reading video frames back into system memory), it is desirable
to implement the VO rotation capability.
To do this, we need to render the video frames to an output surface,
without rotation, and then render from that surface to the final
output surface to apply the rotation. It is important that the
intermediate surface is the same size as the final one (only not
rotated), so that hqscaling can be applied if requested by the user.
(hqscaling is a mixer capability, and so takes effect when the video
surface is rendered to an output surface.)
Finally, we must remember to explicitly clear the final output
surface, as VDPAU only auto-clears output surfaces when rendering video
surfaces.
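A hedged sketch of the two-pass rendering described above; the VDPAU
entry points and the rotation flag are real, while the context struct,
the clear helper, and the omitted error handling are illustrative:

    #include <vdpau/vdpau.h>

    // Illustrative context; mpv's vo_vdpau keeps these elsewhere.
    struct rotate_ctx {
        VdpVideoMixerRender *video_mixer_render;
        VdpOutputSurfaceRenderOutputSurface *render_output_surface;
        VdpVideoMixer mixer;
        VdpOutputSurface intermediate; // same size as the final surface
    };

    void clear_output_surface(struct rotate_ctx *ctx, VdpOutputSurface s); // hypothetical

    static void render_rotated(struct rotate_ctx *ctx, VdpVideoSurface video,
                               VdpOutputSurface final_surface)
    {
        // Pass 1: the mixer renders the frame, unrotated, to an
        // intermediate output surface the same size as the final one,
        // so mixer features such as hqscaling still take effect.
        ctx->video_mixer_render(ctx->mixer, VDP_INVALID_HANDLE, NULL,
                                VDP_VIDEO_MIXER_PICTURE_STRUCTURE_FRAME,
                                0, NULL, video, 0, NULL, NULL,
                                ctx->intermediate, NULL, NULL, 0, NULL);

        // Output surfaces are only auto-cleared when *video* surfaces
        // are rendered into them, so clear the final one explicitly.
        clear_output_surface(ctx, final_surface);

        // Pass 2: blit the intermediate surface to the final one,
        // applying the rotation (90 degrees as an example).
        ctx->render_output_surface(final_surface, NULL, ctx->intermediate,
                                   NULL, NULL, NULL,
                                   VDP_OUTPUT_SURFACE_RENDER_ROTATE_90);
    }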
|
Closes #2102.
|
Normally, vdpau decoded frames are passed directly to a suitable
vo (vo_vdpau or vo_opengl) without ever touching system memory. This
is efficient for output purposes, but prevents any of the regular
filters from being used with such frames.
This new filter implements a read-back step to pull the frames back
into system memory where they can be acted on by other filters.
Eventually the frames will be sent to the vo as if they were normal
software-decoded frames.
Note that a vdpau compatible vo must still be used to ensure that
the decoder is properly initialised.
Signed-off-by: wm4 <wm4@nowhere>
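A rough sketch of the read-back step, assuming NV12 output;
VdpVideoSurfaceGetBitsYCbCr is the real VDPAU entry point, everything
else is illustrative and error handling is minimal:

    #include <vdpau/vdpau.h>

    // Assumed function pointer obtained via VdpGetProcAddress at init.
    extern VdpVideoSurfaceGetBitsYCbCr *video_surface_get_bits_y_cb_cr;

    // Copy one decoded surface into pre-allocated NV12 system memory,
    // after which normal software filters can process the frame.
    static int read_back_nv12(VdpVideoSurface surface,
                              void *plane_y, void *plane_uv,
                              uint32_t stride_y, uint32_t stride_uv)
    {
        void *planes[2] = {plane_y, plane_uv};
        uint32_t pitches[2] = {stride_y, stride_uv};
        VdpStatus st = video_surface_get_bits_y_cb_cr(
            surface, VDP_YCBCR_FORMAT_NV12, planes, pitches);
        return st == VDP_STATUS_OK ? 0 : -1;
    }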
|
Fixes #2111.
|
Leave the libavfilter wrapper only.
|
Some code called by vf_vdpaupp.c calls mp_image_new_custom_ref(), but
out of convenience doesn't reset the buffers. Make this behavior ok.
(The assert() was there to catch usage errors, but the same error could
already happen before the refcount changes were made, so the check is
not overly helpful.)
Fixes #2115.
|
Drop libva versions below 0.34.0. These are ancient, so I don't care.
Drop the vo_vaapi deinterlacer as well. With 0.34.0, VPP is always
available, and deinterlacing is done with vf_vavpp.
The vaCreateSurfaces() function changed its signature around 0.34.0;
<va/va_compat.h> provided a macro to keep the old signature working.
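For reference, a hedged example call using the new signature (the
resolution, surface count, and wrapper function are arbitrary):

    #include <va/va.h>

    // Allocate four 1080p YUV 4:2:0 surfaces with the 0.34.0+ signature.
    static VAStatus create_surfaces(VADisplay dpy, VASurfaceID out[4])
    {
        return vaCreateSurfaces(dpy,
                                VA_RT_FORMAT_YUV420, // render target format
                                1920, 1080,          // width, height
                                out, 4,              // surface array + count
                                NULL, 0);            // optional attribute list
    }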
|
Sometime recently, hardware decoding started to fail if h264 with full
reference frames was decoded and --vo=vaapi was used. VAAPI requires
registering all surfaces that the decoder will ever use in advance, so
if the playback chain uses more surfaces than originally allocated, we
fail and drop back to software decoding.
I'm not really sure why or when this started happening. Commit
7b9d7265, for one, is not the cause; it can be reproduced with earlier
commits. It also seems to be timing dependent. Possibly it has to do
with the way vo.c retains previous surfaces, and the way they can be
queued/unqueued asynchronously.
Increasing the number of reserved additional surfaces by 1 fixes it.
(Though I have no idea where exactly all these surfaces are being used.
Or rather, _when_.)
|
Also remove the enabled suboption, which did nothing. (It was probably
broken at some point.)
|
See manpage additions. This is mainly useful for vo_opengl_cb, but can
also be applied to vo_opengl.
On a side note, gl_hwdec_load_api() should stop using a name string, and
instead always use the IDs. This should be cleaned up another time.
|
Now there's a "canonical" table for mapping the names that other code
can use, without having to rely too much on option code magic.
Also, use the central HWDEC constants instead of magic values. (There
used to be semi-ok reasons to do this, but now it makes no sense
anymore.)
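A hedged sketch of such a canonical table; the HWDEC_* names mirror
mpv's central constants, but this exact table and lookup are
illustrative:

    #include <string.h>

    struct hwdec_name {
        const char *name;
        int hwdec; // one of the central HWDEC_* constants
    };

    static const struct hwdec_name hwdec_names[] = {
        {"vdpau",      HWDEC_VDPAU},
        {"vaapi",      HWDEC_VAAPI},
        {"vaapi-copy", HWDEC_VAAPI_COPY},
        {0}
    };

    static int hwdec_from_name(const char *name)
    {
        for (int i = 0; hwdec_names[i].name; i++) {
            if (strcmp(hwdec_names[i].name, name) == 0)
                return hwdec_names[i].hwdec;
        }
        return -1; // unknown name
    }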
|
libmpv users might stop calling the frame render callback for stupid
reasons, at which point video frames would pile up.
|
Basically, we need to make sure to allocate enough data for the pretty
dumb copy_nv12 function. (It could be avoided by making the function
less dumb, but this fix is simpler.)
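For illustration, the minimum allocation such a row-by-row NV12 copy
needs (the helper is hypothetical; the NV12 plane layout is standard):

    #include <stddef.h>

    // NV12: full-size luma plane plus a half-height interleaved CbCr
    // plane; a dumb copy writes the full stride of every row.
    static size_t nv12_buffer_size(int stride, int height)
    {
        size_t luma   = (size_t)stride * height;
        size_t chroma = (size_t)stride * (height / 2);
        return luma + chroma; // allocate at least this much
    }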
|
mpv had refcounted frames before libav*, so we were not using
libavutil's facilities. Change this and drop our own code.
Since AVFrames themselves are not refcounted, only the image data they
reference, the semantics change a bit. This mainly affects
mp_image_pool, which was operating on whole images instead of buffers.
While we could work on AVBufferRefs instead (and use AVBufferPool),
this doesn't work for hardware decoding, which doesn't map cleanly to
FFmpeg's reference counting. But it worked out. One weird consequence
is that we still need our custom image data allocation function (for
normal image data), because AVFrame's own allocation uses multiple
buffers.
There also seems to be a timing-dependent problem with vaapi (the pool
appears to be "leaking" surfaces). I don't know if this is a new
problem, or whether the code changes just happened to cause it more
often. Raising the number of reserved surfaces seemed to fix it, but
since it appears to be timing dependent, and I couldn't find anything
wrong with the code, I'm just going to assume it's not a new bug.
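A small illustration of the AVFrame semantics mentioned above, using
the standard libavutil API (the wrapper itself is just an example):

    #include <libavutil/frame.h>

    // The frame struct is not refcounted; av_frame_ref() only bumps the
    // refcounts of the underlying AVBufferRefs.
    static AVFrame *make_ref(AVFrame *src)
    {
        AVFrame *dst = av_frame_alloc();
        if (!dst)
            return NULL;
        // dst is a new struct, but shares (refcounted) buffers with src.
        if (av_frame_ref(dst, src) < 0) {
            av_frame_free(&dst);
            return NULL;
        }
        return dst;
    }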
|
This is an obscure but theoretically possible bug.
|
This caused issues with hardware decoding. The VOs by definition
dictate the lifetime of the hardware context, so no surface allocation
may survive the VO. Fixes assertions on exit with vdpau.
|
This is basically a hack for drivers which prevent the mpv DXVA2
decoder glue from working if OpenGL is in fullscreen mode.
Since it doesn't add any "hard" new API to the client API, since some
of the code would be required for a true zero-copy hw decoding
pipeline anyway, and since it isn't too much code after all, this is
probably acceptable.
|
Since we still read back (and don't have hard plans on changing this),
this doesn't have much of an advantage.
|
Preparation for the following commit.
|
When seeking to a different position, and seeking takes long, the OSD
might get redrawn. This means the VO will receive a request to redraw
an old frame using whatever the previous PTS was. This breaks the
interpolation logic: the old frame will be added to the queue, and then
the next frames (with lower PTS, if you seeked backwards) are not
drawn, as the logic assumes they're past frames.
Fix this by using the non-interpolation code path when redrawing after
a seek reset, before any "real" frame has been drawn.
It's a recent regression caused by the redrawing code simplification.
The old code simply sent a VOCTRL for redrawing the frame, and the VO
had to deal with retaining the old frame on its own.
This is a hack, in the sense that there's probably a better solution.
Fixes #2097.
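A hedged sketch of the fix; the state struct and function names are
illustrative, not mpv's actual code:

    #include <stdbool.h>

    // Illustrative state; the real fields live elsewhere under
    // different names.
    struct redraw_state {
        bool interpolation_enabled;      // user option
        bool frame_rendered_since_reset; // cleared on seek reset
    };

    void draw_interpolated(void); // hypothetical
    void draw_plain(void);        // hypothetical

    static void redraw_current(struct redraw_state *st)
    {
        // After a seek reset, the queued PTS values are stale: frames
        // from the new position (possibly with lower PTS) would be
        // mistaken for already-shown past frames. Fall back to plain
        // redrawing until a real frame has been rendered again.
        if (st->interpolation_enabled && st->frame_rendered_since_reset)
            draw_interpolated();
        else
            draw_plain();
    }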
|
bother vo_vdpau.c, which actually uses these times.
|
Use the newer internal GL backend API.
|