| Commit message | Author | Age |
| |
Why is everything so horrible.
| |
This uses the normal autoprobing rules like "auto", but rejects anything
that isn't flagged as copying data back to system memory.
The chunk in command.c was dead code, so remove it instead of updating
it.
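
A minimal sketch of the idea (the struct, its "copying" flag and the
helper are hypothetical illustrations, not mpv's actual probing code):

    #include <stdbool.h>
    #include <stddef.h>

    struct hwdec_driver {
        const char *name;
        bool copying;   /* decoder copies frames back to system memory */
    };

    /* probe like "auto", but reject every non-copying backend */
    static const struct hwdec_driver *probe_auto_copy(
        const struct hwdec_driver *drivers, int num_drivers)
    {
        for (int n = 0; n < num_drivers; n++) {
            if (!drivers[n].copying)
                continue;
            /* normal autoprobing would try to initialize the driver here */
            return &drivers[n];
        }
        return NULL;
    }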
| |
We don't have any reason to disable either. Both are loaded dynamically
at runtime anyway. There is also no reason why dxva2 would disappear
from libavcodec any time soon.
| |
This uses EGL_ANGLE_stream_producer_d3d_texture_nv12 and related
extensions to map the D3D textures coming from the hardware decoder
directly in GL.
In theory this would be trivial to achieve, but unfortunately ANGLE does
not have a mechanism to "import" D3D textures as GL textures. Instead,
an awkward mechanism via EGL_KHR_stream was implemented, which involves
at least 5 extensions and a lot of glue code. (Even worse than VAAPI EGL
interop, and very far from the simplicity you get on OSX.)
The ANGLE mechanism so far supports only the NV12 texture format, which
means 10 bit won't work. It also does not work in ES3 mode yet. For
these reasons, the "old" ID3D11VideoProcessor code is kept and used as a
fallback.
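
For reference, a heavily condensed sketch of what the stream glue boils
down to (attribute lists, error handling and the ES3 limitation are
omitted; everything except the EGL entry point names is made up):

    #include <EGL/egl.h>
    #include <EGL/eglext.h>

    /* The ANGLE producer extension is not in the stock Khronos headers, so
     * its function pointer type is spelled out manually. */
    typedef EGLBoolean (EGLAPIENTRY *PostD3DTextureNV12ANGLE_fn)(
        EGLDisplay dpy, EGLStreamKHR stream, void *texture,
        const EGLAttrib *attrib_list);

    static PFNEGLSTREAMCONSUMERACQUIREKHRPROC StreamConsumerAcquireKHR;
    static PFNEGLSTREAMCONSUMERRELEASEKHRPROC StreamConsumerReleaseKHR;
    static PostD3DTextureNV12ANGLE_fn StreamPostD3DTextureNV12ANGLE;

    static void load_stream_functions(void)
    {
        StreamConsumerAcquireKHR = (PFNEGLSTREAMCONSUMERACQUIREKHRPROC)
            eglGetProcAddress("eglStreamConsumerAcquireKHR");
        StreamConsumerReleaseKHR = (PFNEGLSTREAMCONSUMERRELEASEKHRPROC)
            eglGetProcAddress("eglStreamConsumerReleaseKHR");
        StreamPostD3DTextureNV12ANGLE = (PostD3DTextureNV12ANGLE_fn)
            eglGetProcAddress("eglStreamPostD3DTextureNV12ANGLE");
    }

    /* Per frame. The stream itself was created with eglCreateStreamKHR(),
     * and its consumer side bound to two GL textures (Y and UV planes) with
     * eglStreamConsumerGLTextureExternalAttribsNV() during init. */
    static void present_frame(EGLDisplay display, EGLStreamKHR stream,
                              void *d3d_texture /* ID3D11Texture2D* */)
    {
        /* producer side: hand the decoder's texture to the stream */
        StreamPostD3DTextureNV12ANGLE(display, stream, d3d_texture, NULL);
        /* latch the frame; the GL textures sample the video planes until
         * StreamConsumerReleaseKHR() gives the frame back */
        StreamConsumerAcquireKHR(display, stream);
    }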
| |
The main change is with video/hwdec.h. mp_hwdec_info is made opaque (and
renamed to mp_hwdec_devices). Its accessors are mainly thread-safe (or
documented where not), which makes the whole thing saner and cleaner. In
particular, thread-safety rules become less subtle and more obvious.
The new internal API makes it easier to support multiple OpenGL interop
backends. (Although this is not done yet, and it's not clear whether it
ever will be.)
This also removes all the API-specific fields from mp_hwdec_ctx and
replaces them with a "ctx" field. For d3d in particular, we drop the
mp_d3d_ctx struct completely, and pass the interfaces directly.
Remove the emulation checks from vaapi.c and vdpau.c; they are
pointless, and the checks that matter are done on the VO layer.
The d3d hardware decoders might slightly change behavior: dxva2-copy
will not use the VO device anymore if the VO supports proper interop.
This pretty much assumes that in such cases the VO will not use any
form of exclusive mode, which makes using the VO device in copy mode
unnecessary.
This is a big refactor. Some things may be untested and could be broken.
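
A hypothetical sketch of the pattern (not the actual mpv declarations):
the device list stays opaque to users, the accessors do the locking, and
API-specific state hangs off a single untyped "ctx" pointer:

    #include <pthread.h>
    #include <stddef.h>

    struct mp_hwdec_ctx {
        const char *driver_name;
        void *ctx;              /* API-specific, e.g. an IDirect3DDevice9* */
    };

    /* opaque to API users; the definition lives in one .c file */
    struct mp_hwdec_devices {
        pthread_mutex_t lock;
        struct mp_hwdec_ctx *hwctx;
    };

    /* thread-safe accessor */
    static struct mp_hwdec_ctx *hwdec_devices_get(struct mp_hwdec_devices *devs)
    {
        pthread_mutex_lock(&devs->lock);
        struct mp_hwdec_ctx *res = devs->hwctx;
        pthread_mutex_unlock(&devs->lock);
        return res;
    }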
| |
Fixes hardware decoding of most mpeg2 things.
| |
Including initguid.h at the top of a file that references GUIDs causes
the GUIDs to be defined globally with __declspec(selectany). The
'selectany' attribute tells the linker to consolidate multiple
definitions of each GUID, which would be great except that, in Cygwin
and MinGW GCC 6.1, this method of linking makes the GUIDs conflict with
the ones declared in libuuid.a.
Since initguid.h obsoletes libuuid.a in modern compilers that support
__declspec(selectany), add initguid.h to all files that use GUIDs and
remove libuuid.a from the build.
Fixes #3097
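
For illustration, the include order this implies in an affected source
file (d3d9.h/dxva2api.h stand in for whatever headers declare the GUIDs
being used):

    /* initguid.h must come first: it makes DEFINE_GUID in the following
     * headers emit actual __declspec(selectany) definitions rather than
     * extern declarations, so linking against libuuid.a is unnecessary. */
    #include <initguid.h>
    #include <d3d9.h>
    #include <dxva2api.h>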
| |
fixes #3092
| |
Slight simplification, IMHO.
| |
In particular, this moves the depth test to common code.
Should be functionally equivalent, except that for DXVA2, the
IDirectXVideoDecoderService_GetDecoderRenderTargets API is potentially
called more often.
| |
Gets rid of some silliness, and might be useful in the future.
| |
Basically this gets rid of the need for the accessors in d3d11va.h, and
the code can be cleaned up a little bit.
Note that libavcodec only defines an ID3D11VideoDecoderOutputView
pointer in the last plane pointer, but it tolerates/passes through the
other plane pointers we set.
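
That last plane pointer is AVFrame.data[3]; schematically (the helper is
made up, the view type is the real D3D11 interface):

    #include <d3d11.h>
    #include <libavutil/frame.h>

    /* libavcodec only defines the meaning of data[3] for d3d11va frames;
     * the remaining pointers are passed through untouched, so the decoder
     * glue is free to stash extra data in them. */
    static ID3D11VideoDecoderOutputView *get_output_view(const AVFrame *frame)
    {
        return (ID3D11VideoDecoderOutputView *)frame->data[3];
    }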
| |
We want to prefer d3d11va over anything dxva2. But since dxva2 copyback
is currently more efficient than d3d11va's, d3d11va-copy should come
last.
| |
This uses ID3D11VideoProcessor to convert the video to an RGBA surface,
which is then bound to ANGLE. Currently ANGLE does not provide any way
to bind NV12 surfaces directly, so this will have to do.
ID3D11VideoContext1 would give us slightly more control over the
colorspace conversion, though it's still not good, and it is not
available in MinGW headers yet.
The video processor is created lazily, because we need to have the coded
frame size, which AVFrame and mp_image have no concept of. Doing the
creation lazily is less of a pain than somehow hacking the coded frame
size into mp_image.
I'm not really sure how ID3D11VideoProcessorInputView is supposed to
work. We recreate it on every frame, which is simple and hopefully
doesn't affect performance.
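
A condensed sketch of the per-frame path (the enumerator, the processor
itself and the RGBA output view are created lazily at init and omitted
here; parameter names are illustrative and error handling is stripped):

    #define COBJMACROS
    #include <windows.h>
    #include <d3d11.h>

    static void blit_frame(ID3D11VideoDevice *vdev, ID3D11VideoContext *vctx,
                           ID3D11VideoProcessor *vp,
                           ID3D11VideoProcessorEnumerator *vp_enum,
                           ID3D11VideoProcessorOutputView *out_view,
                           ID3D11Texture2D *decoder_tex, UINT subindex)
    {
        /* the input view is recreated for every frame, as described above */
        D3D11_VIDEO_PROCESSOR_INPUT_VIEW_DESC vdesc = {
            .FourCC = 0,    /* 0: use the texture's own format (NV12) */
            .ViewDimension = D3D11_VPIV_DIMENSION_TEXTURE2D,
            .Texture2D = { .ArraySlice = subindex },
        };
        ID3D11VideoProcessorInputView *in_view = NULL;
        ID3D11VideoDevice_CreateVideoProcessorInputView(vdev,
            (ID3D11Resource *)decoder_tex, vp_enum, &vdesc, &in_view);

        D3D11_VIDEO_PROCESSOR_STREAM stream = {
            .Enable = TRUE,
            .pInputSurface = in_view,
        };
        ID3D11VideoContext_VideoProcessorBlt(vctx, vp, out_view, 0, 1, &stream);
        ID3D11VideoProcessorInputView_Release(in_view);
    }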
| |
I guess this won't ever be fixed properly in FFmpeg. Too hairy, and the
alternative (using VideoToolbox as "full decoder") is too attractive.
| |
These were for ancient libavcodec versions.
| |
For Mediacodec in particular we don't care about the format; it can just
decode to whatever it wants. The only case we care about is making sure
it does not return an opaque format when we don't have proper interop,
but libavcodec always returns non-opaque formats by default.
| |
Use the recently added lavc_suffix mechanism to select the wrapper
decoder.
With all hwdec callbacks being optional, and RPI/Mediacodec having only
dummy callbacks, all the callbacks can be removed as well.
The result is that the vd_lavc_hwdec struct for both of them is tiny.
It's better to move them to vd_lavc.c directly, because they are so
trivial and small.
| |
This is a bit sketchy, as there isn't a truly standard way to
communicate the timebase.
| |
This is intended for cases when --hwdec needs to override the decoder
implementation in use, like for example on the RPI.
It does two things:
1. Allow the hwdec to indicate a decoder suffix. libavcodec by
convention adds a suffix to all wrapper decoders, and here we start
relying on it. While not necessarily the best idea, it's the only
thing we've got. libavcodec's hwaccel list is useless, because it only
has the codec ID, not the associated decoder's name.
2. Make --hwdec=auto work properly. It shouldn't fail anymore, and hwdec
probing should reliably work, even if a different decoder is selected
with --vd. The semantics of --hwdec should dictate that it overrides
the default decoder.
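
The suffix convention amounts to a check along these lines (hypothetical
helper, not the actual mpv code):

    #include <stdbool.h>
    #include <string.h>

    /* does the libavcodec decoder name end with the suffix the hwdec
     * declared, e.g. "h264_mmal" with "_mmal"? */
    static bool decoder_matches_suffix(const char *decoder, const char *suffix)
    {
        size_t dl = strlen(decoder), sl = strlen(suffix);
        return dl > sl && strcmp(decoder + dl - sl, suffix) == 0;
    }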
| |
In case of errors or whatever.
| |
Damn.
| |
This seems to cause problems, so only use it if H264_E is not available.
fixes #3059
| |
Until now, we have made the assumption that a driver will use only one
hardware surface format. The format is dictated by the driver (you
don't create surfaces with a specific format - you just pass an
rt_format and get a surface that will be in a specific driver-chosen
format).
In particular, the renderer created a dummy surface to probe the format,
and hoped the decoder would produce the same format. Due to a driver
bug this required a workaround to actually get the same format as the
driver did.
Change this so that the format is determined in the decoder. The format
is then passed down as hw_subfmt, which allows the renderer to configure
itself with the correct format. If the hardware surface changes its
format midstream, the renderer can be reconfigured using the normal
mechanisms.
This calls va_surface_init_subformat() each time after the decoder
returns a surface. Since libavcodec/AVFrame has no concept of sub-
formats, this is unavoidable. It creates and destroys a derived
VAImage, but this shouldn't have any bad performance effects (at
least I didn't notice any measurable effects).
Note that vaDeriveImage() failures are silently ignored, as some
drivers (the vdpau wrapper) support neither vaDeriveImage nor EGL
interop. In addition, we still probe whether we can map an image
in the EGL interop code. This is important as it's the only way
to determine whether EGL interop is supported at all. With respect
to the driver bug mentioned above, it doesn't matter which format
the test surface has.
In vf_vavpp, also remove the rt_format guessing business. I think the
existing logic was a bit meaningless anyway. It's not even a given
that vavpp produces the same rt_format for output.
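
A sketch of how the sub-format can be read off a decoded surface with
vaDeriveImage() (the helper name is made up; as noted above, failures
are simply ignored):

    #include <stdint.h>
    #include <va/va.h>

    static uint32_t query_surface_fourcc(VADisplay display, VASurfaceID surface)
    {
        VAImage image;
        if (vaDeriveImage(display, surface, &image) != VA_STATUS_SUCCESS)
            return 0;
        uint32_t fourcc = image.format.fourcc;   /* e.g. VA_FOURCC_NV12 */
        vaDestroyImage(display, image.image_id);
        return fourcc;
    }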
| |
Commit f009d16f accidentally broke it.
Thanks to RiCON for noticing and testing.
| |
The underlying intention of this code is to make changing
--videotoolbox-format at runtime work. For this reason, the format can't
just be statically setup, but must be read from the option at runtime.
This means the format is not fixed anymore, and we have to make sure the
renderer is properly reinitialized if the format changes. There is
currently no way to trigger reinit on this level, which is why the
mp_image_params.hw_subfmt field was introduced.
One sketchy thing remains: normally, the renderer is supposed to be
involved with VO format negotiation, which would ensure that the VO
can take the format at all. Since the hw_subfmt is not part of this
format negotiation, it's implied the get_vt_fmt() callback only
returns formats supported by the renderer. This is not necessarily
clear because vo_opengl checks this with converted_imgfmt separately.
None of this matters in practice though, because we know all formats
are always supported.
(This still requires somehow triggering decoder reinit to make the
change effective.)
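
Schematically, the reinit condition hw_subfmt enables looks like this
(made-up struct and helper, not the literal renderer code):

    #include <stdbool.h>

    struct interop_state {
        int current_subfmt;     /* hw_subfmt the renderer was set up for */
    };

    /* returns true if the caller has to tear down and recreate the interop */
    static bool need_reinit(struct interop_state *st, int hw_subfmt)
    {
        if (hw_subfmt == st->current_subfmt)
            return false;
        st->current_subfmt = hw_subfmt;
        return true;
    }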
| |
Until now, the presence of the process_image() callback was used to set
a delay queue with a hardcoded size. Change this to a vd_lavc_hwdec
field instead, so the decoder can explicitly set this if it's really
needed.
Do this so process_image() can be used in the VideoToolbox glue code for
something entirely unrelated.
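
The shape of the change, roughly (field and parameter names are
illustrative, not the exact mpv declarations):

    struct mp_image;
    struct lavc_ctx;

    struct vd_lavc_hwdec {
        const char *name;
        /* explicit queue depth; 0 disables the delay queue entirely */
        int delay_queue;
        struct mp_image *(*process_image)(struct lavc_ctx *ctx,
                                          struct mp_image *img);
    };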
| |
Some functions which expected a codec name (i.e. the name of the video
format itself) were passed a decoder name. Most "native" libavcodec
decoders have the same name as the codec, so this was never an issue.
This should mean that e.g. using "--vd=lavc:h264_mmal --hwdec=mmal"
should now actually enable native surface mode (instead of doing copy-
back).
| |
AVStream.codec is deprecated now, and you're supposed to use
AVStream.codecpar instead.
Handle this for all of the normal playback code.
Encoding mode isn't touched.
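
The replacement pattern, roughly (real FFmpeg API, made-up helper):

    #include <libavformat/avformat.h>
    #include <libavcodec/avcodec.h>

    /* read stream parameters from codecpar and copy them into a decoder
     * context we allocate ourselves */
    static AVCodecContext *open_decoder_for_stream(AVStream *st)
    {
        AVCodec *dec = avcodec_find_decoder(st->codecpar->codec_id);
        if (!dec)
            return NULL;
        AVCodecContext *avctx = avcodec_alloc_context3(dec);
        if (!avctx || avcodec_parameters_to_context(avctx, st->codecpar) < 0) {
            avcodec_free_context(&avctx);
            return NULL;
        }
        return avctx;
    }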
| |
This is basically a full rewrite to make it look more like d3d11va.c
| |
This commit adds the d3d11va-copy hwdec mode using the ffmpeg d3d11va
api. Functions in common with dxva2 are handled in a separate decode/d3d.c
file. A future commit will rewrite decode/dxva2.c to share this code.
| |
Does the same thing as the rpi one - makes fallback possible by
pretending that h264_mediacodec is a hwdec.
| |
For now only found in Libav.
| |
The mp_set_av_packet()/mp_pts_from_av() functions check whether the
timebase is set at all (i.e. AVRational.num!=0), so there's no need to
fiddle with pointers.
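
The check boils down to something like this (schematic, not the actual
function bodies):

    #include <stdint.h>
    #include <libavutil/rational.h>

    /* a zeroed AVRational means "no timebase set", so callers can always
     * pass the struct by value instead of an optional pointer */
    static double rescale_or_passthrough(int64_t ts, AVRational tb)
    {
        return tb.num ? ts * av_q2d(tb) : (double)ts;
    }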
| |
Regression since commit 6640b22a.
| |
Not quite sure when/why exactly this was broken.
| |
Instead of displaying it only on playback start (or after switching
tracks), always display it even after a seek.
This helps with --lavfi-complex. You can now overlay e.g. audio
visualizations over cover art, and it won't break after a seek.
The downside is that this might make seeks with huge cover art slower.
There is also a glitch on seeking: since cover art pictures always
have timestamp 0, the playback time will be 0 for a moment after seek,
and then revert to audio PTS (as video is considered EOF). This is also
due to how lavfi's overlay filter behaves. (I'm not sure how to tell
lavfi that it's just a single frame.)
| |
Almost only a cosmetic change, although it decreases pointless
referencing/dereferencing of the cover art packet too.
| |
Deselecting cover art and then reselecting it did not work: the second
time, the cover art picture was not displayed again. (This seems to
break every other month...)
The reason is commit 6640b22a. It mutates the input packet. And it is
correct that we don't own d_video->header->attached_picture at this
point. Fix it by creating a new packet reference.
| |
Completely pointless abominations that FFmpeg refuses to remove. They
are ancient, long deprecated API which we can't use anymore. They
confused users as well.
Pretend that they don't exist. Due to the way --vd works, they can't
even be forced anymore. The older hack which explicitly rejects these
can be dropped as well.
| |
Hr-seek was often off by one frame due to rounding issues, which have
traditionally been taken care of by adding a "tolerance". Essentially,
frames very close to the seek target PTS are not dropped, even if they
are strictly before the seek target.
Commit 0af53353 accidentally removed this by always removing frames even
if they're within the "tolerance". Fix this by "unsharing" the logic and
making sure the segment code is inactive for normal seeks.
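
The tolerance amounts to a comparison like the following (the constant
is illustrative, not the value mpv uses):

    #define HRSEEK_TOLERANCE 0.005   /* seconds */

    /* keep frames that are only a hair before the seek target instead of
     * dropping them */
    static int keep_frame(double frame_pts, double seek_pts)
    {
        return frame_pts >= seek_pts - HRSEEK_TOLERANCE;
    }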
| |
Instead of checking whether the format is a hwaccel format, check
whether it's the exact format we've requested for hardware decoding.
| |
Doing --hwdec=auto ends up picking dxva2, creating a decoder, and then
sending D3D frames down the video chain, which immediately fails and
falls back to software.
Consider dxva2 only if the VO provides a context. If this fails,
autoprobing will proceed to try dxva2-copy as usual.
Fixes #2844.
| |
This is in preparation for a hypothetical API change in libavcodec,
which would allow the decoder to return multiple video frames before
accepting a new input packet.
In theory, the body of the if() added to vd_lavc.c could be replaced
with this code:
    packet->buffer += ret;
    packet->len -= ret;
but currently this is not needed, as libavformat already outputs one
frame per packet. Also, using libavcodec this way could lead to a
"deadlock" if the decoder refuses to consume e.g. garbage padding, so
enabling this now would introduce bugs.
(Adding this now for easier testing, and for symmetry with the audio
code.)
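
A hypothetical shape of such a loop, for illustration only (none of this
is current mpv code, for the reasons given above):

    #include <stdint.h>
    #include <stddef.h>

    struct packet {
        uint8_t *buffer;
        size_t len;
    };

    /* decode_some() stands in for a decoder call that returns the number of
     * consumed bytes (or <0 on error) and sets *got_frame */
    extern int decode_some(struct packet *pkt, int *got_frame);
    extern void handle_frame(void);

    static void feed_packet(struct packet *packet)
    {
        while (packet->len > 0) {
            int got_frame = 0;
            int ret = decode_some(packet, &got_frame);
            if (ret <= 0)
                break;      /* error, or nothing consumed: avoid a "deadlock" */
            packet->buffer += ret;
            packet->len -= ret;
            if (got_frame)
                handle_frame();
        }
    }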
| |
There is some strange code which sets the DTS of the packet to PTS (but
only if it's not AVI), which apparently helps with timestamp
determination with some broken files. This code is annoying because it
tries to avoid mutating the packet (which it logically doesn't own).
Move it to a place that does own the packet, and get rid of the
packet_copy mess.
Needed for the following commit.
| |
This tries to determine whether packet PTS values are accurate and can
be used for frame dropping during seeking. Move both checks (PTS is
missing; PTS is non-monotonic) to the earliest place where they can be
done.
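
Both checks reduce to something like this (schematic; NOPTS stands in
for the real "no timestamp" constant):

    #include <stdbool.h>

    #define NOPTS (-1e300)

    struct pts_check {
        double last_pts;    /* initialized to NOPTS */
        bool broken;
    };

    static void check_packet_pts(struct pts_check *c, double pts)
    {
        if (pts == NOPTS)
            c->broken = true;                          /* PTS missing */
        else if (c->last_pts != NOPTS && pts < c->last_pts)
            c->broken = true;                          /* PTS non-monotonic */
        c->last_pts = pts;
    }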