| Commit message |
| |
This is mainly a refactor. I'm hoping it will make some things easier
in the future due to cleanly separating codec metadata and stream
metadata.
Also, declare that the "codec" field cannot be NULL anymore. demux.c
will set it to "" if it's NULL when added. This gets rid of a corner
case everything had to handle, but which rarely happened.
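(To illustrate the new invariant - a sketch only, not the literal demux.c code, with the struct reduced to the one relevant field:)

    struct sh_stream { const char *codec; /* ...other fields... */ };

    // Sketch: after a stream is added, "codec" is never NULL; demux.c
    // substitutes an empty string, so callers can drop the NULL checks.
    static void sanitize_codec(struct sh_stream *sh)
    {
        if (!sh->codec)
            sh->codec = "";
    }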
| |
Just so I can remove a few lines from dec_sub.c.
This is slightly inelegant, as the whole subtitle file has to be read
into memory, converted at once in memory, and then provided to
libavformat in an awkward way by creating a memory stream instead of
using demuxer->stream. It also won't be possible to force the charset on
subtitles in binary container formats - but this wasn't exposed before,
and we just hope this won't ever be needed. (One motivation was fixing
broken files with non-UTF-8 subtitles muxed in.) It also won't be possible
to change the charset on the fly, but this was not exposed either.
| |
Since commit 6d9cb893, subtitle state doesn't survive timeline switches
(ordered chapters etc.). So there is no point in caching the state per
sh_stream anymore (which would be required to deal with multiple
segments). Move the cache to struct track.
(Whether it's worth caching the subtitle state just for the situation
when subtitle tracks get reselected is questionable. But for now, it's
nice to have the subtitles immediately show up when reselecting a
subtitle.)
| |
MPlayer traditionally always used the display aspect ratio, e.g. 16:9,
while FFmpeg uses the sample (aka pixel) aspect ratio.
Both have a bunch of advantages and disadvantages. Actually, it seems
using sample aspect ratio is generally nicer. The main reason for the
change is making mpv closer to how FFmpeg works in order to make life
easier. It's also nice that everything uses integer fractions instead
of floats now (except --video-aspect option/property).
Note that there is at least 1 user-visible change: vf_dsize now does
not set the display size, only the display aspect ratio. This is
because the image_params d_w/d_h fields did not just set the display
aspect, but also the size (except in encoding mode).
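(For reference, the relation between the two conventions, as a minimal illustration rather than code from this commit; real code would also reduce the fraction, e.g. with av_reduce():)

    // Display aspect ratio from sample (pixel) aspect ratio and storage size:
    // DAR = SAR * width / height, e.g. 1440x1080 with SAR 4:3 gives DAR 16:9.
    struct ratio { int num, den; };

    static struct ratio dar_from_sar(struct ratio sar, int w, int h)
    {
        struct ratio dar = { sar.num * w, sar.den * h };   // not reduced here
        return dar;
    }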
| |
Slightly simpler, and removes the need to pre-read all subtitle packets.
This still does the subtitle charset conversion on the packet level
(instead of converting it when parsing the file), so in theory this still
could provide a way to change the charset at runtime. But maybe even
this should be removed, as FFmpeg is somewhat likely to get its own
charset detection and conversion mechanism in the future. (Would have
to keep the subtitle file in memory to allow changing the charset on
the fly, I guess.)
| |
At least Matroska files have a "forced" flag (in addition to the
"default" flag). Export this flag. Treat it almost like the default
flag, but with slightly higher priority.
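(A minimal sketch of what "slightly higher priority" could mean when ranking tracks; the weighting is invented, not the actual player code:)

    #include <stdbool.h>

    // Invented example: "forced" outweighs "default", and both outweigh a
    // track that has neither flag set.
    static int track_flag_score(bool is_forced, bool is_default)
    {
        return (is_forced ? 2 : 0) + (is_default ? 1 : 0);
    }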
| |
MPlayer traditionally had completely separate sh_ structs for
audio/video/subs, without a good way to share fields. This meant that
fields shared across all these headers had to be duplicated. This commit
deduplicates essentially the last remaining duplicated fields.
| |
Why not. "format" sounds too misleading for the actual importance and
meaning of this field.
| |
Remove the old implementation for these properties. It was never very
good, often returned very inaccurate values or just 0, and was static
even if the source was variable bitrate. Replace it with the
implementation of "packet-video-bitrate". Mark the "packet-..."
properties as deprecated. (The effective difference is different
formatting, and returning the raw value in bits instead of kilobits.)
Also extend the documentation a little.
It appears at least some decoders (sipr?) need the
AVCodecContext.bit_rate field set, so this one is still passed through.
| |
Signed-off-by: wm4 <wm4@nowhere>
| |
Trying to handle such video is almost worthless, but it was requested by
at least 2 users.
If there are no timestamps, enable byte seeking by setting
ts_resets_possible. Use the video FPS (wherever it comes from) and the
audio samplerate for timing. The latter was already done by making the
first packet emit DTS=0; remove this again and do it "properly" at a
higher level.
| |
Remove coded_width and coded_height. This was originally added in commit
fd7dde40, when BITMAPINFOHEADER was killed. The separate fields became
redundant in commit e68f4be1. Remove them (nothing passed to the
decoders actually changes with _this_ commit).
| |
Apparently using the stream index is the best way to refer to the same
streams across multiple FFmpeg-using programs, even if the stream index
itself is rarely meaningful in any way.
For Matroska, there are some possible problems, depending how FFmpeg
actually adds streams. Normally they seem to match though.
| |
Don't refer to fields that were removed.
| |
See previous commits. This finally replaces reading the file data
directly into a struct with reading the fields manually. In theory this
is more portable (no alignment issues, among other things). For the most
part, it's just nice to see this gone.
| |
Same as with the previous commit. A bit more involved due to how the
code is written.
| |
MPlayer traditionally did this because it made sense: the most important
formats (avi, asf/wmv) used Microsoft formats, and many important
decoders (win32 binary codecs) also did. But the world has changed, and
I've always wanted to get rid of this thing from the codebase.
demux_mkv.c internally still uses it, because, guess what, Matroska has
a VfW muxing mode, which uses these data structures natively.
| |
For a while, we used this to transfer PCM from demuxer to the filter
chain. We had a special "codec" that mapped what MPlayer used to do
(MPlayer passed the AF sample format to ad_pcm in an extra field, which
ad_pcm then specially interpreted).
Do this instead by providing a mp_set_pcm_codec() function, which describes a
sample format in a generic way, and sets the appropriate demuxer header
fields so that libavcodec interprets it correctly. We use the fact that
libavcodec has separate PCM decoders for each format. These are
systematically named, so we can easily map them.
This has the advantage that we can change the audio filter chain as we
like, without losing features from the "rawaudio" demuxer. In fact, this
commit also gets rid of the audio filter chain formats completely.
Instead have an explicit list of PCM formats. (We could even just have
the user pass libavcodec PCM decoder names directly, but that would be
annoying in other ways.)
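(As an illustration of the "systematically named" decoders point - a sketch only; the real mp_set_pcm_codec() may look different - building the libavcodec decoder name boils down to something like this:)

    #include <stdbool.h>
    #include <stdio.h>

    // Build a libavcodec PCM decoder name such as "pcm_s16le", "pcm_u8" or
    // "pcm_f32be" from a generic sample format description. (Sketch only.)
    static void pcm_codec_name(char *buf, size_t size, bool is_float,
                               bool is_signed, int bits, bool big_endian)
    {
        const char *kind = is_float ? "f" : (is_signed ? "s" : "u");
        if (bits == 8) {
            snprintf(buf, size, "pcm_%s8", kind);   // 8 bit has no endian suffix
        } else {
            snprintf(buf, size, "pcm_%s%d%s", kind, bits,
                     big_endian ? "be" : "le");
        }
    }

For example, signed 16 bit little endian yields "pcm_s16le", which libavcodec provides as a separate decoder.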
| |
Until now, the audio chain could handle both little endian and big
endian formats. This actually doesn't make much sense, since the audio
API and the HW will most likely prefer native formats. Or at the very
least, it should be trivial for audio drivers to do the byte swapping
themselves.
From now on, the audio chain contains native-endian formats only. All
AOs and some filters are adjusted. af_convertsignendian.c is now wrongly
named, but the filter name is adjusted. In some cases, the audio
infrastructure was reused on the demuxer side, but that is relatively
easy to rectify.
This is a quite intrusive and radical change. It's possible that it will
break some things (especially if they're obscure or not on Linux), so watch
out for regressions. It's probably still better to do it the bulldozer
way, since slow transition and researching foreign platforms would take
a lot of time and effort.
| |
--hls-bitrate=min/max lets you select the min or max bitrate. That's it.
Something more sophisticated might be possible, but is probably not even
worth the effort.
| |
This inserts an automatic conversion filter if a Matroska file is marked
as 3D (StereoMode element). The basic idea is similar to video rotation
and colorspace handling: the 3D mode is added as a property to the video
params. Depending on this property, a video filter can be inserted.
As of this commit, extending mp_image_params is actually completely
unnecessary - but the idea is that it will make it easier to integrate
with VOs supporting stereo 3D mogrification. Although vo_opengl does
support some stereo rendering, it didn't support the mode my sample file
used, so I'll leave that part for later.
Note that most mappings from Matroska mode to vf_stereo3d mode are
probably wrong, and some are missing.
Assuming that Matroska modes, vf_stereo3d input modes, and output modes
are all the same might be an oversimplification - we'll see.
See issue #1045.
| |
This adds a thread to the demuxer which reads packets asynchronously.
It will do so until a configurable minimum packet queue size is
reached. (See options.rst additions.)
For now, the thread is disabled by default. There are some corner cases
that have to be fixed, such as fixing cache behavior with webradios.
Note that most interaction with the demuxer is still blocking, so if
e.g. network dies, the player will still freeze. But this change will
make it possible to remove most causes for freezing.
Most of the new code in demux.c actually consists of weird caches to
compensate for thread-safety issues (with the previously single-threaded
design), or to avoid blocking by having to wait on the demuxer thread.
Most of the changes in the player are due to the fact that we must not
access the source stream directly: the demuxer thread already accesses
it, and the stream stuff is not thread-safe.
For timeline stuff (like ordered chapters), we enable the thread for the
current segment only. We also clear its packet queue on seek, so that
the remaining (unconsumed) readahead buffer doesn't waste memory.
Keep in mind that insane subtitles (such as ASS typesetting muxed into
mkv files) will practically disable the readahead, because the total
queue size is considered when checking whether the minimum queue size
was reached.
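(A very rough sketch of the reader-thread idea, with invented names and simplified locking; this is illustrative only, not the actual demux.c code:)

    #include <pthread.h>
    #include <stdbool.h>
    #include <stddef.h>

    struct packet;
    struct demux_state {
        pthread_mutex_t lock;
        pthread_cond_t wakeup;
        bool terminate, eof;
        size_t queue_bytes, min_queue_bytes;    // total queued vs. minimum
    };

    // Assumed to exist elsewhere in this sketch: blocking read and queue append.
    struct packet *read_next_packet(struct demux_state *ds);
    void queue_append(struct demux_state *ds, struct packet *pkt);

    // Reads ahead until the configured minimum queue size is reached, then
    // sleeps until the player consumes packets (or asks the thread to quit).
    static void *demux_thread(void *arg)
    {
        struct demux_state *ds = arg;
        pthread_mutex_lock(&ds->lock);
        while (!ds->terminate) {
            if (!ds->eof && ds->queue_bytes < ds->min_queue_bytes) {
                pthread_mutex_unlock(&ds->lock);    // don't hold lock during I/O
                struct packet *pkt = read_next_packet(ds);
                pthread_mutex_lock(&ds->lock);
                if (pkt)
                    queue_append(ds, pkt);
                else
                    ds->eof = true;
                pthread_cond_broadcast(&ds->wakeup);
            } else {
                pthread_cond_wait(&ds->wakeup, &ds->lock);
            }
        }
        pthread_mutex_unlock(&ds->lock);
        return NULL;
    }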
| |
It's unlikely that files with multiple audio tracks and with replaygain
actually happen, but this change might help avoid minor corner cases
with later changes.
| |
Since i_bps now contains bits/sec, rename it to reflect this change.
| |
The i_bps members of the sh_audio and sh_video structs are mostly used
for displaying the average audio and video bitrates. Keeping them in
bits per second avoids truncating them to bytes per second and changing
them back later on.
| |
Now the rotation hint is propagated everywhere. It just isn't used
anywhere yet.
| |
Instead of parsing the ASS file in demux_libass.c and trying to pass the
ASS_Track to the subtitle renderer, just read all file data in
demux_libass.c, and let the subtitle renderer pass the file contents to
ass_process_codec_private(). (This happens to parse full files too.)
Makes the code simpler, though it also relies more heavily on the
(messy) probe logic in demux_libass.c.
| |
Before this, it wasn't possible to distinguish MicroDVD subtitles
without FPS header, and subtitles with FPS header equal to FFmpeg's
fallback FPS.
| |
The MPlayer decoder (spudec.c) actually handled this. There was explicit
code for binary palettes (sixteen 32-bit values), and the subtitle
resolution was handled by the video resolution coincidentally matching
the subtitle resolution.
Whoever puts vobsub into mp4 should be punished.
Fixes the sample gundam_sample.mp4, closes github issue #547.
| |
So, FFmpeg/Libav requires us to figure out video timestamps ourselves
(see last 10 commits or so), but the methods it provides for this aren't
even sufficient. In particular, everything that uses AVI-style DTS (avi,
vfw-muxed mkv, possibly mpeg4-in-ogm) with a codec that has an internal
frame delay is broken. In this case, libavcodec will shift the packet-
to-image correspondence by the codec delay, meaning that with a delay of 1,
the first AVFrame.pkt_dts is not 0, but that of the second packet. All
timestamps will appear shifted. The start time (e.g. the time displayed
when doing "mpv file.avi --pause") will not be exactly 0.
(According to Libav developers, this is how it's supposed to work; just
that the first DTS values are normally negative with formats that use
DTS "properly". Who cares if it doesn't work at all with very common
video formats? There's no indication that they'll fix this soon,
either. An elegant workaround is missing too.)
Add a hack to re-enable the old PTS code for AVI and vfw-muxed MKV.
Since these timestamps are not reordered, we wouldn't need to sort them,
but it's less code this way (and possibly more robust, should a demuxer
unexpectedly output PTS).
The original intention of all the timestamp changes recently was
actually to get rid of demuxer-specific hacks and the old timestamp
sorting code, but it looks like this didn't work out. Yet another case
where trying to replace native MPlayer functionality with FFmpeg/Libav
led to disadvantages and bugs. (Note that the old PTS sorting code
doesn't and can't handle frame dropping correctly, though.)
Bug reports:
https://trac.ffmpeg.org/ticket/3178
https://bugzilla.libav.org/show_bug.cgi?id=600
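(To make the shift concrete, with hypothetical numbers: at 25 fps and a codec delay of one frame, packets carry DTS 0, 40, 80, 120 ms, but libavcodec returns the first decoded frame only after the second packet, so that frame's AVFrame.pkt_dts is 40 ms instead of 0, and every later frame is offset by one packet duration as well. That is exactly why the displayed start time isn't 0.)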
| |
This used to be needed to access the generic stream header from the
specific headers, which in turn was needed because the decoders had
access only to the specific headers. This is not the case anymore, so
this can finally be removed again.
Also move the "format" field from the specific headers to sh_stream.
| |
This is similar to the sh_audio commit.
This is mostly cosmetic in nature, except that it also adds automatic
freeing of the decoder driver's state struct (which was in
sh_video->context, now in dec_video->priv).
Also remove all the stheader.h fields that are not needed anymore.
| |
sh_audio is supposed to contain file headers, not whatever was decoded.
Fix this, and write the decoded format to a separate field in the decoder
context, dec_audio.decoded. (Note that this field is really
only needed to communicate the audio format from decoder driver to the
generic code, so no other code accesses it.)
| |
Move all state that basically changes during decoding or is needed in
order to manage decoding itself into a new struct (dec_audio).
sh_audio (defined in stheader.h) is supposed to be the audio stream
header. This should reflect the file headers for the stream. Putting the
decoder context there is strange design, to say the least.
| |
Most libavcodec decoders output non-interleaved audio. Add direct
support for this, and remove the hack that repacked non-interleaved
audio back to packed audio.
Remove the minlen argument from the decoder callback. Instead of
forcing every decoder to have its own decode loop to fill the buffer
until minlen is reached, leave this to the caller. So if a decoder
doesn't return enough data, it's simply called again. (In future, I
even want to change it so that decoders don't read packets directly,
but instead the caller has to pass packets to the decoders. This fits
well with this change, because now the decoder callback typically
decodes at most one packet.)
ad_mpg123.c receives some heavy refactoring. The main problem is that
it wanted to handle format changes when there was no data in the decode
output buffer yet. This sounds reasonable, but actually it would write
data into a buffer prepared for old data, since the caller doesn't know
about the format change yet. (I.e. the best place for a format change
would be _after_ writing the last sample to the output buffer.) It's
possible that this code was not perfectly sane before this commit,
and perhaps lost one frame of data after a format change, but I didn't
confirm this. Trying to fix this, I ended up rewriting the decoding
and also the probing.
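(A minimal sketch of the new calling convention, with invented helper names rather than mpv's actual decoder API:)

    #include <stdbool.h>

    struct dec_audio;                           // opaque in this sketch
    int decode_callback(struct dec_audio *da);  // decodes about one packet,
                                                // returns <0 on error/EOF
    int buffered_samples(struct dec_audio *da);

    // The caller, not the decoder, loops until enough data is available.
    static bool fill_audio_output(struct dec_audio *da, int min_samples)
    {
        while (buffered_samples(da) < min_samples) {
            if (decode_callback(da) < 0)
                return false;   // error or end of stream
        }
        return true;
    }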
| |
This member was redundant. sh_audio->sample_format indicates the sample
size already.
The TV code is a bit strange: the redundant sample size was part of the
internal TV interface. Assume it's really redundant and not something
else. The PCM decoder ignores the sample size anyway.
| |
There are some Microsoft Windows symbols which are traditionally used by
the mplayer core, because it used to be convenient (avi was the big
format, using binary windows decoders made sense...). So these symbols
have the exact same definition as the Windows ones, and if mplayer is
compiled on Windows, the symbols from windows.h are used.
This broke recently just because some files were shuffled around, and
the symbols defined in ms_hdr.h collided with windows.h ones. Since we
don't have windows binary decoders anymore, there's not the slightest
reason our symbols should have the same names. Rename them to reduce the
risk of collisions, and to fix the recent regression.
Drop WAVEFORMATEXTENSIBLE, because it's mostly unused. ao_dsound defines
its own version if the windows headers don't define it, and ao_wasapi is
not available on systems where this symbol is missing.
Also reindent ms_hdr.h.
| |
The --deinterlace option does on playback start what the "deinterlace"
property normally does at runtime. You could do this before by using the
--vf option or by messing with the vo_vdpau default options, but this
new option is supposed to be a "foolproof" way.
The main motivation for adding this is so that the deinterlace property
can be restored when using the video resume functionality
(quit_watch_later command).
Implementation-wise, this is a bit messy. The video chain is rebuilt in
mpcodecs_reconfig_vo(), where we don't have access to MPContext, so the
usual mechanism for enabling deinterlacing can't be used. Further,
mpcodecs_reconfig_vo() is called by the video decoder, which doesn't
have access to MPContext either. Moving this call to mplayer.c isn't
currently possible either (see below). So we just do this before frames
are filtered, which potentially means setting the deinterlacing every
frame. Fortunately, setting deinterlacing is stable and idempotent, so
this is hopefully not a problem. We also add a counter that is
incremented on each reconfig to reduce the amount of additional work per
frame to nearly zero.
The reason we can't move mpcodecs_reconfig_vo() to mplayer.c is because
of hardware decoding: we need to check whether the video chain works
before we decide that we can use hardware decoding. Changing it so that
this can be decided in advance without building a filter chain sounds
like a good idea and should be done, but we aren't there yet.
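(The per-frame check hinted at above could look roughly like this; the names are invented and the real code differs:)

    #include <stdbool.h>

    struct vf_chain  { int reconfig_count; /* ... */ };
    struct dec_video { int last_reconfig_seen; /* ... */ };

    void set_deinterlacing(struct vf_chain *vf, bool enable);   // assumed helper

    // Called before filtering each frame: only touches the deinterlace setting
    // when the filter chain was actually rebuilt since the last frame, so the
    // per-frame overhead stays close to zero.
    static void maybe_apply_deinterlace(struct dec_video *d_video,
                                        struct vf_chain *vf, bool enable)
    {
        if (d_video->last_reconfig_seen != vf->reconfig_count) {
            d_video->last_reconfig_seen = vf->reconfig_count;
            set_deinterlacing(vf, enable);
        }
    }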
| |
Move the decoder parts from vo_vdpau.c to a new file vdpau_old.c. This
file is named so because it's written against the "old"
libavcodec vdpau pseudo-decoder (e.g. "h264_vdpau").
Add support for the "new" libavcodec vdpau support. This was recently
added and replaces the "old" vdpau parts. (In fact, Libav is about to
deprecate and remove the "old" API without deprecation grace period,
so we have to support it now. Moreover, there will probably be no Libav
release which supports both, so the transition is even less smooth than
we could hope, and we have to support both the old and new API.)
Whether the old or new API is used is checked by a configure test: if
the new API is found, it is used, otherwise the old API is assumed.
Some details might be handled differently. Especially display preemption
is a bit problematic with the "new" libavcodec vdpau support: it wants
to keep a pointer to a specific vdpau API function (which can be driver
specific, because preemption might switch drivers). Also, surface IDs
are now directly stored in AVFrames (and mp_images), so they can't be
forced to VDP_INVALID_HANDLE on preemption. (This changes even with
older libavcodec versions, because mp_image always uses the newer
representation to make vo_vdpau.c simpler.)
Decoder initialization in the new code tries to deal with codec
profiles, while the old code always uses the highest profile per codec.
Surface allocation changes. Since the decoder won't call config() in
vo_vdpau.c on video size change anymore, we allow allocating surfaces of
arbitrary size instead of locking them to the size the VO was configured with.
The non-hwdec code also has slightly different allocation behavior now.
Enabling the old vdpau special decoders via e.g. --vd=lavc:h264_vdpau
doesn't work anymore (a warning suggesting the --hwdec option is
printed instead).
| |
Matroska has an output sample rate (OutputSamplingFrequency), which in
theory should be forced instead of whatever the decoder outputs. But it
appears no software (other than mplayer2 and mpv until now) actually
respects this. Even worse, there were broken files around, which played
correctly with (in theory) broken software, but not mplayer2/mpv. Hacks
were added to our code to play these files correctly, but they didn't
catch all cases.
Simplify this by doing what everyone else does, and always use the
decoder's sample rate instead. In particular, we try to handle all
sample rate issues like libavformat's Matroska demuxer does.
| |
Guess the colorspace directly in mpcodecs_reconfig_vo(), instead of in
set_video_colorspace(). The difference is that the latter function just
makes the video filter chain (and VOs) force the detected colorspace,
and then throws it away, while the former is a bit more general and
central. Not really a big difference and it doesn't matter much in
practice, but it guarantees that there is no internal disagreement about
the colorspace.
| |
Move codec_tags.h include to demux_mkv.c, because this is the only file
which still uses it.
Move new_sh_stream() to demux.h, because this is more proper.
| |
Before this commit, we tried to play along with libavformat and tried
to pretend that attached pictures are video streams with a single
frame, and that the frame magically appeared at the seek position when
seeking. The playback core would then switch to a mode where the video
has ended, and the "remaining" audio is played.
This didn't work very well:
- we needed a hack in demux.c, because we tried to read more packets in
order to find the "next" video frame (libavformat doesn't tell us if
a stream has ended)
- switching the video stream didn't work, because we can't tell
libavformat to send the packet again
- seeking and resuming after was hacky (for some reason libavformat sets
the returned packet's PTS to that of the previously returned audio
packet in generic code not related to attached pictures, and this
happened to work)
- if the user did something stupid and e.g. inserted a deinterlacer by
default, a picture was never displayed, only an inactive VO window
- same when using a command that reconfigured the VO (like switching
aspect or video filters)
- hr-seek didn't work
For this reason, handle attached pictures as a separate case with a
separate video decoding function, which doesn't read packets. Also,
do not synchronize audio to video start in this case.
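(Schematically, the new special case amounts to something like the following sketch; the names are invented, not the actual player code:)

    #include <stdbool.h>

    struct packet;
    struct image;
    struct track     { bool attached_picture; struct packet *cover_packet; };
    struct dec_video { struct image *cover_image; /* ... */ };

    struct image *decode_packet(struct dec_video *dv, struct packet *pkt);
    struct image *decode_next_frame(struct dec_video *dv);   // reads packets

    // Attached pictures get their own path that never reads packets: the one
    // stored packet is decoded once and the resulting image is simply kept.
    static struct image *get_video_frame(struct dec_video *dv, struct track *t)
    {
        if (t->attached_picture) {
            if (!dv->cover_image)
                dv->cover_image = decode_packet(dv, t->cover_packet);
            return dv->cover_image;
        }
        return decode_next_frame(dv);
    }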
| |
Generally remove all accesses to demux_stream from all the code, except
inside of demux.c. Make it completely private to demux.c.
This simplifies the code because it removes an extra concept. In demux.c
it is reduced to a simple packet queue. There were other uses of
demux_stream, but they were removed or are removed with this commit.
Remove the extra "ds" argument to the demux fill_buffer callback. It was
used by demux_avi and the TV pseudo-demuxer only.
Remove usage of d_video->last_pts from the no-correct-pts code. This
field contains the last PTS retrieved after a packet that is not NOPTS.
We can easily get this value manually because we read the packets
ourselves. Reuse sh_video->last_pts to store the packet PTS values. It
was used only by the correct-pts code before, and like d_video->last_pts,
it is reset on seek. The behavior should be exactly the same.
| |
This is not directly related to the handling of format changes itself,
but playing audio normally after the change. This was broken: the output
byte rate was not recalculated, so audio-video sync was simply broken.
Fix this by calculating the byte rate on the fly, instead of storing it
in sh_audio.
Format changes are relatively common (switches between stereo and 5.1
in TV recordings), so this fixes a somewhat critical bug.
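(The on-the-fly calculation itself is trivial; roughly, as an illustration rather than the exact mpv code:)

    // Output byte rate derived from the current decoded format instead of a
    // value cached in sh_audio, so it stays correct after a format change.
    static int audio_byte_rate(int samplerate, int channels, int bytes_per_sample)
    {
        // e.g. 48000 Hz stereo s16: 48000 * 2 * 2 = 192000 bytes per second;
        // after a switch to 5.1 it becomes 48000 * 6 * 2 = 576000.
        return samplerate * channels * bytes_per_sample;
    }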
|