With the format left untouched, this would just try to reinit with an
spdif format again.
We're not clearing the format in reset_audio_state() so the audio chain
can be recreated any time without having to wait for a frame to be
decoded.
|
It doesn't need to be part of the big context, but is strictly part of
shuffling data from the audio filters to the audio output, and thus
belongs in ao_chain.
It also turns out that clearing it in clear_audio_output_buffers() is
completely redundant.
(Of course ao_buffer is an abomination in the first place and shouldn't
exist at all.)
|
Similar to the video path. dec_audio.c now handles decoding only. It
also looks very similar to dec_video.c, and actually contains some of
the rewritten code from it. (A further goal might be unifying the
decoders, I guess.)
High potential for regressions.
|
Seems useless.
This only helped in one case: one audio stream in the sample
av_find_best_stream_fails.ts had AC3 packets which couldn't be
decoded, and for which avcodec_decode_audio4() returned 0 forever. In
this specific case, playback will now not start, and you have to
deselect audio manually.
(If someone complains, the old behavior might be restored, but
differently.)
Also remove the stale "bitrate" field.
|
That's where its only use is.
|
Eventually we want the VO to be driven by an A->V filter, so a decoder
doesn't even have to exist. Some features definitely require a decoder
though (like reporting the decoder in use, hardware decoding, etc.), so
for each thing which accessed d_video, it has to be redecided if and how
it can access decoder state.
At least the "framedrop" property slightly changes semantics: you can
now always set this property, even if no video is active.
Some untested changes in this commit, but our bio-based distributed
test suite has to take care of this.
|
This is mainly a refactor. I'm hoping it will make some things easier
in the future due to cleanly separating codec metadata and stream
metadata.
Also, declare that the "codec" field cannot be NULL anymore. demux.c
will set it to "" if it's NULL when added. This gets rid of a corner
case everything had to handle, but which rarely happened.
|
This change helps avoid conflicts with talloc.h from libtalloc.
|
This is another attempt at making files with sparse video frames work
better.
The problem is that you generally can't know whether a jump in video
timestamps is just a (very) long video frame, or a timestamp reset. Due
to the existence of files with sparse video frames (new frame only every
few seconds or longer), every heuristic will be arbitrary (in general,
at least).
But we can use the fact that if video is continuous, audio should also
be continuous. Audio discontinuities can be easily detected, and if that
happens, reset some of the playback state.
The way the playback state is reset is rather radical (resets decoders
as well), but it's just better not to cause too much obscure stuff to
happen here. If the A/V sync code were to be rewritten, it should
probably strictly use PTS values (not this strange time_frame/delay
stuff), which would make it much easier to detect such situations and
to react to them.
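
To illustrate the idea only (this is not the player's code; all names
here are invented): a discontinuity check can compare each incoming
audio PTS against the end of the previously decoded chunk and trigger
the reset when the jump exceeds some tolerance.

    #include <math.h>
    #include <stdbool.h>
    #include <stdio.h>

    // Track the PTS we expect the next audio chunk to start at; a large
    // deviation is treated as a discontinuity.
    struct audio_state {
        double expected_pts;  // end PTS of the previously decoded chunk
        bool valid;
    };

    static bool audio_discontinuity(struct audio_state *st, double pts,
                                    double tolerance)
    {
        return st->valid && fabs(pts - st->expected_pts) > tolerance;
    }

    static void feed_audio(struct audio_state *st, double pts, double duration)
    {
        if (audio_discontinuity(st, pts, 0.5)) {
            printf("discontinuity at %.2f (expected %.2f) -> reset state\n",
                   pts, st->expected_pts);
            // ...this is where decoders and sync state would be reset...
        }
        st->expected_pts = pts + duration;
        st->valid = true;
    }

    int main(void)
    {
        struct audio_state st = {0};
        feed_audio(&st, 0.00, 0.02);
        feed_audio(&st, 0.02, 0.02);
        feed_audio(&st, 10.00, 0.02);  // jump: triggers the reset path
        return 0;
    }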
|
Use the demux_set_ts_offset() added in the previous commit to rebase
each timeline segment's timestamps according to its relative position
within the overall timeline. As a consequence we don't need to care
about these timestamps anymore, and everything becomes simpler.
(Another minor but delicious nugget of sanity.)
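
A minimal sketch of the rebasing arithmetic, with made-up structs
rather than the real demuxer state: the per-segment offset is just the
difference between the segment's position in the timeline and its
start timestamp in the source file, and it gets added to every demuxed
timestamp.

    #include <stdio.h>

    // Hypothetical segment description: where its timestamps start in the
    // source file, and where the segment begins in the edited timeline.
    struct segment {
        double source_start;
        double timeline_start;
    };

    static double ts_offset(const struct segment *seg)
    {
        return seg->timeline_start - seg->source_start;
    }

    int main(void)
    {
        struct segment seg = { .source_start = 60.0, .timeline_start = 0.0 };
        double packet_pts = 63.5;  // as read from the source file
        printf("rebased pts: %.2f\n", packet_pts + ts_offset(&seg));  // 3.50
        return 0;
    }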
|
When the audio format is not known yet and the audio chain is still
initializing, filter reinit will fail. Normally, attempts to
reinitialize filters at this stage should be rare (e.g. user commands
editing the filter chain). But it sometimes happened with track
switching in combination with the video code calling
update_playback_speed() at arbitrary times.
Get rid of the message by not trying to change the filters for the sake
of playback speed update while decoding is still being initialized.
|
Actually, it didn't really require that before (most work was avoided),
but some bits had to be run anyway. Separate the speed change into a
light-weight function, which merely updates already created filters, and
a heavy-weight one which messes with filter insertion.
This also happens to fix the case where the filters would "forget" the
current speed (force resampling, change speed, hit a volume control to
force af_volume insertion - it will reset speed and desync).
Since we now always run the light-weight function, remove the
af_scaletempo verbose message that is printed on speed setting. Other
than that, all setters are cheap.
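
Roughly, the split looks like this (purely illustrative; the structs
and the single speed filter are placeholders): the light-weight path
only pushes a new speed value to filters that already exist, while the
heavy-weight path is the one allowed to insert or remove filters.

    #include <stdbool.h>
    #include <stdio.h>

    // Placeholder filter chain: at most one speed-changing filter.
    struct af_chain {
        bool has_scaletempo;   // whether a speed filter is inserted
        double filter_speed;   // speed currently set on that filter
    };

    // Light-weight: only update parameters of existing filters. Never
    // rebuilds the chain, so it is cheap and safe to call at any time.
    static void update_speed_filters(struct af_chain *c, double speed)
    {
        if (c->has_scaletempo)
            c->filter_speed = speed;
    }

    // Heavy-weight: insert or remove the speed filter as needed, then
    // set the parameter. Only called when the set of filters must change.
    static void recreate_speed_filters(struct af_chain *c, double speed)
    {
        bool need_filter = speed != 1.0;
        if (need_filter != c->has_scaletempo) {
            c->has_scaletempo = need_filter;
            printf("%s speed filter\n", need_filter ? "inserting" : "removing");
        }
        update_speed_filters(c, speed);
    }

    int main(void)
    {
        struct af_chain c = {0};
        recreate_speed_filters(&c, 1.5);  // chain changes: filter inserted
        update_speed_filters(&c, 1.25);   // cheap: just update the parameter
        printf("filter speed: %.2f\n", c.filter_speed);
        return 0;
    }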
|
We still have a sample-based buffer between filters and audio outputs.
In order to avoid cutting frames in half (which can upset receivers),
we strictly need to align the boundaries on which we cut the audio.
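
The alignment itself is plain integer arithmetic; a sketch with
made-up numbers: whatever amount we want to cut gets rounded down to a
whole number of frames.

    #include <stdio.h>

    // Round a sample count down to a multiple of the (compressed) frame
    // size, so a cut never splits an spdif frame in half.
    static int align_to_frames(int samples, int samples_per_frame)
    {
        return samples - samples % samples_per_frame;
    }

    int main(void)
    {
        int samples_per_frame = 1536;  // e.g. one AC-3 frame
        int want_to_cut = 5000;        // raw amount from A/V sync
        int cut = align_to_frames(want_to_cut, samples_per_frame);
        printf("cutting %d of %d samples (3 whole frames)\n",
               cut, want_to_cut);
        return 0;
    }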
|
Commit 49d94853 worked only at the start of playback.
|
Discontinuities (like toggling fullscreen) can cause multiple frames to
be dropped in succession, which sounds very weird. It's better to drop
some video frames instead to compensate for larger desyncs.
We roughly base it on the maximum allowed speed changes (audio change is
"additional" to the video change to account for deviations when playing
at max. video speed change).
|
It's annoying.
|
It's not needed, because the additional data is not appended, but is the
total size of the audio buffer. The maximum size is the static audio
drop size (or twice that, if the audio is duplicated).
|
Not very robust at the moment.
|
This was done for symmetry with adjust_sync(). But mpctx->delay is
always 0 at this point, so prefer slightly simpler code.
|
Pretty dumb (and doesn't handle pausing or other discontinuities), but
at least somewhat idiot-proof.
|
The previous commit handled not falling back to normal decoding if the
AO was reloaded (I think...), and this tries to re-engage spdif pass-
through if it was previously falling back to normal decoding (e.g.
because it temporarily switched to an audio device incapable of
passthrough).
|
Makes the spdif automagic work better on audio hotplugging.
|
The manpage entry explains this.
(Maybe this behavior could always be enabled and the option removed. I
don't quite remember what valid use cases there are for just disabling
audio entirely, other than that this is also needed for audio decoder
init failure.)
|
This should avoid unnecessary sleeping when the audio playback start
resync has finished and playback goes into the normal state.
This is tricky; see e.g. commit 402fe381.
|
For video sync, we want separate playback speed controls for user-
requested speed and the "correction" speed for video timing. Further, we
use this separation to make sure only a resampler is inserted if
playback speed is only changed for video sync correction.
As of this commit, this is basically inactive code. It's just
preparation for the video sync code (the following commit).
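
Conceptually (a sketch with invented field names, not the real option
or struct layout): the effective audio speed becomes the product of
the user speed and the video-sync correction factor, and only the user
part decides whether a plain resampler is sufficient.

    #include <stdbool.h>
    #include <stdio.h>

    struct speed_state {
        double user_speed;     // what the user asked for
        double av_correction;  // small factor applied for video sync timing
    };

    static double total_audio_speed(const struct speed_state *s)
    {
        return s->user_speed * s->av_correction;
    }

    // If only the correction factor deviates from 1.0, a resampler is
    // enough; user-requested speed changes may want scaletempo instead.
    static bool resampler_sufficient(const struct speed_state *s)
    {
        return s->user_speed == 1.0;
    }

    int main(void)
    {
        struct speed_state s = { .user_speed = 1.0, .av_correction = 1.002 };
        printf("speed=%.4f resampler_only=%d\n",
               total_audio_speed(&s), resampler_sufficient(&s));
        return 0;
    }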
|
Commit c5818046 fixed one case of audio EOF handling, and caused a new
one. This time, the ao_buffer doesn't actually contain everything that
should be played - because if --end is used, only a part of it is
played. Of course this is stupid, and it will be changed later. For now,
this smaller change fixes the bug.
Fixes #2189.
|
time_frame is when the next video frame should be shown. It's normally
overwritten by the video timing code. This also says something about
"nosound mode" (--no-audio today), but at least these days we don't use
it at all if video is disabled.
Remove it; it likely has no function at all.
|
In paused mode, we never entered the audio EOF state. This shows e.g. in
--keep-open mode, which will not set the eof-reached property correctly.
Regression since commit c06cd1b9. That commit was the wrong fix. We need
to respect the buffer state, and pausing has nothing to do with this.
Fixes #2167.
|
Replace all the check macros with function calls. Give them all the
same case and naming scheme.
Drop af_fmt2bits(). Only af_fmt2bps() survives as af_fmt_to_bytes().
Introduce af_fmt_is_pcm(), and use it in situations that used
!AF_FORMAT_IS_SPECIAL. Nobody really knew what a "special" format
was. It simply meant "not PCM".
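
To show the direction, here is a toy re-implementation over an
invented (and much smaller) format list, not mpv's actual format
table: the helpers become ordinary functions, and "is PCM" is defined
positively instead of via a "special" flag.

    #include <stdbool.h>
    #include <stdio.h>

    enum af_format {
        AF_FORMAT_U8,
        AF_FORMAT_S16,
        AF_FORMAT_S32,
        AF_FORMAT_FLOAT,
        AF_FORMAT_S_AC3,  // spdif-wrapped compressed data, not PCM
    };

    static int af_fmt_to_bytes(enum af_format fmt)
    {
        switch (fmt) {
        case AF_FORMAT_U8:    return 1;
        case AF_FORMAT_S16:   return 2;
        case AF_FORMAT_S32:   return 4;
        case AF_FORMAT_FLOAT: return 4;
        case AF_FORMAT_S_AC3: return 2;  // carried as 16 bit words
        }
        return 0;
    }

    static bool af_fmt_is_pcm(enum af_format fmt)
    {
        return fmt != AF_FORMAT_S_AC3;
    }

    int main(void)
    {
        printf("s16: %d bytes, pcm=%d\n",
               af_fmt_to_bytes(AF_FORMAT_S16), af_fmt_is_pcm(AF_FORMAT_S16));
        printf("ac3: %d bytes, pcm=%d\n",
               af_fmt_to_bytes(AF_FORMAT_S_AC3), af_fmt_is_pcm(AF_FORMAT_S_AC3));
        return 0;
    }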
|
We must be sure that every change comes with a notification. Otherwise,
some property changes could possibly be missed.
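
The principle reduces to a very small pattern (hypothetical
property/observer code, not the real client API): every write goes
through one setter, and that setter is the only thing allowed to
change the value, so an actual change without a notification cannot
happen.

    #include <stdio.h>

    struct property {
        const char *name;
        double value;
    };

    // The only allowed way to change a property; the notification always
    // goes together with the change, so observers can never miss one.
    static void property_set(struct property *p, double value)
    {
        if (p->value == value)
            return;  // no change, no notification needed
        p->value = value;
        printf("notify: %s -> %g\n", p->name, value);  // stand-in for the
                                                       // real notify call
    }

    int main(void)
    {
        struct property vol = { .name = "volume", .value = 100 };
        property_set(&vol, 100);  // no-op
        property_set(&vol, 80);   // triggers a notification
        return 0;
    }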
|
This provides a new method for enabling spdif passthrough. The old
method via --ad (--ad=spdif:ac3 etc.) is deprecated. The deprecated
method will probably stop working at some point.
This also supports PCM fallback. One caveat is that it will lose at
least 1 audio packet in doing so. (I don't care enough to prevent this.)
(This is named after the old S/PDIF connector, because it uses the same
underlying technology as far as the higher-level protocol is concerned.
Also, the user should be reminded that passthrough is backwards.)
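
The fallback, reduced to its control flow (codec names and the
capability check are stand-ins, not the real ad_spdif wiring): try
passthrough for the requested codec first, and fall back to normal PCM
decoding if the output can't take it.

    #include <stdbool.h>
    #include <stdio.h>
    #include <string.h>

    // Pretend AO capability check: this stand-in only accepts AC-3.
    static bool ao_accepts_passthrough(const char *codec)
    {
        return strcmp(codec, "ac3") == 0;
    }

    static const char *select_audio_path(const char *codec, bool spdif_wanted)
    {
        if (spdif_wanted && ao_accepts_passthrough(codec))
            return "spdif passthrough";
        // Falling back loses whatever packet was used for probing; the
        // commit accepts losing at least one packet here.
        return "PCM decoding (fallback)";
    }

    int main(void)
    {
        printf("ac3: %s\n", select_audio_path("ac3", true));
        printf("dts: %s\n", select_audio_path("dts", true));
        return 0;
    }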
|
This makes no sense, because the format can't be converted anyway. It
just sets up the filter chain init code, which will vomit a bunch of
useless and confusing messages. So uninit and fail explicitly when this
happens.
|
When starting in paused mode, no audio is written to the device at all,
because writing audio implicitly unpauses the AO. If the file is very
small, and all audio fits within the AO buffer, this accidentally
triggered the EOF condition. (In unpaused mode, it would write all
audio, end playback, and then wait until the AO has everything played.)
|
This was a cosmetic issue. It's handled differently now (clamping the
display time to known duration range).
This reverts commit 33b57f55573e658b3af6c6e8ff3188c8f959e82e.
|
Commit 10915000 attempted to fix wasting CPU when resyncing and no new
data was actually coming from the demuxer. The fix assumed that at this
point it would have reached the sync point, but since the code attempts
weird incremental decoding, this wasn't actually true. So it broke
seeking in addition to removing the CPU waste.
Try something else. This time, we essentially only wake up again if
data was read (i.e. audio_decode() returned successfully).
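
In pseudo-real C (a small simulation, not the player's event loop),
the fix amounts to making "did decoding make progress?" the condition
for immediately scheduling another iteration.

    #include <stdio.h>

    enum { AUDIO_PROGRESS, AUDIO_NO_DATA, AUDIO_EOF };

    // Simulated decoder: produces data for a few iterations, then runs
    // dry, then signals EOF.
    static int decode_some_audio(int *calls)
    {
        (*calls)++;
        if (*calls <= 3)
            return AUDIO_PROGRESS;
        if (*calls <= 5)
            return AUDIO_NO_DATA;
        return AUDIO_EOF;
    }

    int main(void)
    {
        int calls = 0;
        for (;;) {
            int r = decode_some_audio(&calls);
            if (r == AUDIO_PROGRESS) {
                printf("iteration %d: decoded data, loop again now\n", calls);
                continue;
            }
            if (r == AUDIO_EOF)
                break;
            // No new data: instead of busy-looping, the player would sleep
            // until the demuxer or AO signals an event.
            printf("iteration %d: no data, would sleep until woken\n", calls);
        }
        return 0;
    }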
|
This code path happens during seeking. If video is still being decoded
to get to the first video frame, audio has nothing to do, as it is
synchronized against the first video frame. We only want to wake up if
there's an actual state change.
Fixes #1958.
|
Signed-off-by: wm4 <wm4@nowhere>
|
The af_add() function has a problem: if the inserted filter returns
AF_DETACH during init, the function will have a dangling pointer. Until
now this was avoided by making sure none of the used filters actually
return AF_DETACH, but it's getting infeasible.
Solve this by requiring passing a unique label to af_add(), which is
then used instead of the pointer.
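
A sketch of the label approach with toy list code (the real
af_add()/filter structs look different): the caller never keeps a
pointer to the inserted filter; it re-finds it by label, so a filter
that detached itself during init is simply not found instead of
leaving a dangling pointer.

    #include <stdio.h>
    #include <string.h>

    #define MAX_FILTERS 8

    struct af_instance {
        char label[32];
        char name[32];
    };

    struct af_stream {
        struct af_instance filters[MAX_FILTERS];
        int num_filters;
    };

    // Insert a filter under a caller-chosen unique label.
    static void af_add(struct af_stream *s, const char *name, const char *label)
    {
        if (s->num_filters >= MAX_FILTERS)
            return;
        struct af_instance *af = &s->filters[s->num_filters++];
        snprintf(af->name, sizeof(af->name), "%s", name);
        snprintf(af->label, sizeof(af->label), "%s", label);
    }

    // Look the filter up again by label; returns NULL if it's gone (e.g.
    // it detached itself while the chain was reconfigured).
    static struct af_instance *af_find_by_label(struct af_stream *s,
                                                const char *label)
    {
        for (int n = 0; n < s->num_filters; n++) {
            if (strcmp(s->filters[n].label, label) == 0)
                return &s->filters[n];
        }
        return NULL;
    }

    int main(void)
    {
        struct af_stream s = {0};
        af_add(&s, "scaletempo", "playback-speed");
        struct af_instance *af = af_find_by_label(&s, "playback-speed");
        printf("found: %s\n", af ? af->name : "(none)");
        return 0;
    }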
|
Only reinit filters if it's actually needed. This is also slightly
easier to understand: if you look at the code, it should now be more
obvious why a reinit is needed (hopefully).
|
Precise seeking requires skipping audio, since the demuxer usually
doesn't seek precisely enough. There is a sanity check that prevents
skipping more than 300 seconds of audio. This still fails with very
large mp3s. For example, with a 1GB sized mp3 with Xing headers, entries
will be 4 MB apart on average, and occasionally much more.
Just bump the limit. I'm not even sure why it was added in the first
place; I suppose it's most important for files with real PTS resets.
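
For reference, the skip amount is a straightforward
PTS-difference-to-samples conversion with the sanity limit on top; a
sketch with made-up names and numbers:

    #include <stdio.h>

    // Convert "how far the demuxer undershot the seek target" into a
    // number of samples to discard. The sanity check gives up entirely
    // (returns -1) if the distance looks absurd; that limit is what the
    // commit bumps.
    static long long samples_to_skip(double audio_pts, double seek_pts,
                                     int samplerate, double max_skip_seconds)
    {
        double diff = seek_pts - audio_pts;
        if (diff <= 0)
            return 0;               // already at or past the target
        if (diff > max_skip_seconds)
            return -1;              // refuse: probably a real PTS reset
        return (long long)(diff * samplerate);
    }

    int main(void)
    {
        // e.g. a big mp3: the seek index lands 400 seconds before the target
        long long n = samples_to_skip(1000.0, 1400.0, 44100, 900.0);
        printf("skip %lld samples\n", n);  // 400 s * 44100 Hz = 17640000
        return 0;
    }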
|
When playback is started after seeking or opening a file, we need to
make sure audio and video line up exactly. This is done by cutting or
padding the audio stream to start on the video PTS.
This does not quite work with spdif: audio is compressed data, within a
spdif frame. There is no way to cut the audio in the middle of a frame.
Cutting mid-frame would just produce broken spdif packets, and who knows
how receivers will react to this (play noise?). But we can still cut at
frame boundaries.
Unfortunately, we also insert 0 data for "silence" - we probably
shouldn't do this. Chances are the receiver will switch to PCM or so.
But for now this will have to do.
Note that this could be simplified somewhat, as soon as we work with
frames. See previous commit.
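
The decision itself is just the sign of the PTS difference; a
schematic version with invented names, simplified to interleaved
sample counts:

    #include <stdio.h>

    // Line the start of audio up with the first video frame: a positive
    // difference means audio starts too early (cut samples), a negative
    // one means it starts too late (prepend silence).
    static void sync_audio_start(double audio_pts, double video_pts,
                                 int samplerate, int *cut, int *pad)
    {
        long samples = (long)((video_pts - audio_pts) * samplerate);
        *cut = *pad = 0;
        if (samples > 0)
            *cut = (int)samples;   // drop this many samples from the start
        else
            *pad = (int)-samples;  // insert this much "silence"
        // For spdif, both values would additionally be rounded down to
        // whole frames, as in the alignment sketch earlier in this log.
    }

    int main(void)
    {
        int cut, pad;
        sync_audio_start(10.00, 10.05, 48000, &cut, &pad);
        printf("cut=%d pad=%d\n", cut, pad);  // audio 50 ms early: cut 2400
        return 0;
    }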
|
Handle the failure gracefully, instead of exploding and disabling audio.
Just set the speed back to 1.0.
Also remove the AF_DETACH from af_scaletempo. This actually created a
dangling pointer in af_add(), a tricky consequence of af_add()
reconfiguring the filter chain and the newly added filter using
AF_DETACH. Fortunately the AF_DETACH is not needed (and probably never
worked - it comes from MPlayer times, and MPlayer also disables audio
when trying to change speed with spdif).
|
Always use af_scaletempo if it's inserted, even if the option
--audio-pitch-correction=no is set.
Make sure all filters are reset on speed change. It's conceivable that
dynamic changes to the filter chain at runtime leave filters around
without resetting their speed parameters.
Also move the code to a separate function.
|
If the audio decoder was created, but no audio filter chain created yet
(still trying to decode a first audio frame), setting the "speed"
property could explode. It tried to recreate the filter chain, even
though no format was set yet.
This is inconvenient and should not happen.
|
Although the libraries we use for resampling (libavresample and
libswresample) do not support changing the samplerate on the fly, this
makes it easier to make sure no audio buffers are implicitly dropped. In
fact, this commit adds additional code to drain the resampler
explicitly.
Changing speed twice without feeding audio in between made it crash with
libavresample in certain cases (libswresample is fine). This is probably
a libavresample bug. Hopefully this will be fixed, and also I attempted
to work around the situation that crashes it. (It seems to point in the
direction of random memory corruption, though.)
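
Since the drain is the interesting part, here is a hedged, standalone
libswresample example (not mpv's resampling filter; rates, layouts and
buffer sizes are arbitrary, and the option names follow the classic
setup, while newer FFmpeg prefers the AVChannelLayout-based options):
calling swr_convert() with a NULL input flushes the samples the
resampler still buffers, so nothing is dropped before reconfiguring.

    #include <stdio.h>
    #include <stdint.h>
    #include <libswresample/swresample.h>
    #include <libavutil/opt.h>
    #include <libavutil/channel_layout.h>
    #include <libavutil/samplefmt.h>

    int main(void)
    {
        // 48 kHz -> 44.1 kHz stereo S16.
        SwrContext *swr = swr_alloc();
        av_opt_set_int(swr, "in_channel_layout",  AV_CH_LAYOUT_STEREO, 0);
        av_opt_set_int(swr, "out_channel_layout", AV_CH_LAYOUT_STEREO, 0);
        av_opt_set_int(swr, "in_sample_rate",  48000, 0);
        av_opt_set_int(swr, "out_sample_rate", 44100, 0);
        av_opt_set_sample_fmt(swr, "in_sample_fmt",  AV_SAMPLE_FMT_S16, 0);
        av_opt_set_sample_fmt(swr, "out_sample_fmt", AV_SAMPLE_FMT_S16, 0);
        if (swr_init(swr) < 0)
            return 1;

        // Feed one block of (silent) interleaved input.
        static int16_t in[1024 * 2], out[4096 * 2];
        const uint8_t *in_planes[1]  = { (const uint8_t *)in };
        uint8_t       *out_planes[1] = { (uint8_t *)out };
        int got = swr_convert(swr, out_planes, 4096, in_planes, 1024);
        printf("converted: %d samples\n", got);

        // Drain: passing NULL input flushes whatever the resampler still
        // holds internally, so nothing is silently dropped before we
        // reconfigure or change the playback speed.
        int n;
        while ((n = swr_convert(swr, out_planes, 4096, NULL, 0)) > 0)
            printf("drained: %d samples\n", n);

        swr_free(&swr);
        return 0;
    }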