| Commit message | Author | Age |
|
|
|
|
|
|
|
|
| |
Use the demux_set_ts_offset() added in the previous commit to make each
timeline segment use timestamps according to its relative position
within the overall timeline. As a consequence, we don't need to care
about adjusting these timestamps anymore, and everything becomes simpler.
(Another minor but delicious nugget of sanity.)
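Roughly, the idea is (a sketch only; demux_set_ts_offset() is the function
named above, but its prototype and the segment struct here are assumptions,
not mpv's actual timeline code):

    struct demuxer;
    /* Assumed prototype of the function added in the previous commit. */
    void demux_set_ts_offset(struct demuxer *d, double offset);

    struct segment {
        double start;        /* where the segment begins on the global timeline */
        double source_start; /* where it begins inside its source file */
    };

    static void enter_segment(struct demuxer *d, const struct segment *seg)
    {
        /* After this, packet timestamps coming out of the demuxer already
         * match the overall timeline, so no per-packet rebasing is needed. */
        demux_set_ts_offset(d, seg->start - seg->source_start);
    }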
|
|
|
|
|
|
|
|
|
|
|
|
| |
When the audio format is not known yet and the audio chain is still
initializing, filter reinit will fail. Normally, attempts to
reinitialize filters at this stage should be rare (e.g. user commands
editing the filter chain). But it sometimes happened with track
switching in combination with the video code calling
update_playback_speed() at arbitrary times.
Get rid of the message by not trying to change the filters for the sake
of playback speed update while decoding is still being initialized.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Actually, it didn't really require that before (most work was avoided),
but some bits had to be run anyway. Separate the speed change into a
light-weight function, which merely updates already created filters, and
a heavy-weight one which messes with filter insertion.
This also happens to fix the case where the filters would "forget" the
current speed (force resampling, change speed, hit a volume control to
force af_volume insertion - it will reset speed and desync).
Since we now always run the light-weight function, remove the
af_scaletempo verbose message that is printed on speed setting. Other
than that, all setters are cheap.
|
|
|
|
|
|
| |
We still have a sample-based buffer between filters and audio outputs.
In order to avoid cutting frames in half (which can upset receivers),
we strictly need to align the boundaries on which we cut the audio.
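For example, a requested cut can be rounded down to a whole number of frames
(a minimal sketch, not the actual buffer code):

    /* Round a requested cut (in samples) down to a whole number of frames,
     * so a compressed frame is never split. For plain PCM, frames are one
     * sample long and any cut is fine. */
    static int align_cut(int samples_to_cut, int samples_per_frame)
    {
        if (samples_per_frame <= 1)
            return samples_to_cut;
        return samples_to_cut - samples_to_cut % samples_per_frame;
    }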
|
|
|
|
| |
Commit 49d94853 worked only at the start of playback.
|
|
|
|
|
|
|
|
|
|
| |
Discontinuities (like toggling fullscreen) can cause multiple frames to
be dropped in succession, which sounds very weird. It's better to drop
some video frames instead to compensate for larger desyncs.
We roughly base the threshold on the maximum allowed speed changes (the
audio change is "additional" to the video change, to account for
deviations when already playing at the maximum video speed change).
|
|
|
|
| |
It's annoying.
|
|
|
|
|
|
| |
It's not needed, because the additional data is not appended, but is the
total size of the audio buffer. The maximum size is the static audio
drop size (or twice, if the audio is duplicated).
|
|
|
|
| |
Not very robust at the moment.
|
|
|
|
|
| |
This was done for symmetry with adjust_sync(). But mpctx->delay is
always 0 at this point, so prefer slightly simpler code.
|
|
|
|
|
| |
Pretty dumb (and doesn't handle pausing or other discontinuities), but
at least somewhat idiot-proof.
|
|
|
|
|
|
|
|
| |
The previous commit handled not falling back to normal decoding if the
AO was reloaded (I think...), and this tries to re-engage spdif pass-
through if it was previously falling back to normal decoding (e.g.
because it temporarily switched to an audio device incapable of
passthrough).
|
|
|
|
| |
Makes the spdif automagic work better on audio hotplugging.
|
|
|
|
|
|
|
|
|
| |
The manpage entry explains this.
(Maybe this behavior could always be enabled, and the option removed. I
don't quite remember what valid use-cases there are for just disabling
audio entirely, other than that this is also needed for audio decoder
init failure.)
|
|
|
|
|
|
|
| |
This should avoid unnecessary sleeping when audio playback start resync
has finished and goes into the normal playback state.
This is tricky; see e.g. commit 402fe381.
|
|
|
|
|
|
|
|
|
|
| |
For video sync, we want separate playback speed controls for user-
requested speed and the "correction" speed for video timing. Further, we
use this separation to make sure only a resampler is inserted if
playback speed is only changed for video sync correction.
As of this commit, this is basically inactive code. It's just
preparation for the video sync code (the following commit).
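Conceptually (a sketch; the field names are illustrative, not mpv's actual
ones):

    /* Two knobs instead of one: the product is what the audio chain runs at,
     * but knowing *why* the speed differs from 1.0 lets us pick the right
     * filter (a resampler for tiny video-sync corrections, scaletempo for
     * user speed changes). */
    struct speed_state {
        double user_speed;      /* what the user asked for ("speed" property) */
        double video_sync_corr; /* small factor applied for video timing */
    };

    static double effective_audio_speed(const struct speed_state *s)
    {
        return s->user_speed * s->video_sync_corr;
    }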
|
|
|
|
|
|
|
|
|
|
| |
Commit c5818046 fixed one case of audio EOF handling, and caused a new
one. This time, the ao_buffer doesn't actually contain everything that
should be played - because if --end is used, only a part of it is
played. Of course this is stupid, and it will be changed later. For now,
this smaller change fixes the bug.
Fixes #2189.
|
|
|
|
|
|
|
|
|
| |
time_frame is when the next video frame should be shown. It's normally
overwritten by the video timing code. This also says something about
"nosound mode" (--no-audio today), but at least these days we don't use
it at all if video is disabled.
Remove it; it likely has no function at all.
|
| |
|
|
|
|
|
|
|
|
|
|
| |
In paused mode, we never entered the audio EOF state. This shows up e.g.
in --keep-open mode, which will not set the eof-reached property
correctly. Regression since commit c06cd1b9; that commit was the wrong
fix. We need to respect the buffer state, and pausing has nothing to do
with it.
Fixes #2167.
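The EOF condition should depend on the decoder and buffer state only, not on
whether playback is paused; roughly (a sketch with made-up names):

    #include <stdbool.h>

    /* Audio is at EOF iff the decoder has nothing left *and* everything
     * that was buffered has been consumed. Being paused is irrelevant. */
    static bool audio_is_eof(bool decoder_eof, int buffered_samples)
    {
        return decoder_eof && buffered_samples == 0;
    }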
|
|
|
|
|
|
|
|
|
|
|
| |
Replace all the check macros with function calls. Give them all the
same case and naming scheme.
Drop af_fmt2bits(). Only af_fmt2bps() survives as af_fmt_to_bytes().
Introduce af_fmt_is_pcm(), and use it in situations that used
!AF_FORMAT_IS_SPECIAL. Nobody really knew what a "special" format
was. It simply meant "not PCM".
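A call site then reads roughly like this (the two functions are the ones
named above; the prototypes and the surrounding code are assumptions):

    /* Assumed prototypes; the real ones live in mpv's audio format header. */
    int af_fmt_to_bytes(int format);  /* bytes per sample, was af_fmt2bps() */
    int af_fmt_is_pcm(int format);    /* replaces !AF_FORMAT_IS_SPECIAL(fmt) */

    static int frame_size(int format, int channels)
    {
        if (!af_fmt_is_pcm(format))
            return 0;  /* spdif & friends: no meaningful per-sample size */
        return af_fmt_to_bytes(format) * channels;
    }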
|
|
|
|
|
| |
We must be sure that every change comes with a notification. Otherwise,
some property changes could possibly be missed.
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
This provides a new method for enabling spdif passthrough. The old
method via --ad (--ad=spdif:ac3 etc.) is deprecated. The deprecated
method will probably stop working at some point.
This also supports PCM fallback. One caveat is that it will lose at
least 1 audio packet in doing so. (I don't care enough to prevent this.)
(This is named after the old S/PDIF connector, because it uses the same
underlying technology as far as the higher level protocol is concerned.
Also, the user should be reminded that passthrough is backwards.)
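The fallback amounts to something like this (a sketch; none of these helper
names are mpv's actual internals):

    struct ad_ctx { int want_spdif; };
    /* Hypothetical helpers standing in for decoder (re)creation. */
    int try_spdif_decoder(struct ad_ctx *c);
    int try_pcm_decoder(struct ad_ctx *c);

    static int init_audio_decoder(struct ad_ctx *c)
    {
        if (c->want_spdif && try_spdif_decoder(c))
            return 1;
        /* Passthrough failed (or was rejected by the AO): fall back to PCM.
         * The packet consumed while probing is not replayed, which is the
         * "lose at least 1 audio packet" caveat mentioned above. */
        c->want_spdif = 0;
        return try_pcm_decoder(c);
    }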
|
|
|
|
|
|
|
| |
This makes no sense, because the format can't be converted anyway. It
just sets up the filter chain init code, which will vomit a bunch of
useless and confusing messages. So uninit and fail explicitly when this
happens.
|
|
|
|
|
|
|
|
| |
When starting in paused mode, no audio is written to the device at all,
because writing audio implicitly unpauses the AO. If the file is very
small, and all audio fits within the AO buffer, this accidentally
triggered the EOF condition. (In unpaused mode, it would write all
audio, end playback, and then wait until the AO has everything played.)
|
|
|
|
|
|
|
| |
This was a cosmetic issue. It's handled differently now (clamping the
display time to known duration range).
This reverts commit 33b57f55573e658b3af6c6e8ff3188c8f959e82e.
|
|
|
|
|
|
|
|
|
|
|
| |
Commit 10915000 attempted to fix wasting CPU when resyncing and no new
data was actually coming from the demuxer. The fix assumed that at this
point it would have reached the sync point, but since the code attempts
weird incremental decoding, this wasn't actually true. So it broke
seeking in addition to removing the CPU waste.
Try something else. This time, we essentially only wake up again if
data was read (i.e. audio_decode() returned successfully).
|
|
|
|
|
|
|
|
|
| |
This code path happens during seeking. If video is still being decoded
to get to the first video frame, audio has nothing to do, as it is
synchronized against the first video frame. We only want to wake up if
there's an actual state change.
Fixes #1958.
|
|
|
|
| |
Signed-off-by: wm4 <wm4@nowhere>
|
|
|
|
|
|
|
|
|
|
| |
The af_add() function has a problem: if the inserted filter returns
AF_DETACH during init, the function will have a dangling pointer. Until
now this was avoided by making sure none of the used filters actually
return AF_DETACH, but it's getting infeasible.
Solve this by requiring passing a unique label to af_add(), which is
then used instead of the pointer.
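So callers stop holding on to the returned pointer and instead look the
filter up again by its label when they need it (a sketch; the prototypes
below are assumptions, not mpv's exact API):

    struct af_stream;
    struct af_instance;
    /* Assumed prototypes. */
    int af_add(struct af_stream *s, const char *name, const char *label,
               char **args);
    struct af_instance *af_find_by_label(struct af_stream *s, const char *label);

    static struct af_instance *ensure_scaletempo(struct af_stream *s)
    {
        if (af_add(s, "scaletempo", "playback-speed", NULL) < 0)
            return NULL;
        /* Even if chain reconfiguration detached and destroyed the new
         * instance (AF_DETACH), we never touch a stale pointer: the label
         * lookup simply returns NULL. */
        return af_find_by_label(s, "playback-speed");
    }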
|
|
|
|
|
|
| |
Only reinit filters if it's actually needed. This is also slightly
easier to understand: if you look at the code, it should now be more
obvious why a reinit is needed (hopefully).
|
|
|
|
|
|
|
|
|
|
|
| |
Precise seeking requires skipping audio, since the demuxer usually
doesn't seek precisely enough. There is a sanity check that prevents
skipping more than 300 seconds of audio. This still fails with very
large mp3s. For example, with a 1GB sized mp3 with Xing headers, entries
will be 4 MB apart on average, and occasionally much more.
Just bump the limit. I'm not even sure why it was added in the first
place; I suppose it's most important for files with real PTS resets.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
When playback is started after seeking or opening a file, we need to
make sure audio and video line up exactly. This is done by cutting or
padding the audio stream to start on the video PTS.
This does not quite work with spdif: audio is compressed data, within a
spdif frame. There is no way to cut the audio "in between" the frames.
Cutting between the frames would just produce broken spdif packets, and
who knows how receivers will react to this (play noise?). But we can
still cut it at frame boundaries.
Unfortunately, we also insert 0 data for "silence" - we probably
shouldn't do this. Chances are the receiver will switch to PCM or so.
But for now this will have to do.
Note that this could be simplified somewhat, as soon as we work with
frames. See previous commit.
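In sample terms, the cut/pad amount is derived from the PTS difference and
then aligned (a sketch, not the actual code):

    /* How many samples to skip (positive) or pad with silence (negative)
     * so audio starts at the video PTS, rounded so spdif frames are never
     * split. Sketch only. */
    static long sync_cut_samples(double audio_pts, double video_pts,
                                 int samplerate, int samples_per_spdif_frame)
    {
        long cut = (long)((video_pts - audio_pts) * samplerate);
        if (samples_per_spdif_frame > 1)
            cut -= cut % samples_per_spdif_frame; /* stay on frame boundaries */
        return cut;
    }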
|
|
|
|
|
|
|
|
|
|
|
|
| |
Handle the failure gracefully, instead of exploding and disabling audio.
Just set the speed back to 1.0.
Also remove the AF_DETACH from af_scaletempo. This actually created a
dangling pointer in af_add(), a tricky consequence of af_add()
reconfiguring the filter chain and the newly added filter using
AF_DETACH. Fortunately the AF_DETACH is not needed (and probably never
worked - it comes from MPlayer times, and MPlayer also disables audio
when trying to change speed with spdif).
|
|
|
|
|
|
|
|
|
|
|
| |
Always use af_scaletempo if it's inserted, even if the option
--audio-pitch-correction=no is set.
Make sure all filters are reset on speed change. It's conceivable that
dynamic changes to the filter chain at runtime leave filters around
without resetting their speed parameters.
Also move the code to a separate function.
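That separate function boils down to: reset every filter, then hand the speed
to scaletempo if it is present, otherwise to the resampler (a sketch with
made-up helpers; only the filter names come from these commits):

    struct af_chain;
    /* Hypothetical helpers; a NULL filter name means "every filter". */
    int  chain_has(struct af_chain *c, const char *filter);
    void chain_set_speed(struct af_chain *c, const char *filter, double speed);

    static void update_speed_filters(struct af_chain *c, double speed)
    {
        /* Reset first, so no filter keeps a stale speed after chain edits. */
        chain_set_speed(c, NULL, 1.0);
        if (chain_has(c, "scaletempo"))
            chain_set_speed(c, "scaletempo", speed);   /* pitch-corrected */
        else
            chain_set_speed(c, "lavrresample", speed); /* plain resampling */
    }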
|
|
|
|
|
|
|
|
|
| |
If the audio decoder was created, but no audio filter chain created yet
(still trying to decode a first audio frame), setting the "speed"
property could explode. It tried to recreate the filter chain, even
though no format was set yet.
This is inconvenient and should not happen.
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Although the libraries we use for resampling (libavresample and
libswresample) do not support changing the sample rate on the fly, this
makes it easier to make sure no audio buffers are implicitly dropped. In
fact, this commit adds additional code to drain the resampler explicitly.
Changing speed twice without feeding audio in-between made it crash
with libavresample in certain cases (libswresample is fine). This is
probably a libavresample bug. Hopefully it will be fixed; I also
attempted to work around the situation that crashes it. (It seems to
point in the direction of random memory corruption, though.)
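With libswresample, draining before reconfiguration is done by calling
swr_convert() with a NULL input, which flushes the internally buffered
samples (a minimal sketch; buffer setup and error handling omitted):

    #include <stdint.h>
    #include <libswresample/swresample.h>

    /* Flush whatever the resampler still holds before it is torn down and
     * recreated for the new speed/sample rate. Returns the number of
     * samples written to out, or a negative error code. */
    static int drain_resampler(struct SwrContext *swr,
                               uint8_t **out, int out_max_samples)
    {
        /* NULL input + 0 count = "no new input, give me what's buffered". */
        return swr_convert(swr, out, out_max_samples, NULL, 0);
    }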
|
|
|
|
|
|
| |
In my opinion the artifacts created by af_scaletempo on extreme slowdown
(50% or so) are too bothersome - but users disagree. So use
af_scaletempo on any speed changes, not just on speedup.
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
This avoids potentially dropping some small amount of audio data
buffered in filters.
Reinit can be skipped only if the filter is af_scaletempo (which maps to
AF_CONTROL_SET_PLAYBACK_SPEED). The other case using af_lavrresample is
much more complicated due to filter chain politics.
Also, changing speed between 1.0 and something higher typically inserts
or removes the filter, so this obviously requires reinitialization. It
can be prevented by forcing the filter with --af=scaletempo.
|
|
|
|
|
|
| |
I guess this was supposed to be some sort of optimization, but even
though it probably works, it's pretty meaningless and I couldn't measure
a difference. One special case killed.
|
|
|
|
| |
Just minor things.
|
|
|
|
|
|
| |
mpctx->audio_delay always has the same value as opts->audio_delay. (This
was not the case a long time ago, when the audio-delay property didn't
actually write to opts->audio_delay. I think.)
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Some files can have audio after video has ended, and playback of the
audio-only remainder is supposed to work just fine.
Seeking is broken-ish though. Not much can be done about this, since
it's the way demuxers work. Also, such files are obscure corner cases.
But enabling hr-seek for audio after video end can improve the situation
a lot.
This helps with issue #1533. The reporter also provided a command line
to produce such a file:
ffmpeg -i image.jpg -i audio.flac -threads $(nproc) \
-c:v libvpx -crf 10 -qmin 5 -qmax 55 \
-vf scale=360:-1 -sws_flags lanczos -c:a libvorbis -ac 2 \
-b:a 128K out.webm
|
|
|
|
|
|
| |
The existing code only ignored --audio-channels, but not --audio-rate or
--audio-format if spdif passthrough is used. Setting these makes no
sense.
|
|
|
|
|
|
| |
This makes it retry later.
Fixes #1474.
|
|
|
|
|
|
|
| |
This was forgotten when the option was implemented, and makes this
option work as advertised.
Fixes #1473 (though the default behavior is probably still stupid).
|
|
|
|
|
|
|
| |
This is a somewhat obscure situation, and happens only if audio starts
again after it has ended (in particular, it can happen with files where
audio starts later). It doesn't matter much whether audio starts
immediately or some milliseconds later, so simplify it.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
When playing paused, the amount of decoded audio is limited to a small
amount (1 sample), because we don't write any audio to the AO when
paused. The small amount could trigger the case of the wanted audio
being too far in the future in the PTS sync code, which set the audio
status to STATUS_DRAINING, which in turn triggered the EOF code in the
next iteration. This was ok, but unfortunately, this triggered another
retry in order to check resuming from EOF by setting the status to
STATUS_SYNCING, which in turn led to a busy loop by alternating
between the 2 states. So don't try resyncing while paused.
Since the PTS syncing code also calls ao_reset(), this could cause the
pulseaudio daemon to consume some CPU time as well.
This was caused by commit 33b57f55. Before that, the playloop was merely
run more often, but didn't cause any problems.
Fixes #1288.
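The fix boils down to a guard in the state machine (a sketch; STATUS_SYNCING
and STATUS_DRAINING are the states named above, the rest is illustrative):

    enum audio_status { STATUS_SYNCING, STATUS_DRAINING, STATUS_PLAYING };

    /* Only go back to SYNCING when actually playing. While paused, no audio
     * is written out, so the "wanted audio too far in the future" condition
     * can never resolve itself and the two states would just alternate. */
    static void maybe_retry_resync(enum audio_status *status, int paused)
    {
        if (*status == STATUS_DRAINING && !paused)
            *status = STATUS_SYNCING;
    }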
|
|
|
|
| |
Simpler overall.
|
|
|
|
|
|
|
| |
We absolutely need to clear the AO reference in the mixer.
The audio_status must be changed to a state where no code assumes that
the AO is available. (It's allowed to do this blindly.)
|