| Commit message | Author | Age |
| |
|
|
|
|
|
|
|
|
|
|
| |
When quvi is installed, git-annex addurl automatically uses it to detect
when a page is a video, and downloads the video file.
web special remote: Also support using quvi, for getting files,
or checking whether files exist on the web.
This commit was sponsored by Mark Hepburn. Thanks!
|
| |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
This is a simple approach for setting up a mirroring repository.
It will work with any type of remotes.
In general, mirror --from is more expensive than mirror --to.
OTOH, mirror --from will get a file from any remote that has it, not only
the named mirror remote. So if the named mirror remote is not the fastest
available remote with the file, that can speed things up.
It would be possible to make the assistant or watch command do a more
dynamic mirroring, that didn't need to scan every time.
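As an illustration of the semantics only (not git-annex's actual code), mirroring
makes the target repository's set of present files match the source's: get what
the source has, drop what it lacks. A rough Haskell sketch, where Key, Presence,
and mirrorPlan are hypothetical stand-ins:

    type Key = String

    -- Hypothetical stand-in for querying which keys a repository has.
    newtype Presence = Presence { hasKey :: Key -> Bool }

    -- Plan the actions needed to make the target match the source.
    mirrorPlan :: Presence -> Presence -> [Key] -> [(Key, String)]
    mirrorPlan source target = map plan
      where
        plan k
          | hasKey source k && not (hasKey target k) = (k, "get")
          | hasKey target k && not (hasKey source k) = (k, "drop")
          | otherwise                                = (k, "no change")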
|
|
|
|
|
|
|
|
| |
Note that --deduplicate currently checksums each file twice,
once to see if it's a known key, and once when importing it.
Perhaps this could be revisited and the extra checksum gotten rid of,
at the cost of not locking down the file when adding it.
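A rough sketch of the single-checksum alternative mentioned above (illustrative
only, not git-annex code; checksumFile shells out to sha256sum purely as an
example): checksum each file once, skip files whose checksum matches a known
key, and carry the result along for the import step. As noted, the tradeoff is
that the file is not locked down between checksumming and importing.

    import qualified Data.Set as S
    import System.Process (readProcess)

    -- Illustrative checksum via the sha256sum utility.
    checksumFile :: FilePath -> IO String
    checksumFile f = takeWhile (/= ' ') <$> readProcess "sha256sum" [f] ""

    -- Checksums each file only once; duplicates of known keys are skipped,
    -- and the computed checksum is reused for the import step.
    deduplicateImport :: S.Set String -> [FilePath] -> IO [(FilePath, String)]
    deduplicateImport known files = concat <$> mapM step files
      where
        step f = do
            c <- checksumFile f
            return [ (f, c) | c `S.notMember` known ]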
|
|\
| |
| |
| |
| | |
Conflicts:
debian/changelog
|
| | |
|
|/
|
|
|
| |
The other two options are harder, due to needing to get the key for a file
before adding it.
|
| |
|
|
|
|
| |
feed has repeatedly had problems for at least 1 day.
|
| |
|
|
|
|
|
| |
The latter is harder for me to remember, but avoids build failures in code
used by the configure program.
|
|
|
|
| |
duplication
|
| |
|
| |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
maximum filename length limit.
Started with a problem when running addurl on a really long url,
because the whole url is munged into the filename. Ended up doing
a fairly extensive review for places where filenames could get too large,
although it's hard to say I haven't missed any.
Backend.Url had a 128 character limit, which is fine when the limit is 255,
but not if it's a lot shorter on some systems. So check the pathconf()
limit. Note that this could result in fromUrl creating different keys
for the same url, if run on systems with different limits. I don't see
that this is likely to cause any problems. That can already happen when using
addurl --fast, or if the content of an url changes.
Both Command.AddUrl and Backend.Url assumed that urls don't contain a
lot of multi-byte unicode, and would fail to properly truncate an url
that did.
A few places use a filename as the template to make a temp file.
While that's nice in that the temp file name can be easily related back to
the original filename, it could lead to `git annex add` failing to add a
filename that was at or close to the maximum length.
Note that in Command.Add.lockdown, the template is still derived from the
filename, just with enough space left to turn it into a temp file.
This is an important optimisation, because the assistant may lock down
a bunch of files all at once, and using the same template for all of them
would cause openTempFile to iterate through the same set of names,
looking for an unused temp file. I'm not very happy with the relatedTemplate
hack, but it avoids that slowdown.
Backend.WORM does not limit the filename stored in the key.
I have not tried to change that; so git annex add will fail on really long
filenames when using the WORM backend. It seems better to preserve the
invariant that a WORM key always contains the complete filename, since
the filename is the only unique material in the key, other than mtime and
size. Since nobody has complained about add failing (I think I saw it
once?) on WORM, probably it's ok, or nobody but me uses it.
There may be compatibility problems if using git annex addurl --fast
or the WORM backend on a system with the 255 limit and then trying to use
that repo in a system with a smaller limit. I have not tried to deal with
those.
This commit was sponsored by Alexander Brem. Thanks!
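For illustration, a minimal sketch (not the actual git-annex code) of byte-aware
truncation: drop whole characters until the encoded form fits the limit, so
multi-byte unicode is never split mid-character. It assumes UTF-8 as the
filesystem encoding; the real limit would come from pathconf() on the directory
in question.

    import qualified Data.ByteString as B
    import qualified Data.Text as T
    import qualified Data.Text.Encoding as TE

    -- Truncate a filename so its UTF-8 form fits within a byte limit,
    -- dropping whole characters so multi-byte sequences are never split.
    truncateToBytes :: Int -> String -> String
    truncateToBytes limit = go . T.pack
      where
        go t
            | B.length (TE.encodeUtf8 t) <= limit = T.unpack t
            | otherwise = go (T.init t)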
|
| |
|
|
|
|
| |
Returned the possibly non-unique file
|
|
|
|
|
|
|
| |
When there's no extension, don't use "none", but "".
When there is an extension, it starts with a dot, so don't put a redundant
dot in the default format.
|
| |
|
|
|
|
|
|
| |
filesystem encoding to the rescue once more!
IIRC this was the main bug in hpodder.
|
| |
|
| |
|
| |
|
|
|
|
|
|
|
|
|
|
| |
unless you use the --force.
This was the last place in git-annex that could remove data referred to by
the git history, without being forced.
Like drop, dropunused checks remotes, and honors the global annex.numcopies
setting. (However, .gitattributes settings cannot apply to unused files.)
|
| |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
In direct mode, it's best whenever possible not to move direct mode files
out of the way, so I made unannex avoid touching the direct mode file at
all.
That actually turns out to be easy, because in direct mode, unlike indirect
mode, the pre-commit hook won't get confused if the unannexed file later
gets added back by git add. So there's no need to commit the unannex right
away; it can be staged for the user to commit later. This also means that
unannex in direct mode is a lot faster than in indirect mode!
Another subtle bit is the bookkeeping that is done when unannexing a direct
mode file. The inode cache needs to be removed so that when uninit runs
getKeysPresent, it doesn't see the cache and think the key is still
present and crash when it's not.
This commit is sponsored by Douglas Butts. Thanks!
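To make the bookkeeping point concrete, a hypothetical sketch (the types and
functions are stand-ins, not git-annex's real ones): if presence is judged by
either the object file existing or the inode cache claiming a direct mode file
still holds the content, then a cache entry left behind by unannex makes the
key look present after the content is gone.

    import qualified Data.Map as M
    import System.Directory (doesFileExist)

    type Key = String

    -- Hypothetical: keys mapped to direct mode work tree files believed
    -- to still contain their content.
    type InodeCache = M.Map Key FilePath

    -- A key looks present if its object file exists, or if the inode
    -- cache still claims a direct mode file holds it; a stale cache
    -- entry makes the second check lie.
    keyPresent :: InodeCache -> (Key -> FilePath) -> Key -> IO Bool
    keyPresent cache objectPath k = do
        inObjects <- doesFileExist (objectPath k)
        return (inObjects || M.member k cache)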
|
|
|
|
| |
the work tree
|
|
|
|
| |
that old versions of files and deleted files are not deleted. Print a message with some suggested actions.
|
| |
|
|
|
|
| |
not correspond to any unused key.
|
|
|
|
| |
the url's path.
|
| |
|
|
|
|
| |
useful to put in a post-receive hook to make a repository automatically update its working copy when git annex sync or the assistant syncs with it.
|
|
|
|
| |
last run of git annex unused. Supported by fsck, get, move, copy.
|
| |
|
| |
|
|
|
|
|
|
| |
A common failure mode for direct mode has been for files to end up still
stored in indirect mode. While I hope that doesn't happen anymore, fsck
should deal with it.
|
|
|
|
| |
On old systems, it may need to be run as root.
|
| |
|
| |
|
| |
|
| |
|
|
|
|
|
|
|
| |
It's causing some problems on Windows, see
http://git-annex.branchable.com/bugs/windows_port_-_repo_can__39__t_pull_newly_added_files_/#comment-45df9748bba687d95e3c96b3877ea925
It only affected the WORM backend, and only for one release well over a
year ago, so it could well be bitrotted.
|
|
|
|
|
| |
This is because people continually whine about it, seemingly not aware
that data generally cannot be deleted from git anyway.
|
|
|
|
| |
branch on such a remote, instead of to synced/master. This makes it easier to clone from a bare git remote that has been populated with git annex sync or by the assistant.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
adding them.
This write permission frobbing is very appropriate in indirect mode,
since annexed objects are stored as immutably as can be managed. But not
in direct mode, where files should be able to be modified at any time.
In direct mode there are already sufficient guards, so there's no need to
prevent a file from being written to while it's being ingested. The inode cache
will detect (most) types of modifications, and the add will fail. Then a
re-add should be done. The assistant should get another inotify change
event, and automatically add the new version of the file.
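A minimal sketch of that kind of guard (illustrative; the names are hypothetical
and the real inode cache records more than this): snapshot the file's inode,
size, and mtime before ingesting, and fail the add if the snapshot no longer
matches afterwards.

    import System.Posix.Files (getFileStatus, fileID, fileSize, modificationTime)
    import System.Posix.Types (FileID, FileOffset, EpochTime)

    data Snapshot = Snapshot FileID FileOffset EpochTime
        deriving Eq

    snapshot :: FilePath -> IO Snapshot
    snapshot f = do
        s <- getFileStatus f
        return (Snapshot (fileID s) (fileSize s) (modificationTime s))

    -- Run the ingest action, then verify the file was not modified while
    -- it was being ingested; Nothing means the add should fail and be
    -- retried once the file settles.
    guardedAdd :: (FilePath -> IO a) -> FilePath -> IO (Maybe a)
    guardedAdd ingest f = do
        before <- snapshot f
        r <- ingest f
        after <- snapshot f
        return (if before == after then Just r else Nothing)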
|
|
|
|
| |
when a new repo is made.
|
|
|
|
| |
do not support hard links, but do support symlinks and other POSIX filesystem features.
|
| |
|