| Commit message | Author | Age |
|
|
|
|
|
|
| |
This only makes sense for public repos that are not chunked, so
that there's a 1:1 mapping from a Key in the git-annex repo to a file on the remote.
Rather than making every remote implementation deal with that, just disable
whereisKey when it doesn't make sense.
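A rough sketch of the gating idea (the field and helper names below are made
up for illustration, not git-annex's actual Remote interface):

    -- Hypothetical types, not the real Types.Remote definitions.
    data Remote = Remote
        { whereisKey :: Maybe (String -> IO [String])  -- Key -> urls on the remote
        , isPublic   :: Bool
        , isChunked  :: Bool
        }

    -- Drop whereisKey unless there is a 1:1 mapping from Key to file.
    adjustWhereis :: Remote -> Remote
    adjustWhereis r
        | isPublic r && not (isChunked r) = r
        | otherwise = r { whereisKey = Nothing }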
|
|
|
|
| |
This is needed when external special remotes register an url for a key.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
where git-annex downloads content from the remote using regular http.
Note that, if an url is added to the web log for such a remote, it's not
distinguishable from another url that might be added for the web remote.
(Because the web log doesn't distinguish which remote owns a plain url.
Urls with a downloader set are distinguishable, but we're not using them
here.)
This seems ok-ish. In such a case, both remotes will try to use both
urls, and both remotes should be able to.
The only issue I see is that dropping a file from the web remote will
remove both urls in this case. This is not often done, and could even
be considered a feature, I suppose.
|
| |
|
| |
|
| |
|
| |
|
|
|
|
|
|
|
| |
Note that I had one in Annex.Action.startup too, but it resulted in a weird
message printed by ssh, "channel 2: bad ext data". I don't know why, but
it only happened when transferinfo was run, so I wonder
if 1d71ad072e13c8ed1cb8b34367b57d59e651f0a9 introduced a fragility somehow.
|
| |
|
|
|
|
| |
overhead 6x.
|
| |
|
|
|
|
|
|
|
|
| |
using the cryptonite library.
While cryptohash has SHA3 support, it has not been updated for the final
version of the spec. Note that cryptonite has not been ported to all arches
that cryptohash builds on yet.
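For reference, the cryptonite call involved is along these lines (library
usage only, not the git-annex backend code):

    import Crypto.Hash (hashWith, SHA3_256(SHA3_256))
    import qualified Data.ByteString.Char8 as B

    -- Prints the final-spec SHA3-256 digest of a short string.
    main :: IO ()
    main = print (hashWith SHA3_256 (B.pack "hello, world"))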
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Now it suffices to run git remote add, followed by git-annex sync. The
remote is automatically initialized for use by git-annex, where before the
git-annex branch had to be pushed manually before using git-annex sync.
Note that this involved changes to git-annex-shell, so if the remote is
using an old version, the manual push is still needed.
The implementation required changing git-annex-shell so that configlist can
autoinit a repository even when no git-annex branch has been pushed yet.
That's unfortunate, because we'll have to wait for it to get deployed to
servers before being able to rely on this change in the documentation.
Did consider making git-annex sync push the git-annex branch to repos that
didn't have a uuid, but this seemed difficult to do without complicating it
in messy ways.
It would be cleaner to split a command out from configlist to handle
the initialization. But that is difficult without sacrificing backwards
compatibility for users of old git-annex versions, which would not use the
new command.
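The gist of the autoinit, as a standalone sketch (not the actual
git-annex-shell code; it only shows the check-then-init idea):

    import Control.Monad (when)
    import System.Exit (ExitCode(..))
    import System.Process (callProcess, readProcessWithExitCode)

    -- If the repository has no annex.uuid yet, initialize it before
    -- listing the config, so the caller picks up a freshly assigned uuid.
    main :: IO ()
    main = do
        (code, _, _) <- readProcessWithExitCode "git" ["config", "annex.uuid"] ""
        when (code /= ExitSuccess) $
            callProcess "git" ["annex", "init"]
        callProcess "git" ["config", "--list"]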
|
| |
|
| |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
resuming after the last successfully uploaded chunk.
"checkPresent baser" was wrong; the baser has a dummy checkPresent action
not the real one. So, to fix this, we need to call preparecheckpresent to
get a checkpresent action that can be used to check if chunks are present.
Note that, for remotes like S3, this means that the preparer is run,
which opens a S3 handle, that will be used for each checkpresent of a
chunk. That's a good thing; if we're resuming an upload that's already many
chunks in, it'll reuse that same http connection for each chunk it checks.
Still, it's not ideal, since this is a different http connection than the
one that will be used to upload chunks. It would be nice to improve the API
so that both use the same http connection.
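Roughly, the resume logic amounts to something like this (a simplified
sketch, not the real Remote.Helper.Chunked code; the prepared checkpresent
action is what the fix makes available):

    -- Skip chunks the remote already has, uploading only the rest.
    -- The checkpresent action passed in is the one produced by the
    -- preparer, so it can reuse whatever connection was set up.
    resumeChunks :: (k -> IO Bool)  -- prepared checkpresent for one chunk
                 -> (k -> IO ())    -- upload one chunk
                 -> [k]             -- chunk keys, in order
                 -> IO ()
    resumeChunks checkpresent upload = mapM_ $ \k -> do
        present <- checkpresent k
        if present
            then return ()
            else upload k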
|
| |
|
|
|
|
| |
versions of tahoe create-client choking.
|
|
|
|
|
|
| |
Note that it's possible for an S3 bucket to be configured to allow public
access, but for git-annex to not know that it is. I chose to not show the
url unless public=yes.
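The url in question has the usual S3 form; building it is trivial
(illustrative sketch only, not the actual Remote.S3 code):

    -- Objects in a public-read bucket are reachable over plain https.
    publicUrl :: String -> String -> String
    publicUrl bucket object =
        "https://" ++ bucket ++ ".s3.amazonaws.com/" ++ object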
|
| |
|
|
|
|
|
|
|
|
|
|
|
| |
In my tests, this has to be set when uploading a file to the bucket;
the file can then be accessed using the bucketname.s3.amazonaws.com
url.
Setting it when creating the bucket didn't seem to make the whole bucket
public, or allow accessing files stored in it. But I have gone ahead and
also sent it when creating the bucket, just in case that is needed in some
case.
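A sketch of attaching the ACL to an individual upload, assuming the aws
package's S3 interface (the exact field and constructor names used here,
poAcl and AclPublicRead, are from memory and may not match the version in
use):

    import qualified Aws.S3 as S3
    import qualified Data.Text as T
    import Network.HTTP.Client (RequestBody)

    -- Mark one object public-read as it is uploaded; this is what makes
    -- the bucketname.s3.amazonaws.com url for it readable.
    publicPut :: S3.Bucket -> T.Text -> RequestBody -> S3.PutObject
    publicPut bucket object body =
        (S3.putObject bucket object body) { S3.poAcl = Just S3.AclPublicRead }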
|
|
|
|
| |
Split S3Info out of S3Handle and added some stubs
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
This removes a bit of complexity, and should make things faster
(it avoids tokenizing the Params string) and probably involves less garbage
collection.
In a few places it was useful to use Params to avoid needing a list,
but that is easily avoided.
Problems noticed while doing this conversion:
* Some uses of Params "oneword", which was entirely unnecessary
  overhead.
* A few places that built up a list of parameters with ++
  and then used Params to split it!
Test suite passes.
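An illustration of the difference, using plain strings rather than
git-annex's CommandParam type (the git invocation here is made up):

    import System.Process (callProcess)

    main :: IO ()
    main = do
        -- Old style: roughly what Params "fetch --quiet origin" did,
        -- splitting a string on whitespace at run time.
        callProcess "git" (words "fetch --quiet origin")
        -- New style: one parameter per list element, no tokenizing,
        -- and values containing spaces cannot be mangled.
        callProcess "git" ["fetch", "--quiet", "origin"]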
|
|
|
|
|
|
|
|
| |
generate URL keys.
This is especially useful because the caller doesn't need to generate valid
url keys, which involves some escaping of characters, and may involve
taking an md5sum of the url if it's too long.
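To give a feel for why that's fiddly to do by hand, a toy version (the
escaping rules and the 120-character cutoff below are invented for
illustration; the real rules live in git-annex's URL key backend):

    import Crypto.Hash (hashWith, MD5(MD5))
    import Data.Char (isAlphaNum)
    import qualified Data.ByteString.Char8 as B

    -- Escape unsafe characters; fall back to an md5sum suffix when the
    -- url is too long to use directly in a key name.
    urlKeyName :: String -> String
    urlKeyName url
        | length escaped <= 120 = escaped
        | otherwise = take 100 escaped ++ "-" ++ show (hashWith MD5 (B.pack url))
      where
        escaped = map esc url
        esc c
            | isAlphaNum c || c `elem` "._-" = c
            | otherwise = '_'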
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
The one exception is in Utility.Daemon. As long as a process only
daemonizes once, which seems reasonable, and as long as it avoids calling
checkDaemon once it's already running as a daemon, the fcntl locking
gotchas won't be a problem there.
Annex.LockFile has its own separate lock pool layer, which has been
renamed to LockCache. This is a persistent cache of locks that persist
until closed.
This is not quite done; lockContent still needs to be converted.
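The LockCache idea, in miniature (a hypothetical sketch, not the real
Annex.LockPool or LockCache modules; it only shows the cache-by-path shape,
not the actual fcntl locking):

    import Control.Concurrent.STM
    import qualified Data.Map as M
    import System.IO (Handle, IOMode(ReadWriteMode), openFile)

    -- Handles are cached per path, so asking for the same lock twice
    -- reuses the open handle rather than opening and closing the file,
    -- which with POSIX fcntl locks would drop the lock.
    type LockCache = TVar (M.Map FilePath Handle)

    newLockCache :: IO LockCache
    newLockCache = newTVarIO M.empty

    cachedLockHandle :: LockCache -> FilePath -> IO Handle
    cachedLockHandle cache file = do
        m <- readTVarIO cache
        case M.lookup file m of
            Just h -> return h
            Nothing -> do
                h <- openFile file ReadWriteMode
                atomically $ modifyTVar' cache (M.insert file h)
                return h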
|
|
|
|
|
|
|
|
|
|
|
|
| |
used.
Only the assistant uses these, and only the assistant cleans them up, so
make only git annex transferkeys write them.
There is one behavior change from this. If glacier is being used, and a
manual git annex get --from glacier fails because the file isn't available
yet, the assistant will no longer later see that failed transfer file and
retry the get. Hope no-one depended on that old behavior.
|
|
|
|
| |
annex.diskreserve.
|
|\
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
Conflicts:
Command/Fsck.hs
Messages.hs
Remote/Directory.hs
Remote/Git.hs
Remote/Helper/Special.hs
Types/Remote.hs
debian/changelog
git-annex.cabal
|
| |
| |
| |
| | |
when using OverloadedStrings
|
| |
| |
| |
| |
| |
| | |
I've tested all the dataenc to sandi conversions except Assistant.XMPP,
and all have unchanged behavior, including behavior on large unicode code
points.
|
| |
| |
| |
| |
| |
| |
| |
| |
| | |
cannot handle upper-case bucket names. git-annex now converts them to lower case automatically.
For example, it failed to get files from a bucket named S3.
Also fixes `git annex initremote UPPERCASE type=S3`, which failed with a
signing error message when using the new aws library.
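The normalization itself is simple (sketch only; the real change is in
Remote.S3):

    import Data.Char (toLower)

    -- Bucket names derived from the remote name are lower-cased before
    -- use, since S3 (and the new aws library's request signing) cannot
    -- cope with upper-case bucket names.
    bucketName :: String -> String
    bucketName = map toLower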
|
| | |
|
| |
| |
| |
| | |
the bucket already exists.
|
| |
| |
| |
| | |
(endpoint, port, storage class)
|
| |
| |
| |
| | |
In particular, error should go to stderr
|
| |
| |
| |
| | |
To debug a bug report, but generally useful.
|
| |
| |
| |
| | |
It's a code smell, and can lead to hard-to-diagnose error messages.
|
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
special remote. This was a reversion caused by the relative path changes in 5.20150113.
The directory special remote was not affected in its normal configuration,
since annex-directory is an absolute path normally. But it could fail
when a relative path was used.
The git remote was affected even when an absolute path to it was used in
.git/config, since git-annex now converts all such paths to relative.
|
| |
| |
| |
| | |
This needed plumbing an AssociatedFile through retrieveKeyFileCheap.
|
| | |
|
|\|
| |
| |
| |
| | |
Conflicts:
debian/changelog
|
| | |
|
| | |
|
|/ |
|
|
|
|
|
|
|
|
|
|
| |
(eep!)
It sounds worse than it is. ;)
Some external special remotes may run commands that display progress on
stderr. If git-annex is run with --quiet, this should filter out such
displays while letting the errors through.
|
|
|
|
|
|
|
| |
Came up with a generic way to filter out progress messages while keeping
errors, for commands that use stderr for both.
--json mode will disable command outputs too.
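As a sketch of the idea (not the actual implementation; treating
carriage-return updates as progress is an assumption made for this example):

    import System.IO (hGetContents, hPutStrLn, stderr)
    import System.Process

    -- Run a command, reading its stderr ourselves. When quiet, keep only
    -- lines with no carriage returns in them (likely real errors) and
    -- drop \r-style progress updates.
    runFiltered :: Bool -> FilePath -> [String] -> IO ()
    runFiltered quiet cmd args = do
        (_, _, Just errh, p) <- createProcess (proc cmd args)
            { std_err = CreatePipe }
        output <- hGetContents errh
        let keep l = not quiet || '\r' `notElem` l
        mapM_ (hPutStrLn stderr) (filter keep (lines output))
        _ <- waitForProcess p
        return ()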
|
|
|
|
|
| |
Otherwise, progress displays would not be suppressed here when running with
--quiet. Interesting wrinkle!
|
| |
|
| |
|
|
|
|
|
|
|
| |
from logged url info before checking for the specified prefix.
This doesn't change what GETURLS returns, but only whether it matches
any prefix that the external special remote asked for.
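A sketch of the matching (hypothetical code; the downloader:url form here is
a simplification of how the url log records a downloader):

    import Data.List (isPrefixOf)

    -- Strip any "downloader:" tag recorded in front of a logged url, then
    -- test the prefix the external special remote asked about against the
    -- plain url.
    matchesPrefix :: String -> String -> Bool
    matchesPrefix prefix logged = prefix `isPrefixOf` stripDownloader logged
      where
        stripDownloader s = case break (== ':') s of
            (d, ':':rest)
                | not (null d) && '/' `notElem` d && not ("//" `isPrefixOf` rest)
                    -> rest
            _ -> s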
|