were not before.
Removed the instance LensGpgEncParams RemoteConfig because it encouraged
code that does not take the RemoteGitConfig into account.
RemoteType's setup was changed to take a RemoteGitConfig, although the
only place that is able to provide a non-empty one is enableremote, when
it's changing an existing remote. This led to several follow-on changes,
and got RemoteGitConfig plumbed through.
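Roughly, the change to RemoteType has this shape; the sketch below uses
simplified stand-in types, not git-annex's actual definitions:

    import qualified Data.Map as M

    -- Stand-in types approximating git-annex's real ones.
    type RemoteConfig = M.Map String String     -- settings from initremote/enableremote
    type RemoteGitConfig = M.Map String String  -- remote.<name>.annex-* git config
    type UUID = String

    data RemoteType m = RemoteType
            { typename :: String
            -- setup now also receives the RemoteGitConfig; only enableremote,
            -- operating on an existing remote, can pass a non-empty one.
            , setup :: RemoteConfig -> RemoteGitConfig -> m (RemoteConfig, UUID)
            }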
---
instead of the default DNS-style access.
untested
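The surviving text suggests this commit selects path-style bucket access
from the aws library. If so, the selection presumably looks something
like the sketch below; the RequestStyle names are from memory of aws's
Aws.S3.Core and, like the commit itself, untested:

    {-# LANGUAGE OverloadedStrings #-}
    import qualified Aws
    import qualified Aws.S3 as S3

    -- Request http://host/bucket/object rather than the default
    -- DNS-style http://bucket.host/object.
    pathStyleConfig :: S3.S3Configuration Aws.NormalQuery
    pathStyleConfig = (S3.s3 Aws.HTTPS "s3.example.com" False)
            { S3.s3RequestStyle = S3.PathStyle }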
---
Including in addurl and get --from web, but also in S3 and External
special remotes when a web url is known for content in those remotes.
---
* Fix failure to build with aws-0.13.0.
* When built with aws-0.13.0, the S3 special remote can be used to create
  Google Nearline buckets, by setting storageclass=NEARLINE.
---
Was using the http-only Manager before, not the TLS-capable one.
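The difference, as a minimal http-client sketch (these two constructors
are the standard API of http-client and http-client-tls):

    import Network.HTTP.Client (newManager, defaultManagerSettings)
    import Network.HTTP.Client.TLS (tlsManagerSettings)

    main :: IO ()
    main = do
            -- http-only: https:// requests fail with this Manager
            _httpOnly <- newManager defaultManagerSettings
            -- tls-capable: handles both http:// and https://
            _tlsCapable <- newManager tlsManagerSettings
            return ()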
---
Not implemented for any remotes yet; probably the git remote is the only
one that will ever implement it.
---
Since I want git-annex to keep building on Debian stable, I need to
still support the old http-client, which required explicit calls to
closeManager, or use of withManager, to get Managers to close at
appropriate times. This is not needed in the new version, and so they
added a deprecation warning. IMHO much too early, because look at the
mess I had to go through to avoid that deprecation warning while
supporting both versions.
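One way to support both versions without the warning is a small CPP
shim. This is a sketch of the pattern; the module name is illustrative
and the exact version bound is an assumption, not checked against
http-client's changelog:

    {-# LANGUAGE CPP #-}
    module Utility.HttpManagerShim (closeManager') where

    import Network.HTTP.Client

    #if MIN_VERSION_http_client(0,4,18)
    -- New http-client: Managers are closed by the garbage collector,
    -- so the shim is a no-op and the deprecated closeManager is unused.
    closeManager' :: Manager -> IO ()
    closeManager' _ = return ()
    #else
    -- Old http-client: explicit closing is still required.
    closeManager' :: Manager -> IO ()
    closeManager' = closeManager
    #endif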
---
Added support for storageclass=STANDARD_IA to use Amazon's new
Infrequent Access storage class.
Also allows using storageclass=NEARLINE to use Google's Nearline storage.
The necessary changes to aws to support this are in
https://github.com/aristidb/aws/pull/176
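A self-contained sketch of how the configured storageclass value can map
onto a storage-class type like the one that PR adds; the data type here
is a local stand-in, not aws's actual definition:

    {-# LANGUAGE OverloadedStrings #-}
    import qualified Data.Text as T

    -- Local stand-in for a post-PR-176 storage class type.
    data StorageClass
            = Standard
            | StandardInfrequentAccess     -- Amazon's STANDARD_IA
            | ReducedRedundancy
            | OtherStorageClass T.Text     -- passes unrecognized classes through

    parseStorageClass :: T.Text -> StorageClass
    parseStorageClass "STANDARD" = Standard
    parseStorageClass "STANDARD_IA" = StandardInfrequentAccess
    parseStorageClass "REDUCED_REDUNDANCY" = ReducedRedundancy
    parseStorageClass t = OtherStorageClass t  -- e.g. NEARLINE on Google's endpoint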
---
Now it suffices to run git remote add, followed by git-annex sync. The
remote is automatically initialized for use by git-annex, where before
the git-annex branch had to be manually pushed before using git-annex
sync. Note that this involved changes to git-annex-shell, so if the
remote is using an old version, the manual push is still needed.
Implementation required git-annex-shell to be changed, so configlist can
autoinit a repository even when no git-annex branch has been pushed yet.
Unfortunate, because we'll have to wait for it to get deployed to
servers before being able to rely on this change in the documentation.
Did consider making git-annex sync push the git-annex branch to repos
that didn't have a uuid, but this seemed difficult to do without
complicating it in messy ways.
It would be cleaner to split a command out from configlist to handle
the initialization, but this is difficult without sacrificing backwards
compatibility for users of old git-annex versions, which would not use
the new command.
---
Note that it's possible for an S3 bucket to be configured to allow
public access without git-annex knowing that it is. I chose not to show
the url unless public=yes.
---
In my tests, this has to be set when uploading a file to the bucket;
the file can then be accessed using the bucketname.s3.amazonaws.com
url.
Setting it when creating the bucket didn't seem to make the whole
bucket public, or allow accessing files stored in it. But I have gone
ahead and also sent it when creating the bucket, just in case that is
needed in some cases.
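In the aws library, sending the canned ACL per upload looks roughly
like this; poAcl and AclPublicRead are my recollection of aws's
PutObject API, so verify against Aws.S3 before relying on them:

    {-# LANGUAGE OverloadedStrings #-}
    import qualified Aws.S3 as S3
    import Network.HTTP.Client (RequestBody(RequestBodyLBS))

    -- Attach a public-read canned ACL to a single upload.
    publicPut :: S3.PutObject
    publicPut = (S3.putObject "mybucket" "mykey" (RequestBodyLBS "content"))
            { S3.poAcl = Just S3.AclPublicRead }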
---
Split S3Info out of S3Handle and added some stubs
---
Conflicts:
    Command/Fsck.hs
    Messages.hs
    Remote/Directory.hs
    Remote/Git.hs
    Remote/Helper/Special.hs
    Types/Remote.hs
    debian/changelog
    git-annex.cabal
---
when using OverloadedStrings
---
cannot handle upper-case bucket names. git-annex now converts them to
lower case automatically.
For example, it failed to get files from a bucket named S3.
Also fixes `git annex initremote UPPERCASE type=S3`, which failed with a
signing error message when built with the new aws library.
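The essence of the conversion is nothing more than this (a sketch, not
git-annex's exact sanitization code):

    import Data.Char (toLower)

    -- "S3" becomes "s3", which DNS-style bucket addressing can handle.
    bucketName :: String -> String
    bucketName = map toLower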
---
the bucket already exists.
---
(endpoint, port, storage class)
---
In particular, errors should go to stderr.
---
To debug a bug report, but generally useful.
---
This needed plumbing an AssociatedFile through retrieveKeyFileCheap.
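A sketch of the plumbing change, with simplified stand-in types rather
than git-annex's exact signature:

    type AssociatedFile = Maybe FilePath
    newtype Key = Key String

    -- before: retrieveKeyFileCheap :: Key -> FilePath -> IO Bool
    retrieveKeyFileCheap :: Key -> AssociatedFile -> FilePath -> IO Bool
    retrieveKeyFileCheap _key _afile _dest = return False  -- stub body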
---
Avoid using fileSize, which maxes out at just 2 GB on Windows.
Instead, use hFileSize, which doesn't have a bounded size.
Fixes support for files > 2 GB on Windows.
Note that the InodeCache code only needs to compare a file size,
so it doesn't matter if the file size wraps. So it has been
left as-is. This was necessary both to avoid invalidating existing inode
caches, and because the code passed FileStatus around and would have
become more expensive if it called getFileSize.
This commit was sponsored by Christian Dietrich.
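The fix reduces to querying the size through a Handle, since hFileSize
returns an unbounded Integer; a minimal version:

    import System.IO
    import Control.Exception (bracket)

    -- Unlike fileSize's FileOffset (32 bits on Windows, per the note
    -- above), hFileSize returns an Integer and cannot overflow.
    getFileSize :: FilePath -> IO Integer
    getFileSize f = bracket (openFile f ReadMode) hClose hFileSize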
---
This commit was sponsored by an anonymous bitcoiner.
---
external special remote that handles magnet: and *.torrent urls.
---
This threw an unusual exception without an error message when probing
to see if the bucket exists yet. So rather than relying on tryS3, catch
all exceptions.
This does mean that it might get an exception for some transient
network error, conclude that the bucket doesn't exist yet, try to
create it, and then fail when it already exists.
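A sketch of the catch-everything probe described above (probeBucket is
an illustrative name, not git-annex's):

    import Control.Exception (try, SomeException)

    -- Right means the bucket responded; any exception at all is taken
    -- to mean the bucket does not exist yet. A transient network error
    -- therefore also lands in Left, producing the create-then-fail
    -- behavior noted above.
    probeBucket :: IO () -> IO Bool
    probeBucket probe = do
            r <- try probe :: IO (Either SomeException ())
            return (either (const False) (const True) r)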
---
This reverts commit 2ba5af49c94b97c586220c3553367988ef095934.
I misunderstood the cause of the problem.
---
When uploading the last part of a file, which was 640229 bytes, S3 rejected
that part: "Your proposed upload is smaller than the minimum allowed size"
I don't know what the minimum is, but the fix is just to include the last
part into the previous part. Since this can result in a part that's
double-sized, use half-sized parts normally.
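The sizing rule, as a small sketch (illustrative, not the actual
git-annex code): emit half-sized parts, and let the final part absorb
any remainder, so it ranges from half a partsize up to just under a
full partsize:

    -- Split a total upload size into part sizes.
    partSizes :: Integer -> Integer -> [Integer]
    partSizes partsize total = go total
      where
            half = partsize `div` 2
            go n
                    | n <= 0 = []
                    | n < 2 * half = [n]  -- last part absorbs the remainder
                    | otherwise = half : go (n - half)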
---
Unfortunately, I don't fully understand why it was leaking using the old
method of a lazy bytestring. I just know that it was leaking, despite
neither hGetUntilMetered nor byteStringPopper seeming to leak by
themselves.
The new method avoids the lazy bytestring, and simply reads chunks from the
handle and streams them out to the http socket.
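The new method boils down to a loop like this sketch, where send stands
in for the write to the http socket:

    import qualified Data.ByteString as B
    import System.IO (Handle)

    -- Read strict chunks and hand each to the consumer; only one chunk
    -- is ever held in memory, so usage stays constant.
    streamChunks :: Handle -> (B.ByteString -> IO ()) -> IO ()
    streamChunks h send = loop
      where
            chunksize = 32768 :: Int
            loop = do
                    b <- B.hGetSome h chunksize
                    if B.null b
                            then return ()
                            else send b >> loop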
---
Still seems to buffer the whole partsize in memory, but I'm pretty sure my
code is not what's doing it. See https://github.com/aristidb/aws/issues/142
---
May not work; if it does, this is going to be the simplest way to get
good memory use and progress reporting.