ATS : Remote Storage Operations

Introduction


ATS provides support for operations over Amazon Simple Storage Service (S3) and compatible services. You may use the S3Operations client for direct tasks. The current list of these tasks includes:

  • bucket operations - check for existence, create and delete;
  • object operations:
    • upload, download files/objects;
    • list remote objects;
    • move (rename), delete objects;
    • get attributes which include size, modification time, checksum (MD5/ETag);

Below are the details on how to implement such operations.

Initialization


In order to use the ATS S3Operations client, you must add the ats-s3-utilities library to your classpath. For Maven, add this dependency to your pom.xml file:

    <dependency>
        <groupId>com.axway.ats.framework.utilities</groupId>
        <artifactId>ats-s3-utilities</artifactId>
        <version>${ats.version}</version> <!-- Refers property with exact ATS version like 4.0.6 -->
    </dependency>


Then you may create an instance of the S3Operations client:

import com.axway.ats.action.s3.S3Operations; // part of ats-s3-utilities library
...
S3Operations s3OperationsClient = new S3Operations(endpoint, accessKey, secretKey, region, bucketName);

The parameters are:

  • endpoint - the URL of the instance/server (host and optional port, like 12.34.56.78:5678) where the S3 storage is set up and running. For Amazon S3 you may refer to this endpoints table. Example: s3.eu-west-1.amazonaws.com (see the sketch after this list)
  • access key - the access key part of the credentials
  • secret key - the secret key part of the credentials
  • region - the region name related to the endpoint property. Refer to the documentation. Example: eu-west-1
  • bucket name - the name of the bucket you want to work with. It might not be created yet. For naming conventions, you may refer to this document. Note that a bucket name must be unique and cannot be reused by different customers in the same endpoint/region.
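For instance, connecting to an S3-compatible service running on a custom host and port might look like the sketch below. The endpoint, credentials, region and bucket name are placeholder values, not real ones:

import com.axway.ats.action.s3.S3Operations;
...
// A minimal sketch with placeholder values - replace them with your own service details
S3Operations s3CompatClient = new S3Operations("12.34.56.78:5678", // S3-compatible endpoint (host:port)
                                               "myAccessKey",      // access key placeholder
                                               "mySecretKey",      // secret key placeholder
                                               "us-east-1",        // region placeholder
                                               "my-test-bucket");  // bucket name placeholder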

Examples


Below is an example of what you can do with this utility. For the full list of supported operations, please check the source code or the JavaDoc.

import com.axway.ats.action.s3.S3Operations;
import com.axway.ats.action.s3.S3ObjectInfo;
import org.testng.Assert; // the assertions below use TestNG's Assert
import java.util.Date;
...

// Make an instance of this class to work with a bucket named "my-ats-example"
S3Operations s3Ops = new S3Operations("s3.eu-west-1.amazonaws.com", accessKey, secretKey,
                                      "eu-west-1", "my-ats-example");

// use clean bucket
if (s3Ops.doesBucketExist()) {
   s3Ops.deleteBucket();
}
s3Ops.createBucket();

// upload the local file "localFile.txt" as remote object "destFile.txt"
s3Ops.upload("destFile.txt", "localFile.txt");

// compare checksums to make sure the file is uploaded and not corrupted
String remoteMd5 = s3Ops.getFileMD5("destFile.txt");
Assert.assertEquals(remoteMd5, localFileMd5 /* use FileSystemOperations for this, see below */,
                    "MD5 of uploaded file does not match expected " + localFileMd5);

// single call to get many attributes
S3ObjectInfo summary = s3Ops.getFileMetadata("destFile.txt");
long fileSize = summary.getSize(); // in bytes
remoteMd5 = summary.getMd5(); // reuse the variable declared above; same value as getFileMD5()
Date fileDate = summary.getLastModified(); // last modification time of this object

// delete all objects in the bucket. Efficient even when a large set of objects must be deleted.
s3Ops.deleteObjects("" /* or null, no prefix path to be matched */, ".*" /* match all names */, true /* search recursively in nested paths */);
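The checksum comparison above needs the MD5 sum of the local file. A minimal sketch for obtaining it with the ATS FileSystemOperations client follows; verify the exact method signature against the FileSystemOperations JavaDoc:

import com.axway.ats.action.filesystem.FileSystemOperations;
import com.axway.ats.common.filesystem.Md5SumMode;
...
// compute the MD5 sum of the local file before comparing it with the remote one;
// BINARY mode reads the file as-is, without line-ending normalization
FileSystemOperations fsOps = new FileSystemOperations();
String localFileMd5 = fsOps.computeMd5Sum("localFile.txt", Md5SumMode.BINARY);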

Note that Amazon emulates folders by accepting a separator character in object names. ATS uses the default UNIX folder separator ("/", forward slash). However, in order to refer to the root directory, the path prefix should be just an empty string.
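To illustrate, listing objects could follow the same (prefix, name pattern, recursive) shape as the deleteObjects call above. The method name below is hypothetical, used only for illustration - check the S3Operations JavaDoc for the actual listing API:

import java.util.List;
...
// Hypothetical listing calls (the actual method name may differ - see the JavaDoc).
// An empty prefix means the bucket root; "reports/" would target the emulated "reports" folder.
List<S3ObjectInfo> rootObjects = s3Ops.listObjects("", ".*", false /* top level only */);
List<S3ObjectInfo> reportFiles = s3Ops.listObjects("reports/", ".*\\.csv", true /* recurse into nested paths */);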



Special considerations


Working with remote files is tricky and you should consider possible concurrency cases. This is especially true if you use ATS to verify that other systems have already uploaded some file. In such cases, it is recommended to use polling, in particular our RBV client named S3Verification. More details are described in Remote Storage (S3) Verifications. A minimal hand-rolled alternative is sketched below.
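
If you cannot use S3Verification, a simple polling loop built only from the S3Operations calls shown above might look like the sketch below. It assumes that getFileMetadata() fails while the object is still absent - check the JavaDoc for the actual failure mode - and the timeout and interval values are arbitrary examples:

// Poll for up to one minute, checking every 2 seconds whether another system
// has uploaded the expected object. The enclosing test method should declare
// 'throws InterruptedException' because of Thread.sleep().
long deadline = System.currentTimeMillis() + 60_000;
boolean uploaded = false;
while (!uploaded && System.currentTimeMillis() < deadline) {
    try {
        s3Ops.getFileMetadata("expected-file.txt"); // assumption: fails while the object is absent
        uploaded = true;
    } catch (Exception e) {
        Thread.sleep(2_000); // wait before the next check
    }
}
Assert.assertTrue(uploaded, "expected-file.txt did not appear within the timeout");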



Back to parent page

Go to Table of Contents