Lumerical products support loading project files from, and saving results to, S3. S3 requires that files be organized into buckets. You will need to create a bucket and note its name so you can upload files to it later.
Amazon S3 pricing depends on your storage class (this guide uses Standard), the amount of data you store per month, and how frequently you access it. For example, storing 100 GB of data for one month costs roughly $2.25 USD. To estimate what S3 may cost for your use case, use the AWS Pricing Calculator.
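The storage part of that estimate is simple arithmetic. A minimal sketch, assuming an illustrative per-GB rate (actual rates vary by region and over time, and request and data-transfer charges are billed separately):

```python
# Back-of-envelope S3 storage cost. The rate below is an assumed
# illustrative figure, not current AWS pricing.
STANDARD_USD_PER_GB_MONTH = 0.0225

def monthly_storage_cost_usd(gigabytes, rate=STANDARD_USD_PER_GB_MONTH):
    """Storage-only cost of keeping `gigabytes` in S3 for one month."""
    return gigabytes * rate

print(f"${monthly_storage_cost_usd(100):.2f}")  # 100 GB for one month
```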
Create an S3 bucket to transfer your files
Buckets live in AWS regions. For best performance, pick the same region where you run your jobs.
Block all public access to this bucket for a higher level of security. Make sure you enable S3 access when creating your IAM role or configuring the AWS CLI.
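The two steps above (create the bucket, then block all public access) can be sketched with boto3-style calls. The client is passed in, so the function works with `boto3.client("s3")`; the bucket and region names are placeholders:

```python
# Sketch: create a bucket and block all public access.
# `s3` is an S3 client, e.g. boto3.client("s3", region_name="us-west-2").
def create_private_bucket(s3, name, region):
    """Create bucket `name` in `region`, then block all public access."""
    s3.create_bucket(
        Bucket=name,
        CreateBucketConfiguration={"LocationConstraint": region},
    )
    s3.put_public_access_block(
        Bucket=name,
        PublicAccessBlockConfiguration={
            "BlockPublicAcls": True,
            "IgnorePublicAcls": True,
            "BlockPublicPolicy": True,
            "RestrictPublicBuckets": True,
        },
    )
```

For example: `create_private_bucket(boto3.client("s3", region_name="us-west-2"), "my-lumerical-bucket", "us-west-2")`.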
Uploading files to your S3 bucket
To upload or download files to or from your S3 bucket using the AWS Console, follow these instructions: How Do I Upload Files and Folders to an S3 Bucket?
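The same transfers can be scripted. A minimal boto3-style sketch with the client injected (bucket and key names are placeholders):

```python
# Sketch: upload a project file to S3 and fetch results back.
# `s3` is an S3 client, e.g. boto3.client("s3").
def upload_project(s3, local_path, bucket, key):
    """Upload `local_path` and return the resulting s3:// URI."""
    s3.upload_file(local_path, bucket, key)
    return f"s3://{bucket}/{key}"

def download_results(s3, bucket, key, local_path):
    """Download the object at bucket/key to `local_path`."""
    s3.download_file(bucket, key, local_path)
```

For example: `upload_project(boto3.client("s3"), "sim.fsp", "my-bucket", "projects/sim.fsp")`.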
Running jobs with files in your S3 bucket
- Create a role for your compute instances that allows read/write access to S3.
- When launching an instance from an AMI or a launch template, ensure that you assign the role to your instances.
- When the instance launches, the AWS CLI will be configured and Lumerical products will detect the configuration. You can now pass S3 file paths to the solver engines the same as any local file path.
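Once the role is attached, an s3:// path is handed to the engine like a local one. A small sketch of composing such a path (the engine binary name below is a placeholder for illustration, not a documented executable):

```python
# Sketch: build an s3:// URI and pass it to a solver engine
# exactly like a local file path.
def s3_uri(bucket, *parts):
    """Join a bucket name and key parts into an s3:// URI."""
    return "s3://" + "/".join([bucket, *parts])

# Placeholder engine invocation; the binary name is an assumption.
cmd = ["fdtd-engine", s3_uri("bucketname", "foldername", "filename.fsp")]
print(cmd[1])
```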
S3 Lumerical script commands
cd - Change directory from within S3.
cp - Copy files between the local machine and S3, or within S3. For example:
cp('c:\folder_name\filename.fsp', 's3://bucketname/foldername/filename.fsp');
ls - List the specified S3 directory.
rm - Remove (delete) the specified file from S3.
load - Load a simulation file from S3 as the current project.
save - Save the current project or simulation into S3.
run - Run the currently loaded project.
runsweep - Run all parameter sweeps or all optimization tasks in the current project.
loadsweep - Load previously generated sweep results into all the sweep objects in the current project.