API Gateway Lambda Upload File to S3
AWS S3 is one of the most used and popular AWS services today. Everyone needs a storage service for file storage, backup, disaster recovery, data archives, data lakes for analytics, and hybrid cloud storage. Storage needs are increasing every day, and for this reason building and maintaining your own storage system becomes hard and complex. AWS S3 is a great fit for these needs because it provides scalability, high availability, low latency, and low cost. It is also very easy to use. It's possible to create our own AWS S3 bucket and upload any files we want. It can also be integrated with other AWS services such as AWS Lambda, AWS API Gateway, AWS CloudFront, etc.
While this makes it attractive for developers, it also makes it an attractive storage place for attackers. Data leaks become extremely critical for companies if the necessary AWS S3 configurations are not made. There are some security points that need to be considered when you're creating your own AWS S3 bucket:
- Always consider blocking public access first.
- Use AWS S3 encryption.
- Limit your IAM permissions to AWS S3 buckets. Always follow the least privilege principle.
- Enforce HTTPS to prevent attacks like MITM (Man in the Middle); a minimal bucket-policy sketch follows this list.
- Enhance S3 security using logging.
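For the HTTPS item above, here is a minimal sketch of how such a rule could be attached with boto3. The bucket name is a hypothetical placeholder; the policy denies every request whose aws:SecureTransport condition is false, i.e. anything not sent over TLS.

import json
import boto3

BUCKET = "invoice-reports-example"  # hypothetical bucket name

# Deny any S3 action on the bucket or its objects when the request is not HTTPS
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyInsecureTransport",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:*",
        "Resource": [
            f"arn:aws:s3:::{BUCKET}",
            f"arn:aws:s3:::{BUCKET}/*",
        ],
        "Condition": {"Bool": {"aws:SecureTransport": "false"}},
    }],
}

s3 = boto3.client("s3")
s3.put_bucket_policy(Bucket=BUCKET, Policy=json.dumps(policy))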
In this blog post, we will talk about a real-world use case for sharing AWS S3 files securely. We will provide the architecture and configurations for all the steps you will need. Let's start!
Scenario:
A company wants to share invoice reports weekly. Here are the requirements:
- After a week, reports must not be reachable by anyone.
- Reports contain critical data like financial details, so they should not be accessible from a public URL.
- The process should be simple, as the reporters are not competent in coding.
- The company manager does non want to pay much for the process.
Solution: AWS S3 pre-signed URL generation for every upload!
Let's review the architecture together:
- Firstly, we need to create a private AWS S3 bucket for uploading reports. Then we have to create a new API Gateway for uploading reports to the AWS S3 bucket with API requests. After that, we will be able to upload reports with API requests. A minimal bucket-creation sketch follows this item.
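As a sketch of this first step, the private bucket could also be created and locked down with boto3 as below. The bucket name is hypothetical (bucket names must be globally unique), and regions other than us-east-1 need an extra CreateBucketConfiguration argument.

import boto3

s3 = boto3.client("s3")
bucket = "invoice-reports-example"  # hypothetical name

s3.create_bucket(Bucket=bucket)

# Block every form of public access so reports can never be exposed directly
s3.put_public_access_block(
    Bucket=bucket,
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,
    },
)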
Our AWS API Gateway should look like the one below:
We need to get the invoke URL from the created AWS API Gateway:
Note: Keep this invoke URL for the next steps. You will use it to put objects in the AWS S3 bucket.
Send the API request to the invoke URL to upload reports to AWS S3 (we're using Postman for this):
API request body:
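If you'd rather script the upload than use Postman, a hedged equivalent in Python follows. It assumes the API Gateway is configured as an S3 proxy that accepts PUT requests at /{bucket}/{object}; the invoke URL, bucket, and file names are placeholders for illustration, so adapt them to your own resource paths.

import requests

invoke_url = "https://abc123.execute-api.eu-west-1.amazonaws.com/v1"  # placeholder

# PUT the report file to the gateway, which proxies it into the S3 bucket
with open("invoice-report.pdf", "rb") as f:
    resp = requests.put(
        f"{invoke_url}/invoice-reports-example/invoice-report.pdf",
        data=f,
        headers={"Content-Type": "application/pdf"},
    )
print(resp.status_code, resp.text)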
Note: This API is accessible to anyone. Attackers love unauthorized access. To prevent this, we need to allow only specific IP addresses to access our Amazon API Gateway. These IP addresses can be the VPN IPs of the company. You can use resource policies for this.
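A hypothetical resource policy of this kind is sketched below; the CIDR range stands in for the company's VPN addresses. The first statement allows invocations in general, and the second denies any caller whose source IP is outside the listed range.

import json

resource_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": "*",
            "Action": "execute-api:Invoke",
            "Resource": "execute-api:/*",
        },
        {
            "Effect": "Deny",
            "Principal": "*",
            "Action": "execute-api:Invoke",
            "Resource": "execute-api:/*",
            # Hypothetical VPN range; replace with your own CIDR block(s)
            "Condition": {"NotIpAddress": {"aws:SourceIp": "203.0.113.0/24"}},
        },
    ],
}

# Paste the printed JSON into the API Gateway "Resource policy" settings
print(json.dumps(resource_policy, indent=2))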
- We're checking that the reports have been uploaded to AWS S3 successfully.
- We have uploaded reports via API request to the AWS S3 bucket we created. Now we need an AWS Lambda function that is triggered when reports are uploaded. This function will generate a pre-signed URL that expires after 7 days and send it to the employees defined on AWS SNS. Employees need to be subscribed to the AWS SNS topic before this operation; a subscription sketch follows this item.
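Subscribing an employee to the topic can be done from the console or with a short boto3 call like this sketch; the topic ARN and e-mail address are hypothetical, and each recipient must confirm the subscription e-mail before SNS delivers anything to them.

import boto3

sns = boto3.client("sns")

sns.subscribe(
    TopicArn="arn:aws:sns:eu-west-1:123456789012:report-urls",  # hypothetical ARN
    Protocol="email",
    Endpoint="employee@example.com",  # hypothetical address
)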
Note: All S3 objects are private by default. They can only be accessed by the object owner. By creating a pre-signed URL, the object owner can share objects with others. To create a pre-signed URL, you should use your own credentials and grant time-limited permission to download the objects. The maximum expiration time for a pre-signed URL is one week from the time of creation, and there is no way to create a pre-signed URL without an expiration time.
You can use the Lambda function code below:
import boto3

def lambda_handler(event, context):
    # Extract the bucket name and object key from the S3 event record
    s3_bucket_name = str(event['Records'][0]['s3']['bucket']['name'])
    s3_report_key = str(event['Records'][0]['s3']['object']['key'])

    # Generate a pre-signed URL that expires after 7 days (604800 seconds)
    s3_client = boto3.client('s3')
    report_presigned_url = s3_client.generate_presigned_url(
        'get_object',
        Params={'Bucket': s3_bucket_name, 'Key': s3_report_key},
        ExpiresIn=604800)

    # Publish the URL to the SNS topic; replace {SNS_topic_ARN} with your topic's ARN
    MY_SNS_TOPIC_ARN = "{SNS_topic_ARN}"
    sns_client = boto3.client('sns')
    sns_client.publish(
        TopicArn=MY_SNS_TOPIC_ARN,
        Subject='Reports Presigned URLs',
        Message="Your Reports are ready:\n%s" % report_presigned_url)

    print('success')
    return {'message': 'it works'}
We need to attach the required policies to the AWS Lambda function. For test purposes, full access policies are attached. If you're using this code in production, you need to create your own policies following the least privilege principle.
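As an illustration of what least privilege could look like here, the hypothetical policy below only allows reading the report objects and publishing to the SNS topic; the bucket name and topic ARN are placeholders.

import json

lambda_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            # Needed so URLs signed with the function's credentials can download reports
            "Effect": "Allow",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::invoice-reports-example/*",
        },
        {
            # Needed to publish the pre-signed URL to subscribers
            "Effect": "Allow",
            "Action": "sns:Publish",
            "Resource": "arn:aws:sns:eu-west-1:123456789012:report-urls",
        },
    ],
}
print(json.dumps(lambda_policy, indent=2))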
- Now, we should add the event notification that will trigger the Lambda when a report is uploaded to the S3 bucket. We need to navigate to "Amazon S3 → {bucket_name} → Properties → Event notifications" and create a new event notification:
We need to select the Lambda function's ARN that we created earlier:
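The same notification can also be configured with boto3 instead of the console. In this sketch the bucket name and Lambda ARN are hypothetical, and S3 must separately be granted permission to invoke the function (for example via lambda add-permission) before the call succeeds.

import boto3

s3 = boto3.client("s3")

s3.put_bucket_notification_configuration(
    Bucket="invoice-reports-example",  # hypothetical bucket
    NotificationConfiguration={
        "LambdaFunctionConfigurations": [{
            # Hypothetical function ARN; fires on every object creation
            "LambdaFunctionArn": "arn:aws:lambda:eu-west-1:123456789012:function:presign-and-notify",
            "Events": ["s3:ObjectCreated:*"],
        }]
    },
)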
- We should also check the reports in the S3 bucket and delete reports whose creation date is more than 7 days old. For this, we will create a Lambda function and trigger it every 7 days from AWS CloudWatch. You can use the Lambda function code below:
import boto3
from datetime import datetime, timedelta

def lambda_handler(event, context):
    old_files = []
    s3 = boto3.client('s3')
    try:
        # List all objects and collect those last modified more than 7 days ago;
        # replace {bucket_name} with your bucket name
        files = s3.list_objects_v2(Bucket={bucket_name})['Contents']
        old_files = [{'Key': obj['Key']} for obj in files
                     if obj['LastModified'].strftime('%Y-%m-%d') <
                     (datetime.now() - timedelta(days=7)).strftime('%Y-%m-%d')]
        if old_files:
            # Delete the expired reports in a single batch request
            s3.delete_objects(Bucket={bucket_name}, Delete={'Objects': old_files})
        return {
            'statusCode': 200,
            'body': "Expired Objects Deleted!"
        }
    except Exception:
        # 'Contents' is missing when the bucket is empty
        return {
            'statusCode': 400,
            'body': "No Expired Object to Delete!"
        }
When we create the CloudWatch Event Rule, we need to select the event source and targets.
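As an alternative to clicking through the console, the weekly schedule and its Lambda target can be wired up with boto3 as sketched below; the rule name, function name, and ARNs are hypothetical.

import boto3

events = boto3.client("events")

# Fire once every 7 days
events.put_rule(Name="weekly-report-cleanup", ScheduleExpression="rate(7 days)")
events.put_targets(
    Rule="weekly-report-cleanup",
    Targets=[{
        "Id": "cleanup-lambda",
        "Arn": "arn:aws:lambda:eu-west-1:123456789012:function:delete-old-reports",
    }],
)

# The cleanup Lambda must also allow EventBridge/CloudWatch Events to invoke it
boto3.client("lambda").add_permission(
    FunctionName="delete-old-reports",
    StatementId="allow-eventbridge",
    Action="lambda:InvokeFunction",
    Principal="events.amazonaws.com",
    SourceArn="arn:aws:events:eu-west-1:123456789012:rule/weekly-report-cleanup",
)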
- All processes are done and ready to use. We can upload reports with the API request. After we send a request to our API successfully, we'll get a pre-signed URL in an e-mail, which will look like this:
When we click on the pre-signed URL, we will be able to download the report we uploaded with the API request.
Also, the event rule we created in CloudWatch will check our S3 bucket weekly and delete objects that have been there for more than one week.
In this blog, we've summarized what AWS S3 is. We also prepared a real-world use case for sharing AWS S3 files securely. We provided the architecture and configurations for all the steps you need. We hope you enjoyed it!
Check out our Cloud Security services to stay secure in the cloud!
Source: https://www.prplbx.com/resources/blog/how-to-securely-share-aws-s3-files/