1) Perform ETL with S3 (HDFS file) as the source and Redshift as the target (sketch 1 below)
Technical environment: EC2, EMR, PySpark & Hive
2) Perform ETL across S3 and an AWS NoSQL database using Glue (sketch 2 below)
3) One sample Python AWS Lambda function to run AWS Redshift SQL scripts (sketch 3 below)
4) One sample of real-time streaming, preferably Kafka (producer) + PySpark (consumer) (sketch 4 below)
5) Executing bash commands (jobs) via Airflow, or the AWS equivalent (sketch 5 below)
Need a walkthrough of the above on an urgent basis.
Even if you can only solve some of the points, that is fine.
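Sketch 1: a minimal PySpark job for point 1, reading from S3 (an HDFS path works the same way) and writing to Redshift over JDBC from EMR. The bucket, cluster endpoint, table, and credentials are placeholder assumptions, and the Redshift JDBC driver jar is assumed to be on the EMR classpath.

```python
from pyspark.sql import SparkSession, functions as F

spark = (SparkSession.builder
         .appName("s3-to-redshift-etl")
         .enableHiveSupport()   # Hive is part of the stated environment
         .getOrCreate())

# Source: S3 (an HDFS path such as hdfs:///data/orders/ works the same way)
df = spark.read.parquet("s3://my-bucket/raw/orders/")  # hypothetical bucket

# Example transformation: keep completed orders and stamp a load date
out = (df.filter(F.col("status") == "COMPLETED")
         .withColumn("load_date", F.current_date()))

# Target: Redshift over JDBC
(out.write
    .format("jdbc")
    .option("url", "jdbc:redshift://my-cluster.xxxx.us-east-1.redshift.amazonaws.com:5439/dev")  # hypothetical endpoint
    .option("dbtable", "public.orders_clean")   # hypothetical table
    .option("user", "etl_user")
    .option("password", "***")                  # prefer Secrets Manager in real jobs
    .option("driver", "com.amazon.redshift.jdbc42.Driver")
    .mode("append")
    .save())
```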
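Sketch 2: for point 2, a Glue PySpark job copying JSON records from S3 into DynamoDB, taking DynamoDB as the AWS NoSQL database. Bucket and table names are placeholder assumptions; this would run as a Glue job, not a standalone script.

```python
import sys
from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read JSON records from S3 into a DynamicFrame
src = glue_context.create_dynamic_frame.from_options(
    connection_type="s3",
    connection_options={"paths": ["s3://my-bucket/raw/events/"]},  # hypothetical bucket
    format="json",
)

# Write the records to a DynamoDB table
glue_context.write_dynamic_frame.from_options(
    frame=src,
    connection_type="dynamodb",
    connection_options={
        "dynamodb.output.tableName": "events",       # hypothetical table
        "dynamodb.throughput.write.percent": "0.5",  # throttle write capacity usage
    },
)
job.commit()
```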
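Sketch 3: for point 3, a Lambda handler that runs Redshift SQL through the Redshift Data API, which avoids packaging a database driver with the function. The cluster identifier, database, user, and default statement are placeholder assumptions.

```python
import time
import boto3

client = boto3.client("redshift-data")

def lambda_handler(event, context):
    # SQL can be passed in the invocation event; the default below is hypothetical
    sql = event.get("sql", "SELECT COUNT(*) FROM public.orders_clean;")
    resp = client.execute_statement(
        ClusterIdentifier="my-cluster",  # hypothetical cluster
        Database="dev",
        DbUser="etl_user",
        Sql=sql,
    )
    # The Data API is asynchronous, so poll until the statement finishes
    while True:
        desc = client.describe_statement(Id=resp["Id"])
        if desc["Status"] in ("FINISHED", "FAILED", "ABORTED"):
            break
        time.sleep(1)
    if desc["Status"] != "FINISHED":
        raise RuntimeError(f"Statement failed: {desc.get('Error')}")
    return {"statementId": resp["Id"], "status": desc["Status"]}
```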
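Sketch 4: for point 4, two separate scripts: a Kafka producer using kafka-python and a PySpark Structured Streaming consumer. The broker address, topic name, and event schema are placeholder assumptions, and the consumer assumes the spark-sql-kafka package is supplied via --packages.

Producer:

```python
import json
import time
from kafka import KafkaProducer  # pip install kafka-python

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",  # hypothetical broker
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)
for i in range(10):
    producer.send("events", {"id": i, "ts": time.time()})  # hypothetical topic
producer.flush()
```

Consumer, launched e.g. with `spark-submit --packages org.apache.spark:spark-sql-kafka-0-10_2.12:3.3.0 consumer.py`:

```python
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import StructType, StructField, LongType, DoubleType

spark = SparkSession.builder.appName("kafka-consumer").getOrCreate()

# Schema matching the producer's JSON payload
schema = StructType([
    StructField("id", LongType()),
    StructField("ts", DoubleType()),
])

stream = (spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "localhost:9092")  # hypothetical broker
          .option("subscribe", "events")                        # hypothetical topic
          .load()
          .select(F.from_json(F.col("value").cast("string"), schema).alias("e"))
          .select("e.*"))

# Console sink for the demo; swap for an S3/parquet sink in a real pipeline
query = stream.writeStream.format("console").outputMode("append").start()
query.awaitTermination()
```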
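Sketch 5: for point 5, an Airflow DAG chaining bash commands with BashOperator; Amazon MWAA would be the managed AWS equivalent. The DAG id, schedule, and commands are placeholder assumptions.

```python
from datetime import datetime
from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="bash_jobs",              # hypothetical DAG id
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract = BashOperator(
        task_id="extract",
        bash_command="aws s3 cp s3://my-bucket/raw/ /tmp/raw/ --recursive",  # hypothetical command
    )
    load = BashOperator(
        task_id="load",
        bash_command="spark-submit /opt/jobs/etl_job.py",  # hypothetical job script
    )
    extract >> load  # run extract first, then load
```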
11 freelancers are bidding an average of $14/hour for this job
Hello, I am a professional with 15 years of hands-on experience working with AWS. I am interested in working on your task. Let's discuss.
I am a certified AWS data engineer. AWS services: AWS Lambda, AWS Glue, AWS Step Functions, AWS SageMaker, AWS Lake Formation, AWS Athena, AWS Redshift, AWS DynamoDB. Programming languages: Python, PySpark, SQL. I can surely h…
I have over two years of solid experience designing and developing ETL solutions using AWS services such as DMS, Glue, Lambda, Athena, and Redshift.
Hello, I am a professional AWS cloud and Python big-data developer with 6+ years of experience. I have worked on multiple AWS services so far and have good hands-on as well as architectural design knowledge of A…
I am a big-data engineer with extensive experience in tools such as Spark (PySpark), Hadoop, YARN, Hive, Kafka, Python, and shell scripting. I have good experience designing and implementing ETL pipelines using PySpark wit…