We require consultancy on a Big Data / Spark / Kafka implementation. The project involves handling 200-300 TB of data. The expectations are as follows:
1. Architecture that scales up to this volume of data.
2. Implementation-level HLD and LLD.
3. Performance tuning.
4. Dashboards and reports built on this data.
5. Selection of the right technologies from the Hadoop ecosystem.
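As a starting point for item 1, a rough back-of-envelope storage sizing for 200-300 TB can clarify the cluster scale involved. This is a minimal sketch; the replication factor, compression ratio, headroom, and disk-per-node figures below are illustrative assumptions, not project requirements:

```python
# Back-of-envelope HDFS cluster sizing for 200-300 TB of raw data.
# Every parameter below is an illustrative assumption, not a project figure.

def estimate_cluster(raw_tb, replication=3, compression_ratio=0.5,
                     headroom=1.25, disk_per_node_tb=48):
    """Estimate stored TB and data-node count for raw_tb of input.

    compression_ratio: assumed on-disk/raw size after compression.
    headroom: assumed 25% extra for shuffle and temp space.
    disk_per_node_tb: assumed usable disk per data node.
    """
    stored_tb = raw_tb * compression_ratio * replication * headroom
    nodes = -(-stored_tb // disk_per_node_tb)  # ceiling division
    return stored_tb, int(nodes)

for raw in (200, 300):
    stored, nodes = estimate_cluster(raw)
    print(f"{raw} TB raw -> ~{stored:.0f} TB stored, ~{nodes} data nodes")
```

Under these assumptions, 200 TB raw lands around 375 TB stored on roughly 8 data nodes, and 300 TB around 563 TB on roughly 12; real sizing would also account for compute, memory, and Kafka retention.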
Candidates with prior consultancy experience handling data at this scale will be preferred.
28 freelancers are bidding an average of $38/hour for this job
Hey, I have already deployed clusters for many organisations and startups. I can do this task for you while keeping all the performance parameters in mind.
I have been working as a Data Scientist for two years. I work with tools such as R, Python, Databricks with Scala and Python, Tableau, and Power BI, among others.
Ready to help provide a solution to your Big Data problem. I have 9 years of experience designing and implementing big data and cloud-oriented solutions, including AWS, Azure, Cloudera, Spark, Hive, Pig, and HBase.