    926 pyspark jobs found

    I'm seeking an expert in data analysis using PySpark on AWS. The primary goal is to analyze a large amount of structured data. Key Responsibilities: - Analyze the provided structured data and generate outputs in the given format. - Build classification machine learning models based on the insights from the data. - Utilize PySpark on AWS for data processing and analysis. Ideal Skills: - Proficiency in PySpark and AWS. - Strong experience in analyzing large datasets. - Expertise in building classification machine learning models. - Ability to generate outputs in a specified format.

    €113 Average bid
    4 bids
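For context, a minimal sketch of the kind of pyspark.ml classification pipeline this posting describes; the S3 path, feature columns (f1-f3), and label column are invented placeholders, not details from the posting:

```python
from pyspark.sql import SparkSession
from pyspark.ml import Pipeline
from pyspark.ml.feature import VectorAssembler, StringIndexer
from pyspark.ml.classification import LogisticRegression
from pyspark.ml.evaluation import MulticlassClassificationEvaluator

spark = SparkSession.builder.appName("classification-demo").getOrCreate()

# Hypothetical structured input; on AWS this would typically be an s3:// path.
df = spark.read.parquet("s3://my-bucket/structured-data/")  # illustrative path

# Index the label column and assemble numeric feature columns into a vector.
indexer = StringIndexer(inputCol="label", outputCol="label_idx")
assembler = VectorAssembler(inputCols=["f1", "f2", "f3"], outputCol="features")
lr = LogisticRegression(featuresCol="features", labelCol="label_idx")

train, test = df.randomSplit([0.8, 0.2], seed=42)
model = Pipeline(stages=[indexer, assembler, lr]).fit(train)

# Evaluate on the held-out split.
preds = model.transform(test)
acc = MulticlassClassificationEvaluator(
    labelCol="label_idx", predictionCol="prediction", metricName="accuracy"
).evaluate(preds)
print(f"accuracy: {acc:.3f}")
```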

    I am seeking a seasoned data scientist and PySpark expert to develop a logistic regression model from scratch for text data classification using public datasets. Key Requirements: - Build a logistic regression model from scratch (do not use libraries for regression) to classify text data into categories. - Use of Python and PySpark is a must. - Experience with handling and analyzing text data is essential. The model's primary goal will be to classify the data into categories. The successful freelancer will be provided with detailed specifications and project requirements upon awarding. Please only apply if you have substantial experience in creating logistic regression models and are comfortable working with text data and public datasets.

    €138 Average bid
    18 bids
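A hedged sketch of what "logistic regression from scratch" can look like in PySpark, with the gradient of the log-loss computed via map/reduce rather than any regression library; the toy rows stand in for vectorized text features:

```python
import math
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("logreg-from-scratch").getOrCreate()

# Toy data: (features, label). In the real task the features would come from
# text vectorization (e.g. term frequencies); these rows are illustrative.
data = spark.sparkContext.parallelize([
    ([1.0, 0.2], 1), ([0.1, 0.9], 0), ([0.8, 0.3], 1), ([0.2, 0.8], 0),
]).cache()

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

n = data.count()
w = [0.0, 0.0]  # weights; intercept omitted for brevity
lr = 0.5        # learning rate

for _ in range(100):
    # Gradient of the log-loss, (sigmoid(w.x) - y) * x, summed across the cluster.
    grad = data.map(
        lambda p: [(sigmoid(sum(wi * xi for wi, xi in zip(w, p[0]))) - p[1]) * xi
                   for xi in p[0]]
    ).reduce(lambda a, b: [x + y for x, y in zip(a, b)])
    w = [wi - lr * gi / n for wi, gi in zip(w, grad)]

print("weights:", w)
```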

    I am seeking expert-level training in the following technologies, from basics to advanced concepts: Power BI Azure Cloud Services Microsoft Fabric Azure Synapse Analytics SQL Python PySpark The goal is to gain comprehensive knowledge and hands-on experience with these tools, focusing on their practical application in data engineering. If you or your organization provide in-depth training programs covering these tech stacks, please reach out with course details, duration, and pricing. Looking forward to hearing from experienced professionals!

    €65 Average bid
    12 bids

    ...handling, grouping, sorting, and imputation of data, as well as implementation of advanced data bucketing strategies. The project also requires robust error-handling mechanisms, including the ability to track progress and resume operations after a crash or interruption without duplicating previously processed data. Requirements: Expertise in Python, especially libraries like Pandas, Dask, or PySpark for parallel processing. Experience with time-series data processing and geospatial data. Proficiency in working with large datasets (several gigabytes to terabytes). Knowledge of efficient I/O operations with CSV/Parquet formats. Experience with error recovery and progress tracking in data pipelines. Ability to write clean, optimized, and scalable code. Please provide examples of ...

    €16 - €136
    Sealed
    8 bids
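One common way to meet the "resume after a crash without duplicating work" requirement is a chunked pipeline with a progress manifest. A minimal sketch in pandas; the file names, chunk size, and placeholder transformation are assumptions:

```python
import json
import os
import pandas as pd

PROGRESS_FILE = "progress.json"   # records which input chunks are done
OUTPUT_DIR = "out"                # one Parquet file per completed chunk

def load_done():
    if os.path.exists(PROGRESS_FILE):
        with open(PROGRESS_FILE) as f:
            return set(json.load(f))
    return set()

def mark_done(done, chunk_id):
    done.add(chunk_id)
    with open(PROGRESS_FILE, "w") as f:
        json.dump(sorted(done), f)

def process(df):
    # Placeholder transformation: dedupe, sort, fill gaps forward.
    return df.drop_duplicates().sort_values(df.columns[0]).ffill()

os.makedirs(OUTPUT_DIR, exist_ok=True)
done = load_done()

# Read the big CSV in fixed-size chunks; skip chunks finished in a prior run,
# so a crash mid-way never duplicates previously processed data.
for i, chunk in enumerate(pd.read_csv("input.csv", chunksize=100_000)):
    if i in done:
        continue
    process(chunk).to_parquet(os.path.join(OUTPUT_DIR, f"part-{i:05d}.parquet"))
    mark_done(done, i)
```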

    I'm in need of a professional with extensive PySpark unit testing experience. I have PySpark code that loads data from Oracle ODS to ECS (S3 bucket). The goal is to write unit test cases that achieve at least 80% code coverage in SonarQube. You will focus primarily on testing for: - Data validation and integrity - Error handling and exceptions The ideal candidate should: - Be proficient in using PyTest, as this is our preferred testing framework - Have a comprehensive understanding of PySpark - Be able to deliver immediately Please note, the main focus of this project is not on the data transformations that the PySpark code performs (which includes data cleaning and filtering, data aggregation and summarization, as well as data joining and mergin...

    €91 Average bid
    11 bids
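A minimal sketch of PySpark unit testing with PyTest along the lines this posting asks for; add_row_hash is a hypothetical stand-in for a function from the actual job, and the local-mode SparkSession means the suite runs without a cluster:

```python
import pytest
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

@pytest.fixture(scope="session")
def spark():
    # Local SparkSession so tests run anywhere; the suite counts toward
    # SonarQube coverage like any other pytest code.
    return SparkSession.builder.master("local[2]").appName("tests").getOrCreate()

def add_row_hash(df):
    # Hypothetical stand-in for a function from the job under test.
    cols = [F.col(c).cast("string") for c in df.columns]
    return df.withColumn("row_hash", F.sha2(F.concat_ws("|", *cols), 256))

def test_row_hash_added_and_non_null(spark):
    df = spark.createDataFrame([(1, "a"), (2, "b")], ["id", "val"])
    out = add_row_hash(df)
    assert "row_hash" in out.columns
    assert out.filter(out.row_hash.isNull()).count() == 0

def test_duplicate_ids_detected(spark):
    # Data-integrity check: the fixture deliberately violates the key rule.
    df = spark.createDataFrame([(1, "a"), (1, "b")], ["id", "val"])
    dup_count = df.groupBy("id").count().filter("count > 1").count()
    assert dup_count > 0
```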

    I'm seeking a professional with extensive experience in PySpark and ETL processes. The project involves migrating my current ETL job, which sources data from a PySpark database and targets a Data Lake. Key tasks include: - Designing and implementing the necessary PySpark code - Ensuring data is effectively transformed through cleaning, validation, aggregation, summarization, merging and joining. Ideal candidates will have a deep understanding of both PySpark and data transformations. Your expertise will be crucial to successfully migrate this ETL job.

    €36 Average bid
    11 bids

    I need an experienced Azure, PySpark, and Databricks developer to help me create a real-time analytics Kafka Streaming WordCount program using PySpark. Key Requirements: - Modify and adapt my existing PySpark streaming code (which works in my Jupyter notebook) to work with Kafka. - Ensure the program can consume data from file storage via Kafka streams. - Handle input data in CSV format from file storage. Ideal Skills and Experience: - Extensive experience with Azure, PySpark, and Databricks. - Proven track record of creating real-time analytics programs. - Familiarity with Kafka and its streaming capabilities. - Proficient in handling CSV data and modifying PySpark code.

    €9 / hr Average bid
    7 bids
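For reference, the canonical Structured Streaming WordCount over Kafka looks roughly like this; the broker address and topic are placeholders, and the spark-sql-kafka connector package must be on the classpath:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("kafka-wordcount").getOrCreate()

# Read from a Kafka topic; broker address and topic name are placeholders.
lines = (spark.readStream
         .format("kafka")
         .option("kafka.bootstrap.servers", "localhost:9092")
         .option("subscribe", "wordcount-input")
         .load()
         .selectExpr("CAST(value AS STRING) AS line"))

# Split each line into words and keep a running count per word.
counts = (lines
          .select(F.explode(F.split(F.col("line"), r"\s+")).alias("word"))
          .groupBy("word")
          .count())

# Complete output mode re-emits the full counts table on every trigger.
query = (counts.writeStream
         .outputMode("complete")
         .format("console")
         .start())
query.awaitTermination()
```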

    I'm seeking a skilled PySpark expert to assist with data analysis and transformations on structured data. The task involves: - Utilizing PySpark to manipulate and analyze big data. - Writing efficient PySpark code to handle the task. Ideal candidates should have extensive experience with PySpark and a strong background in data analysis and transformations. Proficiency in working with structured data from sources like CSV files, SQL tables, and Excel files is crucial.

    €149 Average bid
    37 bids

    I am looking for a professional to design a series of training videos for beginners on Python, SQL, Pyspark, ADF, Azure Data Bricks, and Snowflake. The primary goal of these videos is to teach the fundamental principles and techniques associated with each of these technologies. As such, the curriculum for each technology will need to be developed from scratch, ensuring that it covers all the necessary topics in a clear and engaging manner. Key responsibilities include: - Developing a detailed curriculum for each technology - Creating high-quality video content - Providing thorough explanations in PDF format - Incorporating our logo into each video Ideal candidates should have: - A strong background in IT, with a focus on the technologies listed - A proven track record in creat...

    €20 / hr Average bid
    21 bids

    I am looking for a professional to design a series of training videos for beginners on Python, SQL, Pyspark, ADF, Azure Data Bricks, and Snowflake. The primary goal of these videos is to teach the fundamental principles and techniques associated with each of these technologies. As such, the curriculum for each technology will need to be developed from scratch, ensuring that it covers all the necessary topics in a clear and engaging manner. Key responsibilities include: - Developing a detailed curriculum for each technology - Creating high-quality video content - Providing thorough explanations in PDF format - Incorporating our logo into each video Ideal candidates should have: - A strong background in IT, with a focus on the technologies listed - A proven track record in creat...

    €608 Average bid
    21 bids

    I need an AWS Glue job written in PySpark. The primary purpose of this job is transforming data stored in my S3 bucket Ideal Skills: - Proficient in PySpark and AWS Glue - Experience with data transformation and handling S3 bucket data Your bid should showcase your relevant experience and approach to this project.

    €20 / hr Average bid
    94 bids
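A minimal AWS Glue job skeleton in PySpark of the sort this posting implies; the bucket names and the dedupe/timestamp transformation are illustrative only:

```python
import sys
from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext
from pyspark.sql import functions as F

# Standard Glue job bootstrap: resolve arguments and initialize the job.
args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
spark = glue_context.spark_session
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Bucket names and the transformation itself are placeholders.
df = spark.read.json("s3://source-bucket/raw/")
out = (df.dropDuplicates()
         .withColumn("ingested_at", F.current_timestamp()))
out.write.mode("overwrite").parquet("s3://target-bucket/curated/")

job.commit()
```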

    I have a PySpark code that requires optimization primarily for performance. Key requirements: - Enhancing code performance to handle large datasets efficiently. - The code currently interacts with data stored in Azure Data Lake Storage (ADLS). - Skills in PySpark, performance tuning, and experience with ADLS are essential. - Understanding of memory management in large dataset contexts is crucial. Your expertise will help improve the code's efficiency and ensure it can handle larger datasets without performance issues.

    €90 Average bid
    16 bids
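Typical first levers for this kind of PySpark tuning work are adaptive query execution, shuffle-partition sizing, broadcast joins, and disciplined caching. A hedged sketch; the ADLS paths, join key, and setting values are placeholders to be sized against the real workload:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (SparkSession.builder
         .appName("tuning-demo")
         # Adaptive query execution re-optimizes shuffles at runtime.
         .config("spark.sql.adaptive.enabled", "true")
         .config("spark.sql.shuffle.partitions", "400")  # size to the data
         .getOrCreate())

# ADLS paths are placeholders (abfss://container@account.dfs.core.windows.net/...).
big = spark.read.parquet("abfss://data@myaccount.dfs.core.windows.net/events/")
small = spark.read.parquet("abfss://data@myaccount.dfs.core.windows.net/dim/")

# Broadcasting the small dimension table avoids a full shuffle join.
joined = big.join(F.broadcast(small), "key")

# Cache only what is reused; unpersist when done to free executor memory.
joined.cache()
joined.groupBy("category").count().show()
joined.unpersist()
```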

    ...candidate will have hands-on experience in architecting cloud data solutions and a strong background in data management and integration. If you are passionate about working with cutting-edge technologies and have a proven track record in the Azure ecosystem, we would love to hear from you! Key Responsibilities: - Cloud data solutions using Azure Synapse, Databricks, Azure Data Factory, DBT, Python, PySpark, and SQL - Set up ETL pipelines in Azure Data Factory - Set up data models in Azure Databricks / Synapse - Design and manage cloud data lakes, data warehousing solutions, and data models. - Develop and maintain data integration processes. - Collaborate with cross-functional teams to ensure alignment and successful project delivery. Qualifications: - Good understanding of data warehousing...

    €22 / hr Average bid
    23 bids

    I'm looking for a professional who's proficient in AWS Glue, S3, Redshift, Azure Data Bricks, PySpark, and SQL. The project entails working on data transformation and integration, data analysis and processing, database optimization, infrastructure setup and management, continuous data processing, and query optimization. The expected data volume is classified as medium, ranging from 1GB to 10GB. Ideal Skills and Experience: - Strong experience in AWS Glue, S3, and Redshift - Proficiency in Azure Data Bricks, PySpark, and SQL - Proven track record with data transformation and integration - Expertise in database optimization and query optimization - Experience with managing and setting up infrastructure for data processing - Ability to handle continuous data processi...

    €284 Average bid
    11 bids

    I am looking for a data engineer to help me build data engineering pipelines in Microsoft Fabric using the Medallion Architecture. The primary goal of these pipelines is to perform ELT (Extract, Load, Transform). Key Responsibilities: - Design and implement data engineering pipelines via Microsoft Fabric. - Utilize the Medallion Architecture to optimize data flow and processing. - Create separate workspaces for each layer and lakehouse - Use PySpark to write jobs Ideal Skills and Experience: - Extensive experience with Microsoft Fabric. - Strong understanding and experience with ELT processes. - Familiarity with Medallion Architecture. - Able to work with both structured data and JSON. - Understand how to connect and work across workspaces and lakehouses...

    €13 / hr Average bid
    Urgent
    22 bids

    I'm looking for a seasoned Databricks professional to assist with a data engineering project focused on the migration of structured data from cloud storage. Key Responsibilities: - Lead the migration of structured data from our cloud storage to the target environment - Utilize Pyspark for efficient data handling - Implement DevOps practices for smooth and automated processes Ideal Skills: - Extensive experience with Databricks - Proficiency in Pyspark - Strong understanding of DevOps methodologies - Prior experience in data migration projects - Ability to work with structured data Please, only apply if you meet these criteria, and can provide examples of similar projects you have successfully completed.

    €69 Average bid
    7 bids

    ...and implement a data processing pipeline on Azure - Ensure the pipeline is capable of handling structured data, particularly from SQL databases - Optimize the pipeline for reliability, scalability, and performance Ideal Skills and Experience: - Extensive experience with Azure cloud services, particularly in a data engineering context - Proficiency in data processing tools such as Scala-Spark, Pyspark - Strong understanding of Unix/Linux systems and SQL - Prior experience working with Data Warehousing, Data Lake, and Hive systems - Proven track record in developing complex data processing pipelines - Excellent problem-solving skills and ability to find innovative solutions to data processing challenges This role is suited to a freelancer who is not only a technical expert in clo...

    €18 / hr Average bid
    29 bids

    I'm in search of an Azure Data Factory expert who is well-versed in Delta tables, Parquet, and Dedicated SQL pool. As per the requirement, I have all the data and specifications ready. The successful freelancer will need to be familiar with advanced transformations as the ETL complexity level is high. It's a plus if you have prior and proven experience in handling such projects. Key Skills Required: - Expertise in Azure Data Factory - PySpark - Deep knowledge of Delta tables, Parquet and Dedicated SQL pool - Familiarity with adv...

    €485 Average bid
    50 bids

    I'm looking for a talented Pyspark Developer who has experience in working with large datasets and is well-versed in PySpark above version 3.0. The primary task involves creating user-defined function code in PySpark for applying cosine similarity on two text columns. Key Requirements: - Handling large datasets (more than 1GB) efficiently - Proficient in PySpark (above version 3.0) - Experienced in implementing cosine similarity - Background in health care data is a plus Your primary responsibilities will include: - Writing efficient and scalable code - Applying cosine similarity on two text columns - Ensuring the code can handle large datasets This project is a great opportunity for a Pyspark Developer to showcase their skills in handling big da...

    €79 Average bid
    10 bids
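One plausible shape for such a UDF is bag-of-words cosine similarity between the two text columns. A sketch; for the >1GB requirement a vectorized pandas_udf would likely scale better than this plain UDF:

```python
import math
from collections import Counter
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import DoubleType

spark = SparkSession.builder.appName("cosine-sim").getOrCreate()

@F.udf(DoubleType())
def cosine_similarity(a: str, b: str) -> float:
    # Bag-of-words cosine similarity between two free-text fields.
    va = Counter((a or "").lower().split())
    vb = Counter((b or "").lower().split())
    dot = sum(va[t] * vb[t] for t in va.keys() & vb.keys())
    norm = (math.sqrt(sum(v * v for v in va.values()))
            * math.sqrt(sum(v * v for v in vb.values())))
    return float(dot / norm) if norm else 0.0

# Column names and sample rows are invented for illustration.
df = spark.createDataFrame(
    [("patient reports mild pain", "patient reports severe pain")],
    ["text_a", "text_b"],
)
df.withColumn("sim", cosine_similarity("text_a", "text_b")).show(truncate=False)
```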

    I require a highly skilled AWS data engineer who can provide on-demand consultation for my data processing needs. The project involves helping me manage large volumes of data in AWS using Python, SQL, Pyspark, Glue, and Lambda. This is a long-term hourly consulting job, where I will reach out to you when I need guidance on any of the following areas: - Data Ingestion: The initial process of collecting and importing large volumes of data into AWS. - Data Transformation: The process of converting and reformatting data to make it suitable for analysis and reporting. - Data Warehousing: The ongoing management and storage of transformed data for analysis purposes. Your role will be to assist me in making critical decisions about data architecture and processing, using the tools and lan...

    €15 / hr Average bid
    56 bids

    I'm on a quest for an expert in big data, specifically in areas of data storage, processing, and query optimization. The ideal candidate would be required to: - Be experienced in PySpark. - Manage the storage and processing of my large datasets efficiently. Foremost in this requirement is a dynamic understanding of big data principles as it relates to data storage and processing. - Kick in with your expertise in PostgreSQL by optimizing queries for improved performance and efficiency in accessing stored data. - Using Apache Hive, you'll be tasked with data summarization, query, and in-depth analysis. This entails transforming raw data into an understandable format and performing relevant calculations and interpretations that enable insightful decisions. Skil...

    €13 Average bid
    4 bids

    I'm looking for an expert PySpark developer to help manage and process big data sets on AWS. The successful candidate will have a strong knowledge of key AWS services such as S3, Lambda, and EMR. Ingest the data from source CSV file to target delta tables Tasks include: - Building and managing large scale data processes in PySpark - Understanding and using AWS services like S3, Lambda and EMR - Implementing algorithms for data computation Ideally, you'll have: - Expertise in PySpark development - In-depth knowledge of AWS services, specifically S3, Lambda and EMR - Proven experience in handling and processing big data - A problem-solving approach with excellent attention to detail. Your experience should allow you to hit the ground running on this data p...

    €89 Average bid
    14 bids
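A minimal sketch of the CSV-to-Delta ingestion step this posting mentions; it assumes the delta-spark package is available on the cluster (bundled on EMR/Databricks runtimes that ship Delta Lake), and the paths and partition column are invented:

```python
from pyspark.sql import SparkSession

# Delta Lake session extensions; not needed on runtimes that preconfigure them.
spark = (SparkSession.builder
         .appName("csv-to-delta")
         .config("spark.sql.extensions",
                 "io.delta.sql.DeltaSparkSessionExtension")
         .config("spark.sql.catalog.spark_catalog",
                 "org.apache.spark.sql.delta.catalog.DeltaCatalog")
         .getOrCreate())

src = spark.read.csv("s3://source-bucket/incoming/", header=True, inferSchema=True)

# Append into a partitioned Delta table; Delta provides ACID commits on S3.
(src.write
    .format("delta")
    .mode("append")
    .partitionBy("load_date")  # hypothetical partition column
    .save("s3://target-bucket/delta/events/"))
```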

    Full Stack PYTHON Developer within a growing tech start-up organisation focused on transforming the professional services industry. Our stack includes Python, React, JavaScript, TypeScript, GraphQL, Pandas, NumPy, PySpark and many other exciting technologies so plenty of scope to grow your skills. We're looking for someone experienced with Python, React, Material UI, Redux, Service Workers, Fast API, Django, Flask, Git and Azure. We'd also like this person to have a proven track record working as a fullstack developer or similar role with strong problem solving skills, attention to detail and initiative to get things done. Fully remote team working across the globe but with a fantastic team culture. This is not a project role but an open ended requirement so you really...

    €533 Average bid
    187 bids

    ...using Azure Data Factory (ADF). Optimize data transformation processes using PySpark. Production experience delivering CI/CD pipelines across Azure and vendor products. Contribute to the design and development of enterprise standards. Key knowledge of architectural patterns across code and infrastructure development. Requirements: Technical Skills and Experience: Bachelor’s or master’s degree in computer science, Engineering, Data Science, or equivalent experience, with a preference for experience and a proven track record in advanced, innovative environments. 7-8 years of professional experience in data engineering. Strong expertise in Microsoft Azure data services, particularly Azure Data Factory (ADF) and PySpark. Experience with data pipeline design, deve...

    €9327 Average bid
    15 bids

    Hi, You will be working for 2 hours on a daily basis with the developer on a Zoom call. Please confirm the following: - Early morning EST, 7 am to 9 am IST - Daily 2 hours on a Zoom call - Budget approx 500/hr Required skills - Data Engineer / Databricks Developer: Python, Spark, PySpark, SQL, Azure cloud, Data Factory, Scala, Terraform, Kubernetes

    €5 / hr Average bid
    5 bids

    I am looking for a professional to design a series of training videos for beginners on Python, SQL, Pyspark, ADF, Azure Data Bricks, and Snowflake. The primary goal of these videos is to teach the fundamental principles and techniques associated with each of these technologies. As such, the curriculum for each technology will need to be developed from scratch, ensuring that it covers all the necessary topics in a clear and engaging manner. Key responsibilities include: - Developing a detailed curriculum for each technology - Creating high-quality video content - Providing thorough explanations in PDF format - Incorporating our logo into each video Ideal candidates should have: - A strong background in IT, with a focus on the technologies listed - A proven track record in creat...

    €602 Average bid
    15 bids

    Senior Python (Full Stack) Engineer Timezone: 1:30 PM to 10 PM IST What we expect: Strong knowledge of Python Experience with one of the backend frameworks (Flask/Django/FastAPI/Aiohttp) Experience with one of the modern frontend frameworks (React, Angular, Vue.js) Experience with AWS Cloud database related experience (NoSQL, relational DBs) Good understanding of application architecture principles Good written and verbal skills in English (upper-intermediate or higher) Nice to have: Knowledge of and experience in working with Kubernetes Experience with Data Engineering / ETL Pipelines (Apache Airflow, Pandas, PySpark, Hadoop, etc.) Experience with CI/CD systems Experience with Linux/Unix Experience in working with cloud automation and IaC provisioning tools (Terraform, CloudFormation, et...

    €747 Average bid
    24 bids

    I'm looking for someone with solid experience in Google Cloud Platform (GCP) and Databricks, specifically for data processing and analytics. Your primary responsibility will be to translate SQL code to Spark SQL and adapt, so this experience is crucial. Key Responsibilities: - Translating SQL code to Spark SQL, and adapting as necessary - Working with Google Cloud Platform (GCP) and Databricks - Data processing and analytics Ideal Skills and Experience: - Strong experience in Google Cloud Platform (GCP) and Databricks - Proficient in SQL and Spark SQL - Previous experience working on data processing and analytics projects - A solid understanding of cloud storage and databases - Ability to efficiently adapt SQL code to Spark SQL Please apply if you have the required skills and exp...

    €17 Average bid
    10 bids
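The core of such a translation task is usually mapping dialect-specific constructs onto Spark SQL. A small illustrative example; the GCS path, table, and columns are invented:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("sql-translation").getOrCreate()

orders = spark.read.parquet("gs://my-bucket/orders/")  # illustrative GCS path
orders.createOrReplaceTempView("orders")

# A T-SQL "SELECT TOP 10 ..." has no direct Spark equivalent; the usual
# translation is LIMIT. Likewise GETDATE() becomes current_timestamp().
result = spark.sql("""
    SELECT customer_id,
           SUM(amount)          AS total,
           current_timestamp()  AS computed_at
    FROM   orders
    GROUP  BY customer_id
    ORDER  BY total DESC
    LIMIT  10
""")
result.show()
```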

    I'm looking for someone with solid experience in Google Cloud Platform (GCP) and Databricks, specifically for data processing and analytics. Your primary responsibility will be to translate SQL code to Spark SQL and adapt, so this experience is crucial. Key Responsibilities: - Translating SQL code to Spark SQL, and adapting as necessary - Working with Google Cloud Platform (GCP) and Databricks - Data processing and analytics Ideal Skills and Experience: - Strong experience in Google Cloud Platform (GCP) and Databricks - Proficient in SQL and Spark SQL - Previous experience working on data processing and analytics projects - A solid understanding of cloud storage and databases - Ability to efficiently adapt SQL code to Spark SQL Please apply if you have the required skills and exp...

    €45 Average bid
    3 bids

    I'm looking for someone with solid experience in Google Cloud Platform (GCP) and Databricks, specifically for data processing and analytics. Your primary responsibility will be to translate SQL code to Spark SQL and adapt, so this experience is crucial. Key Responsibilities: - Translating SQL code to Spark SQL, and adapting as necessary - Working with Google Cloud Platform (GCP) and Databricks - Data processing and analytics Ideal Skills and Experience: - Strong experience in Google Cloud Platform (GCP) and Databricks - Proficient in SQL and Spark SQL - Previous experience working on data processing and analytics projects - A solid understanding of cloud storage and databases - Ability to efficiently adapt SQL code to Spark SQL Please apply if you have the required skills and exp...

    €33 Average bid
    18 bids

    ...have a high-complexity T-SQL stored procedure used for data analysis that I need translated into PySpark code. The procedure involves advanced SQL operations, temporary tables, and dynamic SQL. It currently handles over 10GB of data. - Skills Required: - Strong understanding and experience in PySpark and T-SQL languages - Proficiency in transforming high complexity SQL scripts to PySpark - Experience with large volume data processing - Job Scope: - Understand the functionality of the existing T-SQL stored procedure - Rewrite the procedure to return the same results using PySpark - Test the new script with the provided data set The successful freelancer will assure that the new PySpark script can handle a large volume of data and maintai...

    €153 Average bid
    15 bids
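Two recurring patterns when porting T-SQL stored procedures to PySpark are replacing #temp tables with temp views and replacing dynamic SQL with string interpolation. A hedged sketch with invented table and column names:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("tsql-port").getOrCreate()
sales = spark.read.parquet("/data/sales")  # illustrative source
sales.createOrReplaceTempView("sales")

# T-SQL: SELECT region, SUM(amount) AS total INTO #totals FROM sales GROUP BY region
# PySpark: a temp view plays the role of the #temp table.
spark.sql("""
    SELECT region, SUM(amount) AS total
    FROM sales
    GROUP BY region
""").createOrReplaceTempView("totals")

# T-SQL dynamic SQL (EXEC sp_executesql @stmt) becomes ordinary string
# interpolation before spark.sql; validate inputs to avoid injection.
threshold = 1000
high = spark.sql(f"SELECT * FROM totals WHERE total > {threshold}")
high.show()
```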

    Conversion modeling / predictive analytics. The whole department is transitioning to Databricks. I need help with creating conversion models using PySpark, comparing the results to last year, and assessing what could have been a better approach.

    €96 Average bid
    8 bids

    I'm looking for a data engineer with solid Pyspark knowledge to assist in developing a robust data storage and retrieval system, primarily focusing on a Data Warehouse. Key Responsibilities: - Implementing efficient data storage solutions for long-term retention and retrieval - Ensuring data quality and validation procedures are in place - Advising on real-time data processing capabilities Ideal Candidate: - Proficient in Pyspark with hands-on experience in data storage and retrieval projects - Familiar with Data Warehousing concepts and best practices - Able to recommend and implement appropriate real-time processing solutions - Strong attention to detail and commitment to data quality. Specifically, I have a Jira ticket that consists of creating an application tha...

    €423 Average bid
    13 bids

    I'm seeking a knowledgeable Databricks Data Engineer with expert command of Python and PySpark for my project. Your primary task will be to optimize a Delta Live Tables pipeline that handles real-time data processing and change data capture (CDC). An extensive working knowledge of the Azure cloud platform is a must for this role. Your understanding and ability to apply crucial elements in these areas will greatly contribute to the success of this project. Applicants with proven experience in this field are preferred. In your proposal, state whether you have DLT experience; otherwise you won't be considered.

    €15 / hr Average bid
    9 bids
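For reference, a hedged sketch of a CDC flow in the Delta Live Tables Python API; it runs only inside a Databricks DLT pipeline (not as a standalone script, and `spark` is provided by the runtime there), and the paths, table names, and columns are invented:

```python
import dlt
from pyspark.sql import functions as F

@dlt.table(comment="Raw CDC feed ingested from cloud storage")
def raw_events():
    # Auto Loader incrementally picks up new files from the landing path.
    return (spark.readStream.format("cloudFiles")
            .option("cloudFiles.format", "json")
            .load("/mnt/landing/events/"))

# Declare the target table, then let DLT apply inserts/updates/deletes in order.
dlt.create_streaming_table("events_current")

dlt.apply_changes(
    target="events_current",
    source="raw_events",
    keys=["event_id"],
    sequence_by=F.col("event_ts"),  # ordering column for out-of-order CDC
)
```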

    As the professional handling this project, you'll engage with big data exceeding 10GB. Proficiency in Python, Java, and PySpark is vital for success, as we demand expertise in: - Data ingestion and extraction: The role involves managing complex datasets and running ETL operations. - Data transformation and cleaning: You'll also need to audit the data for quality and cleanse it for accuracy, ensuring integrity throughout the system. - Handling streaming pipelines and Delta Live Tables: Mastery of these could be game-changing in our pipelines, facilitating the real-time analysis of data.

    €17 / hr Average bid
    35 bids

    I'm in need of a Machine Learning Engineer who can migrate our existing notebooks from RStudio and PySpark to AWS SageMaker. Your task will be to: - Understand the two models I have running locally: one is an RStudio logistic regression model, and the other is a PySpark XGBoost model, also running locally. - Migrate these two models to AWS SageMaker; the data will be on S3. - Prepare the models to run entirely on SageMaker, so that we can do training and testing 100% on SageMaker. The models currently run on a local computer, and the data is already on S3. - Configure and prepare SageMaker end to end, and teach me how you did it, since I need to replicate it in another system. - I will give you the data and access to AWS. Ideal Skills and...

    €205 Average bid
    13 bids

    The Data Engineer contractor role will be a project based role focused on migrating data pipelines from legacy infrastructure and frameworks such as Scalding to more modern infrastructure we support such as Spark Scala. This role will be responsible for: Analyzing existing data pipelines to understand their architecture, dependenci...Requirements The ideal candidate is a Data Engineer with considerable experience in migrations and Big Data frameworks. Must-Haves Scala programming language expertise Spark framework expertise Experience working with BigQuery Familiarity scheduling jobs in Airflow Fluency with Google Cloud Platform, in particular GCS and Dataproc Python programming language fluency Scalding framework fluency Pyspark framework fluency Dataflow(Apache Beam) framewor...

    €278 Average bid
    8 bids

    Hi, Please apply only as an individual. Agencies can apply, but the budget should not be more than mentioned. Role: GCP Engineer (OTP) Exp: 7+ yrs Shift: IST Cloud Storage Buckets, BigQuery (SQL, Data Transformations and movement) Airflow (python, DAGs), DBT IAM Policies PyCharm Databricks (pySpark), Azure DevOps Clear and confident communication

    €1302 Average bid
    8 bids

    I'm in need of a Machine Learning Engineer who can migrate our existing notebooks from RStudio and PySpark to AWS SageMaker. Your task will be to: - Understand the two models I have running locally: one is an RStudio logistic regression model, and the other is a PySpark XGBoost model, also running locally. - Migrate these two models to AWS SageMaker; the data will be on S3. - Prepare the models to run entirely on SageMaker, so that we can do training and testing 100% on SageMaker. The models currently run on a local computer, and the data is already on S3. - Configure and prepare SageMaker end to end, and teach me how you did it, since I need to replicate it in another system. - I will give you the data and access to AWS. Ideal Skills and...

    €366 Average bid
    NDA
    3 bids

    The Data Engineer contractor role will be a project based role focused on migrating data pipelines from legacy infrastructure and frameworks such as Scalding to more modern infrastructure we support such as Spark Scala. This role will be responsible for: Analyzing existing data pipelines to understand their architecture, dependenci...Requirements The ideal candidate is a Data Engineer with considerable experience in migrations and Big Data frameworks. Must-Haves Scala programming language expertise Spark framework expertise Experience working with BigQuery Familiarity scheduling jobs in Airflow Fluency with Google Cloud Platform, in particular GCS and Dataproc Python programming language fluency Scalding framework fluency Pyspark framework fluency Dataflow(Apache Beam) framewor...

    €279 Average bid
    10 bids

    I am looking for a dedicated specialist well-versed in using Databricks and PySpark for data processing tasks, with a primary focus on data transformation. With the provision of JSON format files, you will perform following tasks: - Carry out complex data transformations - Implement unique algorithms to ensure efficient data processing - Test results against required benchmarks Ideal Skills: - Proficient in Databricks and PySpark. - Must possess a solid background in data transformation. - Experience handling large JSON datasets. The end goal is to achieve seamless data transformation leveraging the power of Databricks and PySpark, enhancing our ability to make informed business decisions. Please provide your completed projects, and the strategies you've used ...

    €44 / hr Average bid
    27 bids
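A minimal sketch of the kind of JSON flattening such Databricks/PySpark transformation work usually starts from; the schema (order_id, items, sku, qty) and paths are invented:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("json-transform").getOrCreate()

# multiLine handles pretty-printed JSON files; paths are placeholders.
df = spark.read.option("multiLine", "true").json("/data/input/*.json")

# Typical transformation: flatten a nested array of items into one row each.
flat = (df
        .select("order_id", F.explode("items").alias("item"))
        .select(
            "order_id",
            F.col("item.sku").alias("sku"),
            F.col("item.qty").cast("int").alias("qty"),
        ))

flat.write.mode("overwrite").parquet("/data/output/")
```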

    ...functions to handle data quality and validation. - Should have a good understanding of S3, CloudFormation, CloudWatch, Service Catalog and IAM Roles - Perform data validation and ensure data accuracy and completeness by creating automated tests and implementing data validation processes. - Should have good knowledge of Tableau, including creating Tableau Published Datasets and managing access. - Write PySpark scripts to process data and perform transformations. (Good to have) - Run Spark jobs on AWS EMR cluster using Airflow DAGs. (Good to have)...

    €1355 Average bid
    22 bids

    ...Stay current with new technology options and vendor products, evaluating which ones would be a good fit for the company Troubleshoot the system and solve problems across all platform and application domains Oversee pre-production acceptance testing to ensure the high quality of a company’s services and products Skill Sets: Strong development experience in AWS Step Functions, Glue, Python, S3, Pyspark Good understanding of data warehousing, Large-scale data management issues, and concepts. Good experience in Data Analytics & Reporting and Modernization project Expertise in at least one high-level programming language such as Java, Python Skills for developing, deploying & debugging cloud applications Skills in AWS API, CLI and SDKs for writing applications Knowledge...

    €656 Average bid
    26 bids

    I am in need of a proficient PySpark coder to aid in debugging errors present within my current code. The main focus of this project is optimization and troubleshooting. Unfortunately, I can't specify the type of errors; I need a professional to help identify and rectify them. If you are an experienced PySpark coder with a keen eye for bug identification and problem solving, I'd appreciate your expertise.

    €6 - €18
    Sealed
    10 bids

    I am in need of a proficient PySpark coder to aid in debugging errors present within my current code. The main focus of this project is optimization and troubleshooting. Unfortunately, I can't specify the type of errors; I need a professional to help identify and rectify them. If you are an experienced PySpark coder with a keen eye for bug identification and problem solving, I'd appreciate your expertise.

    €6 - €18
    Sealed
    6 bids

    I'm searching for a PySpark expert who can provide assistance on optimizing and debugging current PySpark scripts. I am specifically focused on PySpark, so expertise in this area is crucial for the successful completion of this project. Key Responsibilities: - Optimizing PySpark scripts to improve efficiency - Debugging current PySpark scripts to resolve existing issues Ideal Candidate: - Proficient with PySpark - Experience in big data management, data ingestion, processing, analysis, visualization, and reporting - Strong problem-solving skills to identify and resolve issues effectively - Knowledgeable in performance tuning within PySpark.

    €93 Average bid
    65 bids

    I'm looking for a skilled freelancer to create a Spark script that transfers data from a Hive metastore to an S3 bucket. The goal of this project is to enable backup and recovery. Skills and Experience: - Proficiency in Spark and Hive - Extensive experience with S3 buckets - Understanding of data backup strategies Project Details: - The script needs to read the schemas and transfer the metadata for the selected schemas to the S3 bucket. - Only bid if you have work experience with Spark, Hive, and S3 - 4 schemas need to be migrated - I already have access to S3 configured - I have a local instance of NetApp S3 available and the bucket created. - Server is Ubuntu

    €91 Average bid
    10 bids
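A hedged sketch of the per-schema export loop such a script might use; the schema names and bucket are placeholders, and a NetApp S3 endpoint would additionally need the usual fs.s3a endpoint/credentials configuration:

```python
from pyspark.sql import SparkSession

# Hive support lets Spark see the metastore's databases and tables.
spark = (SparkSession.builder
         .appName("hive-to-s3")
         .enableHiveSupport()
         .getOrCreate())

SCHEMAS = ["sales", "hr", "finance", "ops"]   # placeholder names for the 4 schemas
TARGET = "s3a://backup-bucket/hive-export"    # bucket is a placeholder

for schema in SCHEMAS:
    for t in spark.catalog.listTables(schema):
        full_name = f"{schema}.{t.name}"
        # Write each table out as Parquet; the DESCRIBE output doubles as a
        # lightweight schema/metadata record alongside the data.
        spark.table(full_name).write.mode("overwrite").parquet(
            f"{TARGET}/{schema}/{t.name}/data")
        spark.sql(f"DESCRIBE FORMATTED {full_name}").write.mode(
            "overwrite").csv(f"{TARGET}/{schema}/{t.name}/metadata")
```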

    I am looking for an experienced data analyst who is well-versed in PySpark to clean up a medium-sized dataset in a CSV file format. The file contains between 10k-100k rows, and your primary role will be to: - Remove duplicate data entries - Deduplicate the dataset - Handle missing values - Aggregate the resultant data Your proficiency in using PySpark to automate these processes efficiently will be critical to the success of this project. Therefore, prior experience in handling and cleaning similar large datasets would be beneficial. Please note, this project requires precision, meticulousness, and a good understanding of data aggregation principles.

    €23 Average bid
    9 bids
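A minimal sketch of the dedupe/fill/aggregate steps in PySpark; column names and fill values are placeholders:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("csv-cleanup").getOrCreate()

df = spark.read.csv("input.csv", header=True, inferSchema=True)

cleaned = (df
           .dropDuplicates()                                 # remove exact duplicates
           .na.fill({"amount": 0.0, "region": "unknown"}))   # handle missing values

# Aggregate the cleaned data; grouping and metric columns are placeholders.
summary = (cleaned
           .groupBy("region")
           .agg(F.count("*").alias("rows"),
                F.sum("amount").alias("total_amount")))

summary.write.mode("overwrite").csv("cleaned_summary", header=True)
```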

    This vital task entails cleaning and sorting two CSV files, one of approximately 100,000 rows and a second of about 1.5 million rows, using PySpark (Python) in Jupyter Notebook(s). The project consists of several key tasks. Read in both datasets and then: - Standardize data to ensure consistency - Remove duplicate entries - Filter the columns we need - Handle and fill missing values - Aggregate data on certain groupings as output Important requirement: I also need unit tests to be written for the code at the end. Ideal Skills: Candidates applying for this project should be adept with PySpark in Python and have experience in data cleaning and manipulation. Experience with working on datasets of similar size would also be preferable. Attention to detail in ensuring ...

    €172 Average bid
    53 bids

    I'm seeking an experienced Data Engineer with proficiency in SQL and PySpark. Key Responsibilities: - Develop and optimize our ETL processes. - Enhance our data pipeline for smoother operations. The ideal candidate should deliver efficient extraction, transformation, and loading of data, which is critical to our project's success. Skills and Experience: - Proficient in SQL and PySpark - Proven experience in ETL process development - Previous experience in data pipeline optimization Your expertise will significantly improve our data management systems, and your ability to deliver effectively and promptly will be highly appreciated.

    €84 Average bid
    17 bids