Mohd T.
@tausy
4.9 (110 reviews)
Level: 6.4
$35 USD / hour
Saharanpur, India
Local time here: 9:56 AM
Joined November 4, 2012
3 Recommendations
96% Jobs Completed
83% On Budget
95% On Time
17% Repeat Hire Rate

ML, AI, Data Science, Python, Hadoop, Databases

- Data Scientist with over 7 years of industry experience and a solid understanding of Machine Learning, Data Analysis, Big Data/Hadoop, ETL, and Databases.
- I hold a Master's degree in Data Science from Trinity College Dublin and a Bachelor's degree in Computer Science.
- Currently working as a data scientist at one of the world's largest banking and financial firms.
- Solid expertise in analyzing and maintaining large datasets.
- Honed skills in Data Ingestion, Data Analysis, Data Migration, Data Consolidation, Data Processing, Data Visualization, and Data Mining.
- Over my 7-year career, I have worked primarily on Predictive Modeling, Machine Learning, and Hadoop, delivering cutting-edge predictive models in the Healthcare, Aviation, and Financial sectors.
- Extensive experience building machine learning applications with Python and its ML stack, including NumPy, Pandas, Scikit-Learn, and Matplotlib.
- Experience designing and implementing data analytics pipelines and ML systems on big data using PySpark.
- Worked extensively with Big Data and Hadoop stack tools including, but not limited to, Sqoop, Flume, Oozie, Hive, Impala, HDFS, and MapReduce.
- Years of project work with SQL, PL/SQL, ETL, Informatica, SSIS, and Informatica DIH.
- Proficient in the Java and Python programming languages; I also work with the R statistical language.
- Current areas of interest: Data Science, Data Analytics, Machine Learning, Predictive Modeling, Knowledge Discovery in Databases (KDD), Data Mining, Web Mining, and Information Retrieval.

Contact Mohd T. about your job

Log in to discuss any details over chat.

Portfolio Items

Classify Attacks and Normal Traffic Data Using PySpark
Designed a binary classifier to separate attack traffic from normal traffic. Both classes were derived from the raw network packets of the UNSW-NB15 dataset, created with the IXIA PerfectStorm tool in the Cyber Range Lab of the Australian Centre for Cyber Security (ACCS) to produce a hybrid of real modern normal activities and synthetic contemporary attack behaviours. The Tcpdump tool captured 100 GB of raw traffic (Pcap files). The dataset contains nine attack types: Fuzzers, Analysis, Backdoors, DoS, Exploits, Generic, Reconnaissance, Shellcode, and Worms. The Argus and Bro-IDS tools, together with twelve purpose-built algorithms, generate a total of 49 features plus the class label.
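The project above trained on the 49 engineered UNSW-NB15 features with PySpark; as a hypothetical, minimal sketch of the same binary attack-vs-normal idea, here is a scikit-learn classifier fit on synthetic stand-in features (all data, dimensions, and the labeling rule below are illustrative, not from the original project):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the 49 engineered flow features.
rng = np.random.default_rng(0)
n_samples, n_features = 1000, 10
X = rng.normal(size=(n_samples, n_features))
# Toy labeling rule: 1 = attack, 0 = normal.
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
acc = accuracy_score(y_te, clf.predict(X_te))
print(f"held-out accuracy: {acc:.2f}")
```

On real traffic data the same pattern applies, with the feature matrix replaced by the Argus/Bro-IDS-derived features and the estimator swapped for a distributed one (e.g. Spark MLlib) when the data no longer fits on one machine.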
Authorship Attribution Using Machine Learning
Authorship attribution aims to predict the author of a text written by an unknown author, given a list of known candidates. This project examined how well various supervised machine learning methods predict the authors of unknown texts. We compared Logistic Regression, Linear Support Vector Machines, Naive Bayes, and Random Forest on the authorship attribution task using 5-fold cross-validation, then selected the best algorithm for the final predictions.

A publicly available dataset from the UCI Machine Learning Repository was used for the author identification study.
Dataset: https://archive.ics.uci.edu/ml/datasets/Victorian+Era+Authorship+Attribution

Of the four models, Linear SVC outperformed the other three, achieving a mean accuracy of 0.96 over the 5 folds. The model also performed well on the unknown-author texts, predicting each author with 99% accuracy.
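A minimal, hypothetical sketch of the model-comparison workflow described above, using scikit-learn on a tiny toy corpus (the real study used the UCI Victorian Era Authorship Attribution dataset; the texts and labels below are illustrative stand-ins):

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import MultinomialNB
from sklearn.svm import LinearSVC

# Toy stand-in corpus: two "authors" with distinct vocabularies.
texts = (["the sea and the storm and the ship"] * 10
         + ["a garden of roses and quiet light"] * 10)
labels = [0] * 10 + [1] * 10

# Bag-of-words features; the original study would fit this on real documents.
X = TfidfVectorizer().fit_transform(texts)

models = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "linear_svc": LinearSVC(),
    "naive_bayes": MultinomialNB(),
    "random_forest": RandomForestClassifier(n_estimators=50, random_state=0),
}

# 5-fold cross-validated mean accuracy for each candidate model.
scores = {name: cross_val_score(model, X, labels, cv=5).mean()
          for name, model in models.items()}
best = max(scores, key=scores.get)
print(best, scores)
```

The best-scoring model would then be refit on the full training set and applied to the unknown-author texts, mirroring the selection step in the project.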
Authorship attribution attempts to anticipate the author of a piece of text authored by an unknown author from a list of known authors. The performance of various supervised machine learning methods to predict the authors of unknown texts was examined in this project. We compared the performances of Logistic Regression, Linear Support Vector Machines, Naive Bayes, and Random Forest algorithms on the authorship attribution task using 5-fold cross validation. After that, we chose the best algorithm and attempted to predict.

A publicly available dataset from the UCI Machine Learning Repository is used to conduct the study of author identification.
dataset: https://archive.ics.uci.edu/ml/datasets/Victorian+Era+Authorship+Attribution

Out of the four models, Linear SVC outperformed the other three models and achieved a mean accuracy of 0.96 over 5-folds. The model performed well on the unknown authored texts and predicted the authors with a 99% accuracy for each of the authors.
Authorship Attribution Using Machine Learning
Authorship attribution attempts to anticipate the author of a piece of text authored by an unknown author from a list of known authors. The performance of various supervised machine learning methods to predict the authors of unknown texts was examined in this project. We compared the performances of Logistic Regression, Linear Support Vector Machines, Naive Bayes, and Random Forest algorithms on the authorship attribution task using 5-fold cross validation. After that, we chose the best algorithm and attempted to predict.

A publicly available dataset from the UCI Machine Learning Repository is used to conduct the study of author identification.
dataset: https://archive.ics.uci.edu/ml/datasets/Victorian+Era+Authorship+Attribution

Out of the four models, Linear SVC outperformed the other three models and achieved a mean accuracy of 0.96 over 5-folds. The model performed well on the unknown authored texts and predicted the authors with a 99% accuracy for each of the authors.
Authorship Attribution Using Machine Learning
Authorship attribution attempts to anticipate the author of a piece of text authored by an unknown author from a list of known authors. The performance of various supervised machine learning methods to predict the authors of unknown texts was examined in this project. We compared the performances of Logistic Regression, Linear Support Vector Machines, Naive Bayes, and Random Forest algorithms on the authorship attribution task using 5-fold cross validation. After that, we chose the best algorithm and attempted to predict.

A publicly available dataset from the UCI Machine Learning Repository is used to conduct the study of author identification.
dataset: https://archive.ics.uci.edu/ml/datasets/Victorian+Era+Authorship+Attribution

Out of the four models, Linear SVC outperformed the other three models and achieved a mean accuracy of 0.96 over 5-folds. The model performed well on the unknown authored texts and predicted the authors with a 99% accuracy for each of the authors.
Authorship Attribution Using Machine Learning
Authorship attribution attempts to anticipate the author of a piece of text authored by an unknown author from a list of known authors. The performance of various supervised machine learning methods to predict the authors of unknown texts was examined in this project. We compared the performances of Logistic Regression, Linear Support Vector Machines, Naive Bayes, and Random Forest algorithms on the authorship attribution task using 5-fold cross validation. After that, we chose the best algorithm and attempted to predict.

A publicly available dataset from the UCI Machine Learning Repository is used to conduct the study of author identification.
dataset: https://archive.ics.uci.edu/ml/datasets/Victorian+Era+Authorship+Attribution

Out of the four models, Linear SVC outperformed the other three models and achieved a mean accuracy of 0.96 over 5-folds. The model performed well on the unknown authored texts and predicted the authors with a 99% accuracy for each of the authors.
Authorship Attribution Using Machine Learning
Stock markets are affected by many factors, causing uncertainty and high volatility. Time-series prediction is widely used in real-world applications such as weather forecasting and financial market prediction: it uses data collected over a period of time to predict the value at the next time step. This project captures market sentiment from financial news and uses it, together with time-series prediction, to predict stock prices.

The objectives of this project can be summarised as follows:

1. Analyse the sentiment of financial news.
2. Use the generated sentiment along with other important features to predict the price of a stock.
3. Use Spark on Hadoop so that the PoC can be scaled up without trouble.

The linear regressor performed very well, predicting Apple Inc. stock prices on the training set with a Root Mean Square Error (RMSE) of 0.0088; the RMSE on the test set was 0.2010.
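The second objective can be sketched as a linear regression over lagged prices plus a per-day sentiment score. Everything below is synthetic and illustrative: in the project the sentiment feature would come from an NLP model over financial news, and the pipeline would run on Spark rather than in-memory NumPy.

```python
# Sketch: lagged price + news-sentiment feature feeding a linear regressor.
# Synthetic random-walk prices and random sentiment stand in for real data.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
prices = np.cumsum(rng.normal(0, 1, 200)) + 100      # synthetic price series
sentiment = rng.uniform(-1, 1, 200)                   # placeholder sentiment scores

# Features for day t: previous day's close and day t's sentiment score
X = np.column_stack([prices[:-1], sentiment[1:]])
y = prices[1:]

split = 150  # chronological train/test split (no shuffling for time series)
model = LinearRegression().fit(X[:split], y[:split])
rmse = mean_squared_error(y[split:], model.predict(X[split:])) ** 0.5
```

Note the chronological split: shuffling a time series before splitting would leak future information into training.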
Stock Price Prediction Using News Sentiment
A deep CNN model was created to detect four shapes (star, circle, square, and triangle) in images. The model architecture was optimized to improve its accuracy.

dataset: https://www.kaggle.com/datasets/smeschke/four-shapes

We trained a convolutional neural network (CNN) on the four-shapes dataset. The model uses convolutional layers, pooling layers, dropout layers, normalization layers, and dense layers with ReLU activations to classify the shapes.

The training accuracy of the network was 100 percent after ten epochs of training, whereas the validation accuracy was 98.5 percent.
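A network with the layer types listed above could be sketched in Keras as follows. This is a minimal stand-in, not the project's exact architecture: the 64x64 grayscale input size, filter counts, and dropout rate are all assumptions.

```python
# Sketch: a small CNN with conv, pooling, normalization, dropout, and dense layers,
# assuming the shape images are resized to 64x64 grayscale. Hypothetical hyperparameters.
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(64, 64, 1)),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.BatchNormalization(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dropout(0.3),                       # regularization before the classifier head
    layers.Dense(128, activation="relu"),
    layers.Dense(4, activation="softmax"),     # four classes: star, circle, square, triangle
])
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
```

Training would then call `model.fit` on the image arrays with one-hot labels for roughly ten epochs, as described above.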
FOUR SHAPES CLASSIFICATION USING DEEP LEARNING
This project used deep neural networks to classify images of animals. Our approach uses transfer learning: pre-trained CNN architectures are fine-tuned on animal images so the model can predict the animal in a given image.

The VGG-19 architecture was used as the base model and fine-tuned on the animal images. We applied several data augmentation techniques (scale, shear, flip, etc.) and trained the model for 200 epochs. It achieved over 91% accuracy on the training set, 78% on the validation set, and approximately 84% on the test set.
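The transfer-learning setup can be sketched as a frozen VGG-19 base with a new classification head. The input size, head layout, and class count below are illustrative assumptions; the project would also load ImageNet weights (`weights="imagenet"`), which this offline sketch skips.

```python
# Sketch: VGG-19 base (frozen) + new dense head, with the kind of augmentation
# described above. Input size and 10-class head are assumptions.
from tensorflow.keras.applications import VGG19
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras import layers, models

base = VGG19(weights=None, include_top=False, input_shape=(128, 128, 3))
base.trainable = False  # freeze the pre-trained convolutional base

model = models.Sequential([
    base,
    layers.Flatten(),
    layers.Dense(256, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(10, activation="softmax"),  # hypothetical 10 animal classes
])
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])

# Augmentation: rescale, shear, zoom (scale), and horizontal flip
aug = ImageDataGenerator(rescale=1 / 255.0, shear_range=0.2,
                         zoom_range=0.2, horizontal_flip=True)
```

Freezing the base means only the head's weights are updated at first; the base can optionally be unfrozen later for fine-tuning at a lower learning rate.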
Animals Image Classification Using Deep/Transfer Learning
The goal of this project is to create a system that detects whether or not someone is wearing a mask. CCTV cameras record images or real-time video footage; facial features are extracted from the footage and used to identify a mask on the face. The application uses convolutional neural networks to detect face masks, and it also counts the number of people wearing a proper face covering versus those who are not.
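The counting step described above reduces to tallying per-face classifier outputs. The helper below is a hypothetical sketch: `detections` stands in for the CNN's per-face mask probabilities, and the 0.5 threshold is an assumed default.

```python
# Sketch: tally masked vs unmasked people from per-face mask probabilities.
# `detections` and the threshold are illustrative, not the project's actual values.
def count_masks(detections, threshold=0.5):
    """Each detection is the model's probability that a detected face is masked."""
    masked = sum(1 for p in detections if p >= threshold)
    unmasked = len(detections) - masked
    return masked, unmasked

# Example: four faces in a frame, two above the mask-probability threshold
masked, unmasked = count_masks([0.9, 0.2, 0.7, 0.4])  # → (2, 2)
```

In the real pipeline this would run per video frame, after a face detector has cropped each face and the CNN has scored it.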
Mask Detection/Real-time Human Counting with Deep Learning

Reviews

Showing 1 - 5 of 50+ reviews
Filtering reviews by: 5.0
₹16,000.00 INR
He submitted the project within the provided time frame!
Python Machine Learning (ML)
Sri D. @srideepthisd3
2 months ago
5.0
$50.00 USD
Great and fast work; communication with him is quick and convenient. I am happy to work with him.
Python NLP
N A. @inourah14
4 months ago
4.8
₹20,000.00 INR
It was really nice to work with Mohd. T. I would love to recommend this coder for all your relevant projects. Looking forward to work with him on my future projects.
Python Data Processing Excel Microsoft Access MySQL
Siraj M. @sirajmultani
8 months ago
5.0
$120.00 USD
The works are impeccable. Delivered on time and comply with everything requested. One of the best freelancer on this site.
Java Python Machine Learning (ML) Big Data Sales
+1 more
F. O. @fortizclavijo
9 months ago
5.0
$350.00 SGD
Hired him to work on one of my projects. He was able to deliver the project proposal, project poster, artefact and report ahead of time. Guided me all the way when setting up the environment and running the program. Friendly approach made it easier to deal with him.
Python Software Architecture Report Writing Machine Learning (ML) Statistical Analysis
Albin V. @albinvarghese
10 months ago

Experience

Data Scientist

Citibank Europe
Dec 2019 - Present
Working as a data scientist in the AI/ML team.

Hadoop/Machine Learning Developer

Opera Solutions
Sep 2017 - Present
Working on the Hadoop ecosystem in combination with Python/machine learning to deliver predictive models.

Hadoop Developer

Tata Global Delivery Center SA, Montevideo, Uruguay, SA
Apr 2016 - Aug 2017 (1 year, 4 months)
Worked on the Hadoop ecosystem to deliver cutting-edge predictive models using Sqoop, Flume, Oozie, Hive, and MapReduce.

Education

MSc Data Science

Trinity College, Dublin, Ireland 2018 - 2019
(1 year)

Bachelor Of Technology (Computer Engineering)

Jamia Millia Islamia, India 2009 - 2013
(4 years)

Qualifications

Certificate in Healthcare

Tata Business Domain Academy
2014

Oracle Database Certified SQL Expert

Oracle University
2015
SQL proficiency test certificate provided by Oracle

Oracle Database Certified PL/SQL Expert

Oracle University
2015
PL/SQL proficiency test certificate provided by Oracle

Verifications

Preferred Freelancer
Identity Verified
Payment Verified
Phone Verified
Email Verified
Facebook Connected

Certifications

Preferred Freelancer Program SLA 1: 92%
SQL 1: 90%
Java 1: 87%
SQL 2: 85%
Python 1: 80%

Top Skills

Python 76 Java 59 Big Data Sales 45 Hadoop 41 Machine Learning (ML) 17
