Hadoop jobs
Good afternoon! Are you still considering job offers? I would like to offer you a remote vacancy in your field: ETL Developer. Experience: Oracle/Hadoop/Teradata; SQL; Python; Spark; Hive/Airflow/Grafana. Salary: 130,000-150,000 RUB. Location: Moscow. Best regards, external recruiter for the company
Good afternoon! Are you still considering job offers? I would like to offer you a remote vacancy in your field: Tester. Experience: Oracle/Hadoop/Teradata; SQL/Python/Airflow/PostgreSQL. Salary: 130,000-150,000 RUB. Location: Moscow. Best regards, external recruiter for the company
Needed: a software architect for a project, with a proven track record of developing enterprise solutions using Hadoop (a portfolio and references are required)
Our company is looking for a Java Hadoop Developer to work in our Kyiv office. Remote work is possible. jobs.dou.ua/companies/ta-venture/vacancies/16952/ If you are interested, add me on Skype (ID-estmilitare). The listed budget is incorrect; more details in correspondence. Thank you. Yulia
...storage of measurement results (so far on the scale of a few terabytes) - a system for building reports (over aggregated data) and viewing them through a web interface. In short, if you: - can code in any language, or at least know how to program in two of the following: php, perl, python, ruby, c++, bash; - understand how tcp/ip and http work; - know words like hadoop and mongodb and are interested in large high-load systems; - have set up and operated various unix-family OSes and understand at least roughly how they differ; - are tidy, like order, and prefer to keep even small projects in git or svn; - are interested in new technologies and are actively developing yourself (!) Then you are probably the person I am looking for! I think,...
Urgently need a program for matrix-by-matrix multiplication in Hadoop on a cluster. The matrix is square, size 10x10 or larger. Payment negotiable, with a bonus for urgency. Please respond.
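For reference, the standard one-pass MapReduce matrix multiplication emits a partial product keyed by the output cell (i, j), and the reducer sums the partials per key. A minimal pure-Python sketch of the two phases (no Hadoop cluster needed; the dense-list representation and function names are illustrative, not from the posting):

```python
from collections import defaultdict

def map_phase(A, B, n):
    """Emit ((i, j), partial_product) pairs for C = A * B, both n x n."""
    for i in range(n):
        for k in range(n):
            for j in range(n):
                yield (i, j), A[i][k] * B[k][j]

def reduce_phase(pairs):
    """Sum partial products per output cell, as the reducer would."""
    out = defaultdict(int)
    for key, val in pairs:
        out[key] += val
    return out

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
C = reduce_phase(map_phase(A, B, 2))
# C[(0, 0)] == 1*5 + 2*7 == 19
```

In a real Hadoop job the mapper reads tagged matrix entries from HDFS and the framework performs the shuffle that this sketch simulates by grouping keys in a dict.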
Big data project in Java needs to be done in 24 hours. The person needs to be experienced in Spark and Hadoop.
Looking for a Hadoop specialist to design a query optimisation solution. Currently the search freezes when a user tries to run more than one search at a time, and we need to implement a fix. This is a remote project. Please share your approach first if you have done similar work. The UI is in React and the backend is in Node.js.
# HBase shell (JRuby) script: batch-put several columns into one row
import 'org.apache.hadoop.hbase.client.HTable'
import 'org.apache.hadoop.hbase.client.Put'

# Convert every argument to a Java byte array, as the HBase API expects
def jbytes(*args)
  args.map { |arg| arg.to_s.to_java_bytes }
end

def put_many(table_name, row, column_values)
  table = HTable.new(@hbase.configuration, table_name)
  p = Put.new(*jbytes(row))
  column_values.each do |column, value|
    family, qualifier = column.split(':')
    p.add(*jbytes(family, qualifier, value))
  end
  table.put(p)
end

# Call the put_many function with sample data
put_many 'wiki', 'DevOps', {
  "text:" => "What DevOps IaC do you use?",
  "revision:author" => "Frayad Gebrehana",
  "revision:comment" => "Terraform"
}

# Get data from the 'wiki' table
get 'wiki', 'DevOps'

# Do not remove the exit call below
exit
I need assistance with installing and setting up Hadoop. Skills and experience required: - Proficiency in Hadoop installation and setup - Knowledge of different Hadoop versions (Hadoop 1.x and Hadoop 2.x) - Ability to work within a tight timeline (the project needs to be completed within 7 hours) Please note that there is no specific preference for the version of Hadoop to be used.
WordPress black theme. Design as in the photo; images can be taken from Udemy. Coupon code: 90OFFOCT23 (subscribe by 7 Oct '23 or till stock lasts). Data Engineering Career Path: Big Data Hadoop and Spark with Scala; Scala Programming In-Depth; Apache Spark In-Depth (Spark with Scala); DP-900: Microsoft Azure Data Fundamentals. Data Science Career Path: Data Analysis In-Depth (With Python): https://www
Seeking an expert in both Hadoop and Spark to assist with various big data projects. Skills and experience needed for the job: - Intermediate-level proficiency in Hadoop and Spark - Strong understanding of big data concepts and tools - Experience working on big data projects - Familiarity with data processing and analysis using Hadoop and Spark - Ability to troubleshoot and optimize big data tools - Strong problem-solving skills and attention to detail
I am looking for a freelancer to compare the performance metrics of Hadoop, Spark, and Kafka using the data that I will provide. Skills and experience required: - Strong knowledge of big data processing architectures, specifically Hadoop, Spark, and Kafka - Proficiency in analyzing and comparing performance metrics - Ability to present findings through written analysis, graphs and charts, and tables and figures The comparison should focus on key performance metrics such as processing speed, scalability, fault tolerance, throughput, and latency. The freelancer should be able to provide a comprehensive analysis of these metrics and present them in a clear and visually appealing manner. I will explain more about the data
Looking for Hadoop Hive Experts I am seeking experienced Hadoop Hive experts for a personal project. Requirements: - Advanced level of expertise in Hadoop Hive - Strong understanding of big data processing and analysis - Proficient in Hive query language (HQL) - Experience with data warehousing and ETL processes - Familiarity with Apache Hadoop ecosystem tools (e.g., HDFS, MapReduce) - Ability to optimize and tune Hadoop Hive queries for performance If you have a deep understanding of Hadoop Hive and can effectively analyze and process big data, then this project is for you. Please provide examples of your previous work in Hadoop Hive and any relevant certifications or qualifications. I am flexible with the timeframe for completing the...
I am looking for a Kafka Admin who can assist me with the following tasks: - Onboarding the Kafka cluster - Managing Kafka topics and partitions - The cluster is already available in the company, and we need to onboard it for our project - Should be able to size and scope the setup - We will start with small data ingestion from the Hadoop data lake - Should be willing to work on a remote machine The ideal candidate should have experience in: - Setting up and configuring Kafka clusters - Managing Kafka topics and partitions - Troubleshooting Kafka performance issues The client already has all the necessary hardware and software for the Kafka cluster setup.
Over the past years, I have devoted myself to a project involving Algorithmic Trading. My system leverages only pricing and volume data at market closing. It studies technical indicators for every stock in the S&P 500 from its IPO date, testing all possible indicator 'settings', as I prefer to call them. This process uncovers microscopic signals that suggest beneficial buying at market close and selling at the next day's close. Any signal with a p-value below 0.01 is added to my portfolio. Following this, the system removes correlated signals to prevent duplication. A Bayesian ranking of signals is calculated, and correlated signals with a lower rank are eliminated. The result is a daily optimized portfolio of buy/sell signals. This system, primarily built with numpy...
I am looking for a Hadoop developer with a strong background in data analysis. The scope of the project involves analyzing and interpreting data using Hadoop. The ideal candidate should have experience in Hadoop data analysis and be able to work on the project within a timeline of less than 1 month.
1: model and implement efficient big data solutions for various application areas using appropriately selected algorithms and data structures. 2: analyse methods and algorithms, to compare and evaluate them with respect to time and space requirements, and make appropriate design choices when solving real-world problems. 3: motivate and explain trade-offs in big data processing technique design and analysis in written and oral form. 4: explain the Big Data fundamentals, including the evolution of Big Data, the characteristics of Big Data and the challenges introduced. 6: apply the novel architectures and platforms introduced for Big Data, i.e., Hadoop, MapReduce and Spark, to complex problems on the Hadoop execution pl...
I am looking for a freelancer who can help me with an issue I am facing with launching Apache Gobblin in YARN. Here are the details of the project: Error Message: NoClassDefFoundError (please note that this question was skipped, so the error message may not be accurate) Apache Gobblin Version: 2.0.0 YARN Configuration: Not sure Skills and Experience: - Strong knowledge and experience with Apache Gobblin - Expertise in Hadoop and YARN configuration and troubleshooting - Familiarity with interrupt exceptions and related issues - Ability to diagnose and resolve issues in a timely manner - Excellent communication skills to effectively collaborate with me and understand the problem If you have the required skills and experience, please bid on thi...
Write MapReduce programs that give you a chance to develop an understanding of principles when solving complex problems on the Hadoop execution platform.
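The canonical first MapReduce exercise is word count. A sketch of the mapper, the framework's shuffle/sort step, and the reducer in plain Python (Hadoop Streaming would run the same logic over stdin/stdout; the function names here are illustrative):

```python
from itertools import groupby

def mapper(lines):
    """Streaming-style mapper: emit (word, 1) for every token."""
    for line in lines:
        for word in line.strip().split():
            yield word, 1

def shuffle_sort(pairs):
    """Hadoop sorts map output by key between the two phases."""
    return sorted(pairs)

def reducer(sorted_pairs):
    """Sum the counts per word, mirroring a streaming reducer."""
    for word, group in groupby(sorted_pairs, key=lambda kv: kv[0]):
        yield word, sum(count for _, count in group)

counts = dict(reducer(shuffle_sort(mapper(["to be or not to be"]))))
# counts == {"be": 2, "not": 1, "or": 1, "to": 2}
```

The same structure generalizes: once map, shuffle, and reduce are understood on word count, more complex problems differ mainly in the choice of key and the reduce logic.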
It's java hadoop mapreduce task. The program should run on windows OS. An algorithm must be devised and implemented that can recognize the language of a given text. Thank you.
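One simple approach to the language-recognition task (offered as a sketch, not the posting's required algorithm): score each candidate language by how many of its stopwords occur in the text, with the mapper scoring text chunks and the reducer summing per-language scores. A minimal Python version of the scoring heuristic; the tiny stopword lists are illustrative, not complete:

```python
# Illustrative stopword lists; a real detector would use much larger sets
STOPWORDS = {
    "en": {"the", "and", "of", "to", "is", "in"},
    "de": {"der", "die", "und", "ist", "das", "nicht"},
    "fr": {"le", "la", "et", "est", "les", "des"},
}

def detect_language(text):
    """Score each language by its stopword hits and pick the maximum."""
    words = set(text.lower().split())
    scores = {lang: len(words & sw) for lang, sw in STOPWORDS.items()}
    return max(scores, key=scores.get)

detect_language("the cat is in the house")  # "en"
```

In the MapReduce framing, each mapper would emit (language, score) for its input split and a single reducer would pick the language with the highest total.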
Looking for a freelancer to help with a simple Hadoop Spark task focusing on data visualization. The ideal candidate should have experience in: - Hadoop and Spark - Data visualization tools and techniques - Ability to work quickly and deliver results as soon as possible. The task is: Use the following link to get the dataset: Write a report that contains the following steps: 1. Write the steps of the Spark & Hadoop setup with some screenshots. 2. Import libraries and set the work background (steps + screenshots) 3. Load and discover data (steps + screenshots + code) 4. Data cleaning and preprocessing (steps + screenshots + code) 5. Data analysis - Simple analysis (explanation, screenshots of code) - Moderate analysis (explanation
I am looking for an advanced Hadoop trainer for an online training program. I have some specific topics to be covered as part of the program, and it is essential that the trainer can provide in-depth knowledge and expertise in Hadoop. The topics to be discussed include Big Data technologies, Hadoop administration, Data warehousing, MapReduce, HDFS Architecture, Cluster Management, Real Time Processing, HBase, Apache Sqoop, and Flume. Of course, the trainer should also have good working knowledge about other Big Data topics and techniques. In addition to the topics mentioned, the successful candidate must also demonstrate the ability to tailor the course to meet the learner’s individual needs, making sure that the classes are engaging and fun. The trainer must ...
I am looking for a freelancer with some experience in working with Hadoop and Spark, specifically in setting up a logging platform. I need full assistance in setting up the platform and answering analytical questions using log files within Hadoop. Ideal skills and experience for this project include: - Experience working with Hadoop and Spark - Knowledge of setting up logging platforms - Analytical skills to answer questions using log files
Looking for a freelancer to help with a simple Hadoop Spark task focusing on data visualization. The ideal candidate should have experience in: - Hadoop and Spark - Data visualization tools and techniques - Ability to work quickly and deliver results as soon as possible. The task is: Use the following link to get the dataset: 1- Using the Hadoop Spark software, execute three examples: simple, moderate, and advanced over the chosen DS. 2- For each case, write the code and a screenshot of the output. 3- Visualize the results of each example with an appropriate method.
...procorpsystem <<mail here>> Reply to: <<mail here>> Role: Hadoop Developer / Admin with Production Support. Location: Austin, TX. Duration: 12 Months. Job Description: We are looking for someone with strong experience in production support, administration, and development with Hadoop technologies. • Minimum experience 8 years • Must have hands-on experience managing multiple Hortonworks clusters. Troubleshooting, maintaining, and monitoring is the key responsibility here. • M...
...System Development Engineer to join our team for a remote project in Korea. The ideal candidate should have expertise in Hadoop, HDFS. Responsibilities: - Coding and implementing HDFS system development tasks - Collaborating with the team to design and develop efficient solutions - Conducting thorough testing and debugging to ensure system reliability [Project Overview] We are looking for skilled professionals capable of enhancing and developing the HDFS technology used in the NMS (Network Management System) related systems of one of the three major Korean telecommunication companies. [Detailed Job Description] 1. Establish HDFS cluster using the vanilla version of Hadoop - Expected integration with 16 servers 2. Build a Data Warehouse based on HDFS - Construct a sys...
I am seeking an experienced data engineer to perform training for my company. We are looking for somebody with advanced knowledge in Python, as well as Big Data technologies such as Hadoop and Spark. The training should last 1-2 months, teaching our team the fundamentals as well as more advanced applications of these skills in data engineering. If you think you have the experience and qualifications to provide this training, I encourage you to submit your proposal.
I am looking for a talented data scientist to help with a project that requires data analysis, machine learning and data visualization. I have medium-sized data sets ready to go, at between 1,000 and 10,000 rows. The data sets are...I'm seeking someone who can make sense of the data and use it to create data visualizations. This person should possess a strong understanding of machine learning and data analysis principles. The successful applicant will be expected to translate data into a visual form that will be easy to understand and communicate to others. Any experience with software such as Python, R, SPSS, Apache, and Hadoop will be greatly beneficial. If you think you have the skills to produce great results and get the job done, then please get in touch and let me know ho...
Quantori is a new company with a long history. We have over twenty years' experience in developing software for the pharmaceutical industry and driving advanced strategies in the world of Big Data revolution. ...- Good written and spoken English skills (upper-intermediate or higher) Nice to have: - Knowledge of web-based frameworks (Flask, Django, FastAPI) - Knowledge of and experience in working with Kubernetes - Experience in working with cloud automation and IaC provisioning tools (Terraform, CloudFormation, etc.) - Experience with Data Engineering / ETL Pipelines (Apache Airflow, Pandas, PySpark, Hadoop, etc.) - Good understanding of application architecture principles We offer: - Competitive compensation - Remote work - Flexible working hours - A team with an excellent...
I am looking for an experienced Hadoop engineer to assist with troubleshooting and optimization of our existing Hadoop cluster. The successful candidate will need to demonstrate a high level of proficiency in relevant Hadoop technologies, as well as experience in troubleshooting and optimization. This engineer will be responsible for monitoring the performance of our Hadoop cluster, making adjustments, and ensuring the environment runs efficiently. This individual must be able to identify possible inefficiencies and areas for improvement, providing solutions and suggestions for achieving better performance and scalability. This engineer should also be knowledgeable about data processing and analysis, as well as provide the necessary tech support...
Need to fix the missing files and blocks issue on an AWS EMR cluster that has corrupt files/jars/blocks.
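On a running cluster, this kind of corruption is usually diagnosed with `hdfs fsck`; the commands below are a sketch of the typical workflow (the paths are illustrative, and `-delete` discards data, so it is a last resort after confirming the files are unrecoverable):

```shell
# List files that have missing or corrupt blocks (run on the master node)
hdfs fsck / -list-corruptfileblocks

# Show block-level detail and replica locations for a suspect path
hdfs fsck /user/hadoop/data -files -blocks -locations

# Last resort: remove corrupt files once they are confirmed unrecoverable
hdfs fsck / -delete
```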
We are an expanding IT company seeking skilled and experienced data engineering professionals to support ou...years of experience in a data engineering role. Desired (but not required) Skills: - Experience with other data processing technologies such as Apache Flink, Apache Beam, or Apache Nifi. - Knowledge of containerization technologies like Docker and Kubernetes. - Familiarity with data visualization tools such as Tableau, Power BI, or Looker. - Understanding of Big Data tools and technologies like Hadoop, MapReduce, etc. If you possess the necessary skills and experience, we invite you to reach out to us with your CV and relevant information. We are excited to collaborate with you and contribute to the continued success and innovation of our IT company in the field of data en...
I am in immediate need of a full-time Java Spark developer for my project. T...developer for my project. The main goal of the project is data analysis, and I require a developer with mid-level experience. The estimated duration of the project is 6+ months, so I need someone who can commit to a long-term engagement. Ideal skills and experience for this project include: • Expertise in Java Spark • Strong background in data analysis • Experience with big data technologies such as Hadoop and Kafka • Knowledge of distributed systems and cloud computing • Ability to work independently and as part of a team • Strong problem-solving and communication skills If you are a mid-level Java Spark developer looking for a long-term project, please apply with your res...
We are seeking a skilled Big Data Engineer to join our team. The ideal candidate will have experience in data analysis, machine learning, modelling, and pipeline warehousing. Knowledge of Hadoop, Spark, Python, Scala, Java, Storm, Kafka, Flink, Kubernetes, and Docker is essential for this project. The expected duration of the project is more than 3 months. If you have a passion for Big Data and are excited about working in a dynamic team environment, we encourage you to apply.
Quantori is a new company with a long history. We have over twenty years' experience in developing software for the pharmaceutical industry and driving advanced strategies in the world of Big Data revolution. ...- Good written and spoken English skills (upper-intermediate or higher) Nice to have: - Knowledge of web-based frameworks (Flask, Django, FastAPI) - Knowledge of and experience in working with Kubernetes - Experience in working with cloud automation and IaC provisioning tools (Terraform, CloudFormation, etc.) - Experience with Data Engineering / ETL Pipelines (Apache Airflow, Pandas, PySpark, Hadoop, etc.) - Good understanding of application architecture principles We offer: - Competitive compensation - Remote work - Flexible working hours - A team with an excellent...
EMR ssh tunneling (run from your workstation; each -L forwards a local port to the master node, and -ND 8157 opens a SOCKS proxy for browser access):

ssh -i <path-to-your-key-pair> \
    -L 8020:localhost:8020 \
    -L 8088:localhost:8088 \
    -L 50070:localhost:50070 \
    -L 10000:localhost:10000 \
    -L 50030:localhost:50030 \
    -L 19888:localhost:19888 \
    -L 8080:localhost:8080 \
    -L 4040:localhost:4040 \
    -ND 8157 hadoop@<master-node-public-dns>

Default port meanings: 8020 NameNode RPC, 8088 YARN ResourceManager UI, 50070 NameNode web UI (Hadoop 2), 10000 HiveServer2, 50030 JobTracker UI (Hadoop 1), 19888 MapReduce JobHistory UI, 4040 Spark application UI; 8080 is application-specific.
...5-6 years of experience in a data engineering role Desired (but not required) Skills: - Experience with other data processing technologies such as Apache Flink, Apache Beam, or Apache Nifi - Knowledge of containerization technologies like Docker and Kubernetes - Familiarity with data visualization tools such as Tableau, Power BI, or Looker - Understanding of Big Data tools and technologies like Hadoop, MapReduce, etc. If you have the skills and experience we're looking for, we'd love to hear from you! Please reach out to us with your CV and experience information. We are looking forward to working with you to help our IT company continue to thrive and innovate in the world of data engineering. ...
...corresponding files to HDFS - Use Google Trends to get a popularity score per news category - Use the popularity score to simulate read access for each article on HDFS, generating log files. Now we have, for example, a football article with a popularity score / read-access count x; based on this score (how many times it got accessed) we categorize each article as HOT, WARM, or COLD. With the default Hadoop HDFS replication policy (3x), I need to measure system performance and storage. Then I need to modify the existing files of the system so that HOT files get replicated 3x, WARM 2x and COLD 1x, and any new file inserted into the system is added as HOT (3x). So we measure the performance and storage here. Then I need a machine learning model trained on the articles dataset to b...
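The tiering policy described above can be sketched as a mapping from read-access counts to replication factors, applied per file with `hdfs dfs -setrep`. The thresholds below are illustrative assumptions, not values from the project:

```python
def tier_for(access_count, hot=100, warm=10):
    """Classify a file by read count; the thresholds are illustrative."""
    if access_count >= hot:
        return "HOT"
    if access_count >= warm:
        return "WARM"
    return "COLD"

# Replication factor per tier, per the policy: HOT 3x, WARM 2x, COLD 1x
REPLICAS = {"HOT": 3, "WARM": 2, "COLD": 1}

def setrep_command(path, access_count):
    """Build the hdfs command that would apply the tier's replication."""
    factor = REPLICAS[tier_for(access_count)]
    return "hdfs dfs -setrep -w {} {}".format(factor, path)

setrep_command("/articles/football_01.txt", 150)
# "hdfs dfs -setrep -w 3 /articles/football_01.txt"
```

A driver script would read the simulated access logs, aggregate counts per file, and emit one `setrep` call per file whose tier has changed.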
I need a freelancer who can help me enhance our techniques for migrating Hive data to a Neo4j database. Our Hive data size is less than 100 GB and we are currently using Hadoop for the migration. However, we are facing speed and performance issues with the current migration process. Ideal Skills and Experience: - Strong experience in migrating Hive data to a Neo4j database - Expertise in Hadoop and other migration tools or frameworks - Knowledge of optimizing speed and performance in the migration process - Familiarity with data consistency and the complexity of the process The project requires quick and efficient migration techniques to ensure that our data is migrated smoothly and without any loss. Please apply only if you have the ideal skills and experience to...
Hi, we need a freelancer who has experience in Cornerstone, MySQL, Python and Hadoop. It's a job-support project: you need to connect with our consultant through a Zoom meeting and help him complete his tasks. You will work on his system remotely by taking mouse control via Zoom, 2 hrs/day, 5 days/week. For that we will pay you USD 100 per month. Timing: anytime before 10 a.m. IST or anytime after 7 p.m. IST will be fine.
We require a Hadoop trainer for 10 days in Roorkee (Uttarakhand). There will be 37 participants in total. We will provide accommodation and food for the trainer, as well as travel from their origin point to Roorkee and back.
...In-depth knowledge of Apache Spark and its various components, including Spark Core, Spark SQL, and Spark Streaming. ✅ Familiarity with data warehousing concepts and technologies, such as dimensional modeling and OLAP. ✅ Expertise in ETL (Extract, Transform, Load) processes, data pipelines, and data integration tools. ✅ Understanding of distributed systems and experience with technologies like Hadoop and HDFS. ✅ Proficiency in programming languages commonly used in data engineering, such as Python or Scala. ✅ Ability to design and optimize data architectures, ensuring scalability, performance, and reliability. ✅ Experience with cloud platforms like AWS, GCP, or Azure, and their respective data engineering services. ✅ Familiarity with SQL and NoSQL databases, and proficiency in wr...
We need to deliver 10 days of Hadoop training to engineering students, so we need a person who can deliver this training in person in Roorkee. We will take care of accommodation, food, and travel.
I am looking for a data engineer to assist with our data processing tasks. The ideal candidate will have experience with Hadoop and Spark Scala, strong Java coding skills, and experience working in a PuTTY environment.