zookeeper-3.4.14 Installation Document

zookeeper installation 1. Upload the installation package to the /opt directory 2. Extract it: tar -zxf /opt/zookeeper-3.4.14.tar.gz -C ./ 3. Configure environment variables: vim /etc/profile export ZK_HOME=/opt/zookeeper-3.4.14 export PATH=$Z ...
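A minimal sketch of steps 2–3 above. The PATH export is truncated in the excerpt, so its completion here (`$ZK_HOME/bin`) is an assumption, and the zoo.cfg values are illustrative standalone-mode defaults; the sketch writes to a temp directory so it runs anywhere:

```shell
# Sketch of the install steps; the real paths assume the tarball was uploaded to /opt.
ZK_HOME=/opt/zookeeper-3.4.14
export ZK_HOME
export PATH="$ZK_HOME/bin:$PATH"   # assumed completion of the truncated export line

# A minimal standalone zoo.cfg; written to a temp dir here so the sketch is runnable.
conf_dir=$(mktemp -d)
cat > "$conf_dir/zoo.cfg" <<'EOF'
tickTime=2000
dataDir=/var/lib/zookeeper
clientPort=2181
EOF
echo "wrote $conf_dir/zoo.cfg"
```

On a real node the file would go to `$ZK_HOME/conf/zoo.cfg` before running `zkServer.sh start`.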

Added by cage on Wed, 21 Aug 2019 09:48:25 +0300

Single-machine, pseudo-distributed and fully distributed Hadoop

1. Hadoop single-machine mode 1. Create the hadoop user and set the hadoop user's password [root@server1 ~]# ls hadoop-3.0.3.tar.gz jdk-8u181-linux-x64.tar.gz [root@server1 ~]# useradd hadoop [root@server1 ~]# id hadoop uid=1000(hadoop) gid=1000(ha ...
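The `useradd`/`passwd` steps above require root, so a runnable sketch can only demonstrate the verification step; `check_user` is a hypothetical helper wrapping the `id` check from the excerpt:

```shell
# On the real node (as root) the excerpt's steps are:
#   useradd hadoop    # creates the hadoop user (uid/gid 1000 on a fresh system)
#   passwd hadoop     # set the hadoop user's password
# check_user is a hypothetical helper wrapping the `id` verification step.
check_user() {
  if id "$1" >/dev/null 2>&1; then
    echo "user $1 exists"
  else
    echo "user $1 missing"
  fi
}
check_user "$(whoami)"   # the current user always exists
```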

Added by tidou on Sat, 17 Aug 2019 11:21:07 +0300

Spark notes: arrays, maps, tuples and collections

1.1. Arrays 1.1.1. Fixed-length and variable-length arrays (1) A fixed-length array is defined as: val arr = new Array[T](n) (2) A variable-length array is defined as: val arr = ArrayBuffer[T]() Note that an import is required: import scala.collection.mutable.ArrayBuffer package cn.itcast.scala import scala.collection.mutable.ArrayBuffer object ...

Added by Avi on Tue, 13 Aug 2019 10:14:47 +0300

4 IDEA Environment Application

Chapter 4 IDEA Environment Application The spark shell is mainly used for testing and validating our programs. In production environments, programs are usually written in an IDE, packaged into jar files, and submitted to the cluster. The most common approach is to create a Maven project and use Maven to manage the dependencies of j ...

Added by amal.barman on Sat, 03 Aug 2019 21:40:25 +0300

CDH5.16: Offline Installation Deployment

Article Contents 1. Preparations 1.1 The offline deployment is divided into three main parts 1.2 Planning 1.3 Download sources 2. Cluster node initialization 2.1 Purchase three pay-as-you-go ECS hosts on Aliyun 2.2 Configure the hosts file on Windows 2.3 Configure the hosts file on Linux 2.4 ...

Added by EnDee321 on Mon, 29 Jul 2019 15:34:51 +0300

Preparations for Hadoop Cluster Building-01

The whole process of building a hadoop cluster includes: preparation in advance; installing zookeeper and configuring the environment; compiling, installing and starting hadoop; installing HDFS, whose namenode and datanodes manage the cluster's hard disk resources; installing and starting yarn so that MapReduce can manage CPU and memory resources. 01 Preparation ...

Added by gasxtreme on Sun, 21 Jul 2019 14:34:55 +0300

Local mode, pseudo-distributed cluster, distributed cluster and high-availability environments for the Hadoop 2.5.2 HDFS system

1. Prepare the environment (JDK and Hadoop) $ tar -zxf hadoop-2.5.2.tar.gz -C /opt/app/ // Uninstall the Java shipped with Linux and install JDK 1.8; hive only supports JDK 1.7 or above $ rpm -qa | grep java $ rpm -e --nodeps java Related documents 2. Environment Configuration Configure your environment with etc/hadoop/hadoop-env.sh in ...
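The unpack step can be illustrated self-containedly by building a dummy archive first; the JAVA_HOME value below is an assumption, and only the tar flags match the excerpt:

```shell
# Build a dummy hadoop-2.5.2 tarball in a temp dir so the extract step runs anywhere.
work=$(mktemp -d)
mkdir -p "$work/hadoop-2.5.2/etc/hadoop"
# hadoop-env.sh is where JAVA_HOME is set (step 2 of the excerpt); the JDK path is assumed.
echo 'export JAVA_HOME=/opt/app/jdk1.8.0_181' > "$work/hadoop-2.5.2/etc/hadoop/hadoop-env.sh"
tar -czf "$work/hadoop-2.5.2.tar.gz" -C "$work" hadoop-2.5.2

# The extract step, with the same flags as in the excerpt (-C picks the destination):
dest=$(mktemp -d)
tar -zxf "$work/hadoop-2.5.2.tar.gz" -C "$dest"
grep JAVA_HOME "$dest/hadoop-2.5.2/etc/hadoop/hadoop-env.sh"
```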

Added by jeethau on Wed, 17 Jul 2019 20:20:54 +0300

Hive Installation & First Experience

Download & Unzip Download Hive 1.2.1 from https://mirrors.tuna.tsinghua.edu.cn/apache/hive/hive-1.2.1/apache-hive-1.2.1-bin.tar.gz Then extract it to the target directory with the following command: tar -zxvf apache-hive-1.2.1-bin.tar.gz -C /root/apps/ Then rename it with the following command: mv apache-hive- ...
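The extract-and-rename sequence can be sketched with a dummy archive; the final name `hive-1.2.1` is an assumption, since the `mv` target is truncated in the excerpt:

```shell
# Dummy apache-hive-1.2.1-bin tarball so the steps run without the real download.
work=$(mktemp -d)
mkdir -p "$work/apache-hive-1.2.1-bin/bin"
touch "$work/apache-hive-1.2.1-bin/bin/hive"
tar -czf "$work/apache-hive-1.2.1-bin.tar.gz" -C "$work" apache-hive-1.2.1-bin

apps=$(mktemp -d)                      # stands in for /root/apps
tar -zxvf "$work/apache-hive-1.2.1-bin.tar.gz" -C "$apps"
# Rename; the target name hive-1.2.1 is assumed (the mv command is truncated above).
mv "$apps/apache-hive-1.2.1-bin" "$apps/hive-1.2.1"
ls "$apps"
```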

Added by jimmyhumbled on Mon, 15 Jul 2019 23:25:59 +0300

Memo on deploying a Hadoop 2.7.3 distributed cluster on a single machine with docker 17.03.1

Memo on deploying a Hadoop 2.7.3 distributed cluster on a single machine with docker 17.03.1 [TOC] Statement: all articles are my technical notes; please credit the source when reproducing them. https://segmentfault.com/u/yzwall 0 docker and hadoop version notes PC: ubuntu 16.04.1 LTS Docker version: 17.03.1-ce OS/Arc ...

Added by flowingwindrider on Fri, 28 Jun 2019 02:42:15 +0300

CentOS 7 cluster deployment Hadoop 2.7.3

1. Overview 2. Virtual machine installation (CentOS 7 is used in this article) 1. This article uses CentOS 7 for installation and deployment. 2. jdk1.8 3. Hadoop 2.7.3 hostname/ip: master 10.10.1.3, slave1 10.10.1.4 3. Install jdk, configure the firewall and SSH, modify the hostname There is a big difference ...
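The hostname/IP plan above normally goes into /etc/hosts on every node; here it is written to a temp file so the sketch runs without root, with a small lookup helper (my addition, not from the article):

```shell
# /etc/hosts entries for the plan above (master 10.10.1.3, slave1 10.10.1.4).
# Written to a temp file here; on a real node, append these lines to /etc/hosts as root.
hosts=$(mktemp)
cat > "$hosts" <<'EOF'
10.10.1.3 master
10.10.1.4 slave1
EOF

# Look up an IP by hostname from the file (illustrative helper):
lookup() { awk -v h="$2" '$2 == h { print $1 }' "$1"; }
lookup "$hosts" master
```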

Added by Simply Me on Mon, 24 Jun 2019 00:39:40 +0300