Prerequisite: a working Hadoop cluster must already be set up.

First, adjust the YARN container memory limits in yarn-site.xml:
```xml
<!-- Physical memory a NodeManager may allocate to containers; default 8 GB, lowered to 4 GB -->
<property>
    <description>Amount of physical memory, in MB, that can be allocated
    for containers. If set to -1 and
    yarn.nodemanager.resource.detect-hardware-capabilities is true, it is
    automatically calculated (in case of Windows and Linux). In other
    cases, the default is 8192MB.
    </description>
    <name>yarn.nodemanager.resource.memory-mb</name>
    <value>4096</value>
</property>
<!-- Minimum container memory; default 512 MB -->
<property>
    <description>The minimum allocation for every container request at
    the RM in MBs. Memory requests lower than this will be set to the
    value of this property. Additionally, a node manager that is
    configured to have less memory than this value will be shut down by
    the resource manager.
    </description>
    <name>yarn.scheduler.minimum-allocation-mb</name>
    <value>512</value>
</property>
<!-- Maximum container memory; default 8 GB, lowered to 4 GB -->
<property>
    <description>The maximum allocation for every container request at
    the RM in MBs. Memory requests higher than this will throw an
    InvalidResourceRequestException.
    </description>
    <name>yarn.scheduler.maximum-allocation-mb</name>
    <value>4096</value>
</property>
<!-- Virtual memory check; on by default, disabled here -->
<property>
    <description>Whether virtual memory limits will be enforced for
    containers.</description>
    <name>yarn.nodemanager.vmem-check-enabled</name>
    <value>false</value>
</property>
```
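These limits only take effect once the file is on every node and YARN has been restarted. A minimal sketch, assuming an `xsync` distribution helper (any `scp`/`rsync` loop works) and that `$HADOOP_HOME` is set and its `sbin` is on the PATH:

```shell
# Distribute the changed config to all nodes (xsync is an assumed helper script)
xsync $HADOOP_HOME/etc/hadoop/yarn-site.xml
# Restart YARN so the new container memory limits apply
stop-yarn.sh
start-yarn.sh
```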
Next, install MySQL (used as the Hive metastore database). Remove the preinstalled MariaDB libraries that conflict with it:

```shell
sudo rpm -e --nodeps mariadb-libs
sudo yum remove mysql-libs
```
Install MySQL's dependencies:

```shell
sudo yum install libaio
sudo yum -y install autoconf
```
Install the MySQL 5.7 RPMs in dependency order:

```shell
sudo rpm -ivh mysql-community-common-5.7.28-1.el7.x86_64.rpm
sudo rpm -ivh mysql-community-libs-5.7.28-1.el7.x86_64.rpm
sudo rpm -ivh mysql-community-libs-compat-5.7.28-1.el7.x86_64.rpm
sudo rpm -ivh mysql-community-client-5.7.28-1.el7.x86_64.rpm
sudo rpm -ivh mysql-community-server-5.7.28-1.el7.x86_64.rpm
```
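To double-check that all five packages landed, a quick query of the RPM database:

```shell
# Should list the common, libs, libs-compat, client, and server packages
rpm -qa | grep -i mysql-community
```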
Check the data storage path (datadir) in /etc/my.cnf, then clear out any leftover data files:

```shell
cat /etc/my.cnf
sudo rm -rf /var/lib/mysql/*
```
Initialize the database:

```shell
sudo mysqld --initialize --user=mysql
```
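Before the first login the server must be running; assuming the systemd unit installed by mysql-community-server is named mysqld:

```shell
# Start MySQL now and on every boot
sudo systemctl start mysqld
sudo systemctl enable mysqld
```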
6. Look up the password needed for the first MySQL login
```shell
sudo cat /var/log/mysqld.log
```

Log in to MySQL with the temporary password, change it, and allow the root user to connect from any IP:

```sql
-- log in first: mysql -uroot -p'<temporary password>'
-- change the password to aaaaaa
set password = password("aaaaaa");
-- allow root to connect from any IP
update mysql.user set host='%' where user='root';
-- apply the privilege change immediately
flush privileges;
```
Configure Hive by editing conf/hive-site.xml:

```xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
    <!-- JDBC connection URL -->
    <property>
        <name>javax.jdo.option.ConnectionURL</name>
        <value>jdbc:mysql://hadoop102:3306/metastore?useSSL=false&amp;useUnicode=true&amp;characterEncoding=UTF-8</value>
    </property>
    <!-- JDBC driver class -->
    <property>
        <name>javax.jdo.option.ConnectionDriverName</name>
        <value>com.mysql.jdbc.Driver</value>
    </property>
    <!-- JDBC username -->
    <property>
        <name>javax.jdo.option.ConnectionUserName</name>
        <value>root</value>
    </property>
    <!-- JDBC password -->
    <property>
        <name>javax.jdo.option.ConnectionPassword</name>
        <value>aaaaaa</value>
    </property>
    <!-- Hive's default working directory on HDFS -->
    <property>
        <name>hive.metastore.warehouse.dir</name>
        <value>/user/hive/warehouse</value>
    </property>
    <!-- Metastore schema verification -->
    <property>
        <name>hive.metastore.schema.verification</name>
        <value>false</value>
    </property>
    <!-- Metastore event DB notification API auth -->
    <property>
        <name>hive.metastore.event.db.notification.api.auth</name>
        <value>false</value>
    </property>
    <!-- Address of the metastore service -->
    <property>
        <name>hive.metastore.uris</name>
        <value>thrift://hadoop102:9083</value>
    </property>
    <!-- Location of the Spark jars (note: port 8020 must match the NameNode port) -->
    <property>
        <name>spark.yarn.jars</name>
        <value>hdfs://hadoop102:8020/spark-jars/*</value>
    </property>
    <!-- Hive execution engine -->
    <property>
        <name>hive.execution.engine</name>
        <value>spark</value>
    </property>
    <!-- Schema reader used to (de)serialize table metadata -->
    <property>
        <name>metastore.storage.schema.reader.impl</name>
        <value>org.apache.hadoop.hive.metastore.SerDeStorageSchemaReader</value>
    </property>
</configuration>
```
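Hive only learns the driver class name from this file; the MySQL Connector/J jar itself must sit on Hive's classpath. A sketch, assuming the jar was downloaded to the current directory and Hive lives under /opt/module/hive as elsewhere in this post:

```shell
# Exact jar name depends on the Connector/J version you downloaded
cp mysql-connector-java-*.jar /opt/module/hive/lib/
```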
2. Initialize the Hive metastore database
```sql
create database metastore;
```
```shell
/opt/module/hive/bin/schematool -initSchema -dbType mysql -verbose
```
The initialization consists of two steps:

* Log in to MySQL and create the database that stores Hive's metadata.
* Use Hive's schematool utility to initialize the schema.
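To confirm the schema was created, you can list a few of the tables schematool generated (credentials as configured above):

```shell
# Expect tables such as COLUMNS_V2, TBLS, DBS, ...
mysql -uroot -paaaaaa -e "show tables;" metastore | head
```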
3. Change the metastore database's character set
```sql
mysql> alter table metastore.COLUMNS_V2 modify column COMMENT varchar(256) character set utf8;
mysql> alter table metastore.TABLE_PARAMS modify column PARAM_VALUE mediumtext character set utf8;
```
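A quick sanity check that the columns now use utf8 (the Collation column should read utf8_general_ci):

```shell
mysql -uroot -paaaaaa -e "show full columns from metastore.COLUMNS_V2;" | grep -i comment
```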
To let HiveServer2 impersonate users, configure the atguigu proxy user in Hadoop's core-site.xml:

```xml
<!-- Allow the atguigu user to act as a proxy user from any host -->
<property>
    <name>hadoop.proxyuser.atguigu.hosts</name>
    <value>*</value>
</property>
<!-- Groups the atguigu user may impersonate: any -->
<property>
    <name>hadoop.proxyuser.atguigu.groups</name>
    <value>*</value>
</property>
<!-- Users the atguigu user may impersonate: any -->
<property>
    <name>hadoop.proxyuser.atguigu.users</name>
    <value>*</value>
</property>
```
Then configure HiveServer2 in hive-site.xml:

```xml
<!-- Host HiveServer2 binds to -->
<property>
    <name>hive.server2.thrift.bind.host</name>
    <value>hadoop102</value>
</property>
<!-- Port HiveServer2 listens on -->
<property>
    <name>hive.server2.thrift.port</name>
    <value>10000</value>
</property>
<!-- HiveServer2 HA parameter; enabling it speeds up HiveServer2 startup -->
<property>
    <name>hive.server2.active.passive.ha.enable</name>
    <value>true</value>
</property>
```
Start HiveServer2: `bin/hive --service hiveserver2`
Start the metastore service: `hive --service metastore`
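With both services up, a Beeline session is a quick end-to-end test; host, port, and the proxy user follow the configuration above:

```shell
# Connect to HiveServer2; atguigu is the proxy user configured in core-site.xml
beeline -u jdbc:hive2://hadoop102:10000 -n atguigu
```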
A start/stop helper script for both Hive services:

```shell
#!/bin/bash
HIVE_LOG_DIR=$HIVE_HOME/logs
if [ ! -d $HIVE_LOG_DIR ]
then
    mkdir -p $HIVE_LOG_DIR
fi

# Check whether a process is healthy; $1 = process name, $2 = port
function check_process()
{
    pid=$(ps -ef 2>/dev/null | grep -v grep | grep -i $1 | awk '{print $2}')
    ppid=$(netstat -nltp 2>/dev/null | grep $2 | awk '{print $7}' | cut -d '/' -f 1)
    echo $pid
    [[ "$pid" =~ "$ppid" ]] && [ "$ppid" ] && return 0 || return 1
}

function hive_start()
{
    metapid=$(check_process HiveMetastore 9083)
    cmd="nohup hive --service metastore >$HIVE_LOG_DIR/metastore.log 2>&1 &"
    [ -z "$metapid" ] && eval $cmd || echo "Metastore is already running"
    server2pid=$(check_process HiveServer2 10000)
    cmd="nohup hive --service hiveserver2 >$HIVE_LOG_DIR/hiveServer2.log 2>&1 &"
    [ -z "$server2pid" ] && eval $cmd || echo "HiveServer2 is already running"
}

function hive_stop()
{
    metapid=$(check_process HiveMetastore 9083)
    [ "$metapid" ] && kill $metapid || echo "Metastore is not running"
    server2pid=$(check_process HiveServer2 10000)
    [ "$server2pid" ] && kill $server2pid || echo "HiveServer2 is not running"
}

case $1 in
"start")
    hive_start
    ;;
"stop")
    hive_stop
    ;;
"restart")
    hive_stop
    sleep 2
    hive_start
    ;;
"status")
    check_process HiveMetastore 9083 >/dev/null && echo "Metastore is running normally" || echo "Metastore is not healthy"
    check_process HiveServer2 10000 >/dev/null && echo "HiveServer2 is running normally" || echo "HiveServer2 is not healthy"
    ;;
*)
    echo "Invalid Args!"
    echo "Usage: $(basename $0) start|stop|restart|status"
    ;;
esac
```
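Assuming the script is saved as hiveservices.sh somewhere on the PATH (the name is arbitrary), usage looks like:

```shell
chmod +x hiveservices.sh
hiveservices.sh start
hiveservices.sh status
```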
Configure Hive's log directory:

```shell
mv conf/hive-log4j2.properties.template conf/hive-log4j2.properties
vim conf/hive-log4j2.properties
# set: property.hive.log.dir=/opt/module/hive/logs
```
Raise the heap size for the Hive client:

```shell
mv conf/hive-env.sh.template conf/hive-env.sh
vim conf/hive-env.sh
# set: export HADOOP_HEAPSIZE=1024
```
To print column headers and the current database name in the Hive CLI, add to hive-site.xml:

```xml
<property>
    <name>hive.cli.print.header</name>
    <value>true</value>
</property>
<property>
    <name>hive.cli.print.current.db</name>
    <value>true</value>
</property>
```
Create Spark's default configuration for Hive on Spark:

```properties
# vim /opt/module/hive/conf/spark-defaults.conf
spark.master               yarn
spark.eventLog.enabled     true
spark.eventLog.dir         hdfs://hadoop102:8020/spark-history
spark.executor.memory      2g
spark.driver.memory        1g
```
Create an HDFS directory to store the Spark history logs:
```shell
hadoop fs -mkdir /spark-history
```
Add the Hive-on-Spark settings to /opt/module/hive/conf/hive-site.xml:
```xml
<!-- Location of the Spark jars (note: port 8020 must match the NameNode port) -->
<property>
    <name>spark.yarn.jars</name>
    <value>hdfs://hadoop102:8020/spark-jars/*</value>
</property>
<!-- Hive execution engine -->
<property>
    <name>hive.execution.engine</name>
    <value>spark</value>
</property>
<!-- Schema reader used to (de)serialize table metadata -->
<property>
    <name>metastore.storage.schema.reader.impl</name>
    <value>org.apache.hadoop.hive.metastore.SerDeStorageSchemaReader</value>
</property>
```
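spark.yarn.jars points at HDFS, so the Spark jars must be uploaded there first. A sketch, assuming a Spark build without Hive jars unpacked under /opt/module/spark (the install path is an assumption):

```shell
# Make the Spark jars available to YARN containers via HDFS
hadoop fs -mkdir -p /spark-jars
hadoop fs -put /opt/module/spark/jars/* /spark-jars
```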
__Increase the ApplicationMaster resource ratio__
Edit capacity-scheduler.xml:

```xml
<property>
    <name>yarn.scheduler.capacity.maximum-am-resource-percent</name>
    <value>0.8</value>
</property>
```

After changing a Hadoop configuration file, it must be synced to every node in the cluster before it takes effect.
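For this particular file a full restart is avoidable: after distributing it, the Capacity Scheduler can reload its configuration in place. A sketch, again assuming an `xsync`-style sync helper:

```shell
# Push the file to all nodes, then reload the scheduler config
xsync $HADOOP_HOME/etc/hadoop/capacity-scheduler.xml
yarn rmadmin -refreshQueues
```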
If the servers can reach the public internet, cluster time synchronization is optional, since each machine periodically syncs with public time servers.
On an isolated intranet, designate one server as the time base and have the other nodes sync to it through a scheduled task.
```shell
# Already Asia/Shanghai? Then nothing needs to change.
timedatectl status | grep 'Time zone'
#       Time zone: Asia/Shanghai (CST, +0800)

# Keep the hardware clock in step with the local clock
timedatectl set-local-rtc 1
# Set the time zone to Shanghai
timedatectl set-timezone Asia/Shanghai
```
Check whether ntpd is enabled at boot:

```shell
sudo systemctl is-enabled ntpd
```
On the base node, edit the NTP configuration:

```shell
sudo vim /etc/ntp.conf
```

(1) Uncomment and adjust the restrict line so that nodes in 192.168.61.0 ~ 192.168.61.255 may sync time from this machine:

```
#restrict 192.168.1.0 mask 255.255.255.0 nomodify notrap
==>
restrict 192.168.61.0 mask 255.255.255.0 nomodify notrap
```

(2) Comment out the public servers so internet time is not used:

```
#server 0.centos.pool.ntp.org iburst
#server 1.centos.pool.ntp.org iburst
#server 2.centos.pool.ntp.org iburst
#server 3.centos.pool.ntp.org iburst
```

(3) Add the following so that, even if this node loses network connectivity, it can still serve its local clock as the time source for the rest of the cluster:

```
server 127.127.1.0
fudge 127.127.1.0 stratum 10
```

(4) Keep the hardware clock in sync with the system time:

```shell
sudo vim /etc/sysconfig/ntpd
# add: SYNC_HWCLOCK=yes
```

(5) Restart ntpd and enable it on boot:

```shell
sudo systemctl start ntpd
sudo systemctl enable ntpd
```
On the other cluster nodes:

(1) Stop and disable ntpd:

```shell
systemctl stop ntpd
systemctl disable ntpd
```

(2) Create a crontab entry that syncs from the base node every minute:

```shell
sudo crontab -e
```

```
*/1 * * * * /usr/sbin/ntpdate <base-node-ip>
```
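To verify the cron job's command without waiting for the next minute, you can run it once by hand (hadoop102 stands in for your actual base node):

```shell
sudo /usr/sbin/ntpdate hadoop102
date   # should now match the base node
```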