Hadoop Environment Setup: Hive Configuration in Detail


1. Copy the downloaded Hive archive into the /opt/software/ directory.

Package version: apache-hive-3.1.2-bin.tar.gz

2. Extract the package into the /opt/module/ directory:

cd /opt/software/

tar -zxvf apache-hive-3.1.2-bin.tar.gz -C /opt/module/

3. Edit the system environment variables:

vi /etc/profile

Add the following lines in the editor:

export HIVE_HOME=/opt/module/apache-hive-3.1.2-bin

export PATH=$PATH:$HADOOP_HOME/sbin:$HIVE_HOME/bin

4. Reload the environment configuration:

source /etc/profile
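To confirm the new variables are visible in the current shell, an optional check (not part of the original steps) is:

echo $HIVE_HOME

which hive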

5. Modify the Hive environment variables:

cd /opt/module/apache-hive-3.1.2-bin/bin/

① Configure the hive-config.sh file

vi hive-config.sh

Add the following lines in the editor:

export JAVA_HOME=/opt/module/jdk1.8.0_212

export HIVE_HOME=/opt/module/apache-hive-3.1.2-bin

export HADOOP_HOME=/opt/module/hadoop-3.2.0

export HIVE_CONF_DIR=/opt/module/apache-hive-3.1.2-bin/conf

6. Copy the Hive configuration file template:

cd /opt/module/apache-hive-3.1.2-bin/conf/

cp hive-default.xml.template hive-site.xml

7. Edit the Hive configuration file; locate each of the following properties and change them as shown:

vi hive-site.xml

<property>
    <name>javax.jdo.option.ConnectionDriverName</name>
    <value>com.mysql.cj.jdbc.Driver</value>
    <description>Driver class name for a JDBC metastore</description>
</property>

<property>
    <name>javax.jdo.option.ConnectionUserName</name>
    <value>root</value>
    <description>Username to use against metastore database</description>
</property>

<property>
    <name>javax.jdo.option.ConnectionPassword</name>
    <!-- set your own password here -->
    <value>123456</value>
    <description>password to use against metastore database</description>
</property>

<property>
    <name>javax.jdo.option.ConnectionURL</name>
    <value>jdbc:mysql://192.168.1.100:3306/hive?useUnicode=true&amp;characterEncoding=utf8&amp;useSSL=false&amp;serverTimezone=GMT</value>
    <description>
        JDBC connect string for a JDBC metastore.
        To use SSL to encrypt/authenticate the connection, provide database-specific SSL flag in the connection URL.
        For example, jdbc:postgresql://myhost/db?ssl=true for postgres database.
    </description>
</property>

<property>
    <name>datanucleus.schema.autoCreateAll</name>
    <value>true</value>
    <description>Auto creates necessary schema on a startup if one doesn't exist. Set this to false, after creating it once. To enable auto create also set hive.metastore.schema.verification=false. Auto creation is not recommended for production use cases, run schematool command instead.</description>
</property>

<property>
    <name>hive.metastore.schema.verification</name>
    <value>false</value>
    <description>
        Enforce metastore schema version consistency.
        True: Verify that version information stored in metastore is compatible with one from Hive jars. Also disable automatic
        schema migration attempt. Users are required to manually migrate schema after Hive upgrade which ensures
        proper metastore schema migration. (Default)
        False: Warn if the version information stored in metastore doesn't match with one from in Hive jars.
    </description>
</property>

<property>
    <name>hive.exec.local.scratchdir</name>
    <value>/opt/module/apache-hive-3.1.2-bin/tmp/${user.name}</value>
    <description>Local scratch space for Hive jobs</description>
</property>

<property>
    <name>system:java.io.tmpdir</name>
    <value>/opt/module/apache-hive-3.1.2-bin/iotmp</value>
</property>

<property>
    <name>hive.downloaded.resources.dir</name>
    <value>/opt/module/apache-hive-3.1.2-bin/tmp/${hive.session.id}_resources</value>
    <description>Temporary local directory for added resources in the remote file system.</description>
</property>

<property>
    <name>hive.querylog.location</name>
    <value>/opt/module/apache-hive-3.1.2-bin/tmp/${system:user.name}</value>
    <description>Location of Hive run time structured log file</description>
</property>

<property>
    <name>hive.server2.logging.operation.log.location</name>
    <value>/opt/module/apache-hive-3.1.2-bin/tmp/${system:user.name}/operation_logs</value>
    <description>Top level directory where operation logs are stored if logging functionality is enabled</description>
</property>

<property>
    <name>hive.metastore.db.type</name>
    <value>mysql</value>
    <description>
        Expects one of [derby, oracle, mysql, mssql, postgres].
        Type of database used by the metastore. Information schema &amp; JDBCStorageHandler depend on it.
    </description>
</property>

<property>
    <name>hive.cli.print.current.db</name>
    <value>true</value>
    <description>Whether to include the current database in the Hive prompt.</description>
</property>

<property>
    <name>hive.cli.print.header</name>
    <value>true</value>
    <description>Whether to print the names of the columns in query output.</description>
</property>

<property>
    <name>hive.metastore.warehouse.dir</name>
    <value>/opt/hive/warehouse</value>
    <description>location of default database for the warehouse</description>
</property>
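Several of the paths above point at tmp and iotmp directories under the Hive install. If they do not already exist, you can create them up front; this is an optional precaution, not part of the original write-up:

mkdir -p /opt/module/apache-hive-3.1.2-bin/tmp

mkdir -p /opt/module/apache-hive-3.1.2-bin/iotmp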

8. Upload the MySQL driver package to the /opt/module/apache-hive-3.1.2-bin/lib/ directory.

Driver package: mysql-connector-java-8.0.15.zip; unzip it and copy the jar inside into the lib/ directory, as sketched below.
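A minimal sketch of this step, assuming the zip was downloaded to /opt/software/ and that the jar sits in a folder of the same name once unzipped (adjust the path to match the actual archive layout):

cd /opt/software/

unzip mysql-connector-java-8.0.15.zip

cp mysql-connector-java-8.0.15/mysql-connector-java-8.0.15.jar /opt/module/apache-hive-3.1.2-bin/lib/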

9. Log in to MySQL and create a database named hive, so that the MySQL instance contains a database called hive.
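For example, logging in with the root account used in the configuration above:

mysql -u root -p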

mysql> create database hive;

10. Initialize the metastore schema:

schematool -dbType mysql -initSchema
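To confirm the schema was created, an optional check (not in the original steps) is to list the metastore tables in MySQL:

mysql> use hive;

mysql> show tables;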

11. Start the cluster:

start-all.sh    # on Hadoop100

start-yarn.sh   # on Hadoop101
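As an optional sanity check, not part of the original steps, running jps on each node should show the expected Hadoop daemons (NameNode, DataNode, ResourceManager, NodeManager, etc.):

jps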

12. Start Hive:

hive

13. Check that the startup succeeded:

show databases;

If databases are listed (a fresh metastore shows at least the built-in default database), Hive has started successfully.
