README.md
@@ -18,7 +18,6 @@
 <th><img width="50px" src="https://github.com/heibaiying/BigData-Notes/blob/master/pictures/flink.png"></th>
 <th><img width="50px" src="https://github.com/heibaiying/BigData-Notes/blob/master/pictures/storm.png"></th>
 <th><img width="50px" src="https://github.com/heibaiying/BigData-Notes/blob/master/pictures/flume.png"></th>
-<th><img width="50px" src="https://github.com/heibaiying/BigData-Notes/blob/master/pictures/oozie.jpg"></th>
 <th><img width="50px" src="https://github.com/heibaiying/BigData-Notes/blob/master/pictures/sqoop.png"></th>
 <th><img width="50px" src="https://github.com/heibaiying/BigData-Notes/blob/master/pictures/azkaban.png"></th>
 <th><img width="50px" src="https://github.com/heibaiying/BigData-Notes/blob/master/pictures/hbase.png"></th>
@@ -33,19 +32,17 @@
 <td align="center"><a href="#四flink">Flink</a></td>
 <td align="center"><a href="#五storm">Storm</a></td>
 <td align="center"><a href="#六flume">Flume</a></td>
-<td align="center"><a href="#七oozie">Oozie</a></td>
-<td align="center"><a href="#八sqoop">Sqoop</a></td>
-<td align="center"><a href="#九azkaban">Azkaban</a></td>
-<td align="center"><a href="#十hbase">HBase</a></td>
-<td align="center"><a href="#十一kafka">Kafka</a></td>
-<td align="center"><a href="#十二zookeeper">Zookeeper</a></td>
-<td align="center"><a href="#十三scala">Scala</a></td>
+<td align="center"><a href="#七sqoop">Sqoop</a></td>
+<td align="center"><a href="#八azkaban">Azkaban</a></td>
+<td align="center"><a href="#九hbase">HBase</a></td>
+<td align="center"><a href="#十kafka">Kafka</a></td>
+<td align="center"><a href="#十一zookeeper">Zookeeper</a></td>
+<td align="center"><a href="#十二scala">Scala</a></td>
 </tr>
 </table>

 > Detailed installation steps for all software covered in this repository are collected in: [Guide to Installing Common Big Data Software on Linux](https://github.com/heibaiying/BigData-Notes/blob/master/notes/Linux中大数据常用软件安装指南.md)
@@ -55,27 +52,52 @@
 1. [HDFS, a distributed file storage system](https://github.com/heibaiying/BigData-Notes/blob/master/notes/Hadoop-HDFS.md)
 2. [MapReduce, a distributed computing framework](https://github.com/heibaiying/BigData-Notes/blob/master/notes/Hadoop-MapReduce.md)
 3. [YARN, the cluster resource manager](https://github.com/heibaiying/BigData-Notes/blob/master/notes/Hadoop-YARN.md)
 4. Setting up a Hadoop single-node pseudo-cluster

 ## 二、Hive

 1. [The Hive data warehouse](https://github.com/heibaiying/BigData-Notes/blob/master/notes/Hive.md)
 2. Installing and deploying Hive on Linux

 ## 三、Spark

-1. RDDs in depth
-2. Spark Transformations and Actions
+1. Introduction to Spark
+2. Setting up a standalone Spark environment
+3. RDDs in depth
+4. Spark Transformations and Actions

 ## 四、Flink

 TODO

 ## 五、Storm

-1. [Storm core concepts in detail](https://github.com/heibaiying/BigData-Notes/blob/master/notes/Storm核心概念详解.md)
+1. Introduction to Storm
+2. [Storm core concepts in detail](https://github.com/heibaiying/BigData-Notes/blob/master/notes/Storm核心概念详解.md)
+3. Setting up a standalone Storm environment
+4. The Storm programming model

 ## 六、Flume
-## 七、Oozie
-## 八、Sqoop
-## 九、Azkaban
-## 十、HBase

+1. Introduction to Flume
+2. Installing and deploying Flume on Linux
+3. Using Flume
+4. Integrating Flume with Kafka

+## 七、Sqoop

+1. Introduction to Sqoop
+2. Basic usage of Sqoop

+## 八、Azkaban

+1. Introduction to the Azkaban project
+2. Building and deploying Azkaban 3.x
+3. Using Azkaban Flow 1.0
+4. Using Azkaban Flow 2.0

+## 九、HBase

 1. [Basic HBase environment setup (standalone / pseudo-distributed mode)](https://github.com/heibaiying/BigData-Notes/blob/master/notes/installation/Hbase%E5%9F%BA%E6%9C%AC%E7%8E%AF%E5%A2%83%E6%90%AD%E5%BB%BA.md)
 2. [HBase architecture and data structures](https://github.com/heibaiying/BigData-Notes/blob/master/notes/Hbase%E7%B3%BB%E7%BB%9F%E6%9E%B6%E6%9E%84%E5%8F%8A%E6%95%B0%E6%8D%AE%E7%BB%93%E6%9E%84.md)

@@ -85,6 +107,19 @@

 6. HBase backup and recovery
 7. [Phoenix, the SQL layer over HBase](https://github.com/heibaiying/BigData-Notes/blob/master/notes/Hbase%E7%9A%84SQL%E5%B1%82%E2%80%94%E2%80%94Phoenix.md)
 8. [Integrating Spring / Spring Boot with MyBatis and Phoenix](https://github.com/heibaiying/BigData-Notes/blob/master/notes/Spring%2BMybtais%2BPhoenix%E6%95%B4%E5%90%88.md)
-## 十一、Kafka
-## 十二、Zookeeper
-## 十三、Scala
+## 十、Kafka

+1. Introduction to Kafka and analysis of its message-processing flow
+2. Building a highly available Kafka cluster with Zookeeper
+3. Kafka replication and leader election

+## 十一、Zookeeper

+1. Introduction to Zookeeper and how it works
+2. Setting up a Zookeeper cluster
+3. Implementing distributed locks with Zookeeper
+4. Zookeeper cluster upgrade and migration in depth
+5. The ZAB protocol and leader election

+## 十二、Scala
notes/Azkaban Flow 1.0 的使用.md (new file)
@@ -0,0 +1,215 @@
# Using Azkaban Flow 1.0

## 1. Introduction

Azkaban provides a user-friendly web UI through which you can upload configuration files to schedule tasks. Azkaban has two important concepts:

- **Job**: a single scheduled task that you want to execute;
- **Flow**: the graph formed by one or more Jobs and the dependencies between them.

Azkaban 3.x currently supports both Flow 1.0 and Flow 2.0. This article covers Flow 1.0; the next article covers Flow 2.0.

## 2. Basic Task Scheduling

### 2.1 Creating a Project

Create a project on the Azkaban home page:

![](https://github.com/heibaiying/BigData-Notes/blob/master/pictures/azkaban-create-project.png)

### 2.2 Job Configuration

Create a job configuration file named `Hello-Azkaban.job` (note the `job` extension) with the following content. The task here is trivial: it just prints `'Hello Azkaban!'`:

```shell
#command.job
type=command
command=echo 'Hello Azkaban!'
```

### 2.3 Packaging and Uploading

Package `Hello-Azkaban.job` into a `zip` archive.
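A minimal sketch of this step on Linux, assuming the standard `zip` utility (the archive name is arbitrary):

```shell
# bundle the single job file into an archive for upload
zip Hello-Azkaban.zip Hello-Azkaban.job
```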
![](https://github.com/heibaiying/BigData-Notes/blob/master/pictures/azkaban-zip.png)

Upload it through the web UI:

![](https://github.com/heibaiying/BigData-Notes/blob/master/pictures/azkaban-upload.png)

After a successful upload you can see the corresponding Flows:

![](https://github.com/heibaiying/BigData-Notes/blob/master/pictures/azkaban-flows.png)

### 2.4 Running the Flow

Click `Execute Flow` on the page to run the task:

![](https://github.com/heibaiying/BigData-Notes/blob/master/pictures/azkaban-execute.png)

### 2.5 Execution Results

Click `detail` to view the job's execution log:

![](https://github.com/heibaiying/BigData-Notes/blob/master/pictures/azkaban-successed.png)

![](https://github.com/heibaiying/BigData-Notes/blob/master/pictures/azkaban-log.png)

## 3. Scheduling Multiple Jobs

### 3.1 Configuring Dependencies

Suppose we have five tasks (Task-A through Task-E). Task D can only run after tasks A, B, and C have finished, and Task E can only run after Task D has finished. The `dependencies` property defines these relationships. The job configurations are as follows:

**Task-A.job**:

```shell
type=command
command=echo 'Task A'
```

**Task-B.job**:

```shell
type=command
command=echo 'Task B'
```

**Task-C.job**:

```shell
type=command
command=echo 'Task C'
```

**Task-D.job**:

```shell
type=command
command=echo 'Task D'
dependencies=Task-A,Task-B,Task-C
```

**Task-E.job**:

```shell
type=command
command=echo 'Task E'
dependencies=Task-D
```

### 3.2 Packaging and Uploading

Zip the job files and upload the archive. Note that a Project holds only one archive at a time; here I reuse the Project from above, and by default the new archive overwrites the previous one.
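As a sketch, assuming all five job files sit in the current directory (the archive name is arbitrary):

```shell
# all five job files go into a single archive
zip Task-ABCDE.zip Task-A.job Task-B.job Task-C.job Task-D.job Task-E.job
```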
![](https://github.com/heibaiying/BigData-Notes/blob/master/pictures/azkaban-task-abcde-zip.png)

### 3.3 Dependency Graph

When multiple jobs have dependencies, the file name of the last job is used as the Flow name by default, and the dependency graph is shown visually on the page:

![](https://github.com/heibaiying/BigData-Notes/blob/master/pictures/azkaban-dependencies.png)

### 3.4 Execution Results

![](https://github.com/heibaiying/BigData-Notes/blob/master/pictures/azkaban-task-abcde.png)

Note that under Flow 1.0 it is impossible to configure multiple tasks in a single job file; Flow 2.0 solves this problem nicely.

## 4. Scheduling an HDFS Job

The steps are the same as above. This example lists the files in HDFS; wherever paths are involved, it is advisable to use full path names. The configuration file is:

```shell
type=command
command=/usr/app/hadoop-2.6.0-cdh5.15.2/bin/hadoop fs -ls /
```

Execution result:

![](https://github.com/heibaiying/BigData-Notes/blob/master/pictures/azkaban-hdfs.png)

## 5. Scheduling a MapReduce Job

MapReduce job configuration:

```shell
type=command
command=/usr/app/hadoop-2.6.0-cdh5.15.2/bin/hadoop jar /usr/app/hadoop-2.6.0-cdh5.15.2/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.0-cdh5.15.2.jar pi 3 3
```

Execution result:

![](https://github.com/heibaiying/BigData-Notes/blob/master/pictures/azkaban-mr.png)

## 6. Scheduling a Hive Job

Job configuration:

```shell
type=command
command=/usr/app/hive-1.1.0-cdh5.15.2/bin/hive -f 'test.sql'
```

The content of `test.sql` is shown below: it creates an employee table and then inspects its structure:

```sql
CREATE DATABASE IF NOT EXISTS hive;
use hive;
drop table if exists emp;
CREATE TABLE emp(
  empno int,
  ename string,
  job string,
  mgr int,
  hiredate string,
  sal double,
  comm double,
  deptno int
) ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t';
-- show the structure of the emp table
desc emp;
```

When packaging, include both the `job` file and the `sql` file in the archive.
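As a sketch, assuming the job file is named `Hive-Task.job` (a hypothetical name; the screenshot below shows the actual archive):

```shell
# the .job file and the SQL script it references travel together
zip Hive-Task.zip Hive-Task.job test.sql
```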
![](https://github.com/heibaiying/BigData-Notes/blob/master/pictures/azkaban-hive.png)

The execution result is as follows:

![](https://github.com/heibaiying/BigData-Notes/blob/master/pictures/azkaban-hive-result.png)

## 7. Editing Job Configuration Online

During testing you may need to change the configuration frequently, and re-packaging and re-uploading for every change is tedious. Fortunately, Azkaban supports editing configuration online. Click the Flow you want to modify to open its detail page:

![](https://github.com/heibaiying/BigData-Notes/blob/master/pictures/azkaban-project-edit.png)

On the detail page, click the `Edit` button to open the editing page:

![](https://github.com/heibaiying/BigData-Notes/blob/master/pictures/azkaban-edit.png)

On the editing page you can add new configuration entries or modify existing ones.

## 8. Possible Problems

If the following exception appears, it is most likely caused by insufficient memory on the executor host: Azkaban requires more than 3 GB of available memory on the executor host before it will run a task.

```shell
Cannot request memory (Xms 0 kb, Xmx 0 kb) from system for job
```

![](https://github.com/heibaiying/BigData-Notes/blob/master/pictures/azkaban-memory.png)

If you cannot add memory to the executor host, you can disable the memory check via the `commonprivate.properties` file, which is located under `plugins/jobtypes` in the installation directory.

The configuration to disable the memory check is:

```shell
memCheck.enabled=false
```
notes/Azkaban Flow 2.0 的使用.md (new file)
@@ -0,0 +1,291 @@
# Using Azkaban Flow 2.0

## 1. Introduction to Flow 2.0

### 1.1 Why Flow 2.0

Azkaban currently supports both Flow 1.0 and Flow 2.0, but the official documentation recommends 2.0, because Flow 1.0 will be removed in a future version.

> This section covers how to create your Azkaban flows using Azkaban Flow 2.0. Flow 1.0 will be deprecated in the future.

The main design idea of Flow 2.0 is to provide flow-level definitions that version 1.0 lacked. Instead of creating multiple .job / .properties files, users can merge all the files belonging to a given flow into a single flow definition file. Configuration files are written in YAML; each project zip contains multiple flow YAML files and one project YAML file. A flow can also be defined inside another flow within a YAML file; this is called an embedded flow or subflow.

### 1.2 Basic Structure

A project zip contains multiple flow YAML files, one project YAML file, and optionally libraries and source code. The basic structure of a flow YAML file is as follows:

+ Each flow is defined in a single YAML file
+ The flow file is named after the flow, e.g. my-flow-name.flow
+ It contains all the nodes in the DAG
+ Each node can be a job or a flow
+ Each node can have name, type, config, dependsOn, and nodes sections
+ Node dependencies are specified by listing the parent nodes in the dependsOn list
+ It contains other flow-related configuration
+ All common flow properties currently kept in .properties files migrate into the config section of each flow YAML file

The official project provides a fairly complete sample configuration:
```yaml
config:
  user.to.proxy: azktest
  param.hadoopOutData: /tmp/wordcounthadoopout
  param.inData: /tmp/wordcountpigin
  param.outData: /tmp/wordcountpigout

# This section defines the list of jobs
# A node can be a job or a flow
# In this example, all nodes are jobs
nodes:
  # Job definition
  # The job definition is like a YAMLified version of properties file
  # with one major difference. All custom properties are now clubbed together
  # in a config section in the definition.
  # The first line describes the name of the job
  - name: AZTest
    type: noop
    # The dependsOn section contains the list of parent nodes the current
    # node depends on
    dependsOn:
      - hadoopWC1
      - NoOpTest1
      - hive2
      - java1
      - jobCommand2

  - name: pigWordCount1
    type: pig
    # The config section contains custom arguments or parameters which are
    # required by the job
    config:
      pig.script: src/main/pig/wordCountText.pig

  - name: hadoopWC1
    type: hadoopJava
    dependsOn:
      - pigWordCount1
    config:
      classpath: ./*
      force.output.overwrite: true
      input.path: ${param.inData}
      job.class: com.linkedin.wordcount.WordCount
      main.args: ${param.inData} ${param.hadoopOutData}
      output.path: ${param.hadoopOutData}

  - name: hive1
    type: hive
    config:
      hive.script: src/main/hive/showdb.q

  - name: NoOpTest1
    type: noop

  - name: hive2
    type: hive
    dependsOn:
      - hive1
    config:
      hive.script: src/main/hive/showTables.sql

  - name: java1
    type: javaprocess
    config:
      Xms: 96M
      java.class: com.linkedin.foo.HelloJavaProcessJob

  - name: jobCommand1
    type: command
    config:
      command: echo "hello world from job_command_1"

  - name: jobCommand2
    type: command
    dependsOn:
      - jobCommand1
    config:
      command: echo "hello world from job_command_2"
```
## 2. YAML Syntax

To configure flows you first need to understand YAML. YAML is a concise non-markup language with strict formatting requirements; if the formatting is wrong, Azkaban throws a parse exception when the file is uploaded.

### 2.1 Basic Rules

1. Case sensitive.
2. Indentation expresses hierarchy.
3. The amount of indentation is not fixed; elements aligned at the same indentation belong to the same level.
4. `#` starts a comment.
5. Strings need no quotes by default, but both single and double quotes may be used; double-quoted strings interpret escape sequences, single-quoted strings do not.
6. YAML provides several scalar types, including integers, floats, strings, null, dates, booleans, and times (see the snippet below).
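A minimal snippet illustrating rules 4 to 6 (made-up values, not Azkaban-specific):

```yaml
# comments start with '#'
count: 42            # integer
ratio: 3.14          # float
name: azkaban        # plain (unquoted) string
enabled: true        # boolean
nothing: null        # null
started: 2019-01-01  # date
```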
### 2.2 Objects

```yaml
# there must be a space between the colon and the value
key: value
```

### 2.3 Maps

```yaml
# style 1: all key-value pairs at the same indentation belong to one map
key:
  key1: value1
  key2: value2

# style 2: inline
{key1: value1, key2: value2}
```

### 2.4 Arrays

```yaml
# style 1: a dash plus a space marks one array item
- a
- b
- c

# style 2: inline
[a, b, c]
```

### 2.5 Single and Double Quotes

Both single and double quotes may be used; a double-quoted string interprets escape sequences such as `\n`, while a single-quoted string treats them literally.

```yaml
s1: 'content\nstring'
s2: "content\nstring"

# after parsing:
# { s1: 'content\\nstring', s2: 'content\nstring' }
```

### 2.6 Special Symbols

`---`: within a single file, YAML uses `---` to mark the start of a document.
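A tiny sketch of two documents in one file (made-up content):

```yaml
--- # first document
name: flow-a
--- # second document
name: flow-b
```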
### 2.7 Referencing Configuration

In Azkaban you can reference defined configuration values with `${}`; it is also advisable to extract shared parameters into the config section and reference them with `${}`.
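A short sketch of such a reference, using hypothetical parameter and node names (the official sample above uses the same mechanism with `${param.inData}`):

```yaml
config:
  data.dir: /tmp/azkaban-demo   # shared parameter defined once

nodes:
  - name: list-data
    type: command
    config:
      # the shared value is referenced with ${}
      command: ls ${data.dir}
```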
## 3. Simple Task Scheduling

### 3.1 Job Configuration

Create a `flow` configuration file:

```yaml
nodes:
  - name: jobA
    type: command
    config:
      command: echo "Hello Azkaban Flow 2.0."
```

In the current version, since Azkaban supports both Flow 1.0 and Flow 2.0, you must also create a `project` file to tell Azkaban that you want to run in 2.0 mode:

```shell
azkaban-flow-version: 2.0
```

### 3.2 Packaging and Uploading

![](https://github.com/heibaiying/BigData-Notes/blob/master/pictures/azkaban-flow20.png)
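As a sketch, assuming the flow file is named `simple.flow` and the project file `flow20.project` (hypothetical names; use whatever names you created above):

```shell
# both the flow file and the project file must go into the archive
zip Azkaban-Flow-2.0.zip simple.flow flow20.project
```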
### 3.3 Execution Results

The web UI was already covered in the Flow 1.0 article, so it is not repeated here. Between 1.0 and 2.0 only the configuration format differs; uploading and executing work the same way. The execution result is as follows:

![](https://github.com/heibaiying/BigData-Notes/blob/master/pictures/azkaban-simle-result.png)

## 4. Scheduling Multiple Jobs

As in the Flow 1.0 example, suppose we have five jobs (jobA through jobE): jobD can only run after jobA, jobB, and jobC have finished, and jobE can only run after jobD has finished. The `Flow` configuration is shown below. In 1.0 we had to define five separate configuration files; in 2.0 a single file is enough:

```yaml
nodes:
  - name: jobE
    type: command
    config:
      command: echo "This is job E"
    # jobE depends on jobD
    dependsOn:
      - jobD

  - name: jobD
    type: command
    config:
      command: echo "This is job D"
    # jobD depends on jobA, jobB and jobC
    dependsOn:
      - jobA
      - jobB
      - jobC

  - name: jobA
    type: command
    config:
      command: echo "This is job A"

  - name: jobB
    type: command
    config:
      command: echo "This is job B"

  - name: jobC
    type: command
    config:
      command: echo "This is job C"
```
## 5. Embedded Flows

Flow 2.0 supports defining one flow inside another; this is called an embedded flow or subflow. Here is an example; the `Flow` configuration is as follows:

```yaml
nodes:
  - name: jobC
    type: command
    config:
      command: echo "This is job C"
    dependsOn:
      - embedded_flow

  - name: embedded_flow
    type: flow
    config:
      prop: value
    nodes:
      - name: jobB
        type: command
        config:
          command: echo "This is job B"
        dependsOn:
          - jobA

      - name: jobA
        type: command
        config:
          command: echo "This is job A"
```

The DAG of the embedded flow looks like this:

![](https://github.com/heibaiying/BigData-Notes/blob/master/pictures/azkaban-embeded-flow.png)

The execution looks like this:

![](https://github.com/heibaiying/BigData-Notes/blob/master/pictures/azkaban-embeded-success.png)

## References

1. [Azkaban Flow 2.0 Design](https://github.com/azkaban/azkaban/wiki/Azkaban-Flow-2.0-Design)
2. [Getting started with Azkaban Flow 2.0](https://github.com/azkaban/azkaban/wiki/Getting-started-with-Azkaban-Flow-2.0)
notes/Azkaban的使用.md (new file)
@@ -0,0 +1 @@
##
@@ -47,7 +47,7 @@ export PATH=$FLUME_HOME/bin:$PATH
 # cp flume-env.sh.template flume-env.sh
 ```

-Edit `flume-env.sh` under the installation directory and specify the JDK installation path:
+Edit `flume-env.sh` and specify the JDK installation path:

 ```shell
 # Enviroment variables can be set here.
notes/installation/Linux环境下Hive的安装部署.md (new file)
@@ -0,0 +1,118 @@
# Installing Hive on Linux

> Hive version: hive-1.1.0-cdh5.15.2.tar.gz
>
> OS: CentOS 7.6

### 1.1 Download and Extract

Download the required version of Hive; here I use the `cdh5.15.2` build. Download address: http://archive.cloudera.com/cdh5/cdh/5/

```shell
# extract after downloading
tar -zxvf hive-1.1.0-cdh5.15.2.tar.gz
```

### 1.2 Configure Environment Variables

```shell
# vim /etc/profile
```

Add the environment variables:

```shell
export HIVE_HOME=/usr/app/hive-1.1.0-cdh5.15.2
export PATH=$HIVE_HOME/bin:$PATH
```

Make the configured variables take effect immediately:

```shell
# source /etc/profile
```

### 1.3 Modify Configuration

**1. hive-env.sh**

Go to the `conf/` directory under the installation directory and copy Hive's environment configuration template `hive-env.sh.template`:

```shell
cp hive-env.sh.template hive-env.sh
```

Edit `hive-env.sh` and specify the Hadoop installation path:

```shell
HADOOP_HOME=/usr/app/hadoop-2.6.0-cdh5.15.2
```

**2. hive-site.xml**

Create a `hive-site.xml` file with the following content, which mainly configures the address, driver, username, and password of the MySQL database that stores the metastore:
```xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<configuration>
  <property>
    <name>javax.jdo.option.ConnectionURL</name>
    <value>jdbc:mysql://hadoop001:3306/hadoop_hive?createDatabaseIfNotExist=true</value>
  </property>

  <property>
    <name>javax.jdo.option.ConnectionDriverName</name>
    <value>com.mysql.jdbc.Driver</value>
  </property>

  <property>
    <name>javax.jdo.option.ConnectionUserName</name>
    <value>root</value>
  </property>

  <property>
    <name>javax.jdo.option.ConnectionPassword</name>
    <value>root</value>
  </property>
</configuration>
```
### 1.4 Copy the Database Driver

Copy the MySQL driver jar into the `lib` directory of the Hive installation. The driver can be downloaded from https://dev.mysql.com/downloads/connector/j/; I have also uploaded a copy to the resources directory of this repository.
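As a sketch, assuming a 5.1.x connector jar (the exact file name depends on the version you downloaded):

```shell
# copy the downloaded driver into Hive's lib directory
cp mysql-connector-java-5.1.47.jar /usr/app/hive-1.1.0-cdh5.15.2/lib/
```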

|
||||
|
||||
|
||||
|
||||
### 1.5 初始化元数据库
|
||||
|
||||
+ 当使用的 hive 是1.x版本时,可以不进行初始化操作,Hive会在第一次启动的时候会自动进行初始化,但不会生成所有的元数据信息表,只会初始化必要的一部分,在之后的使用中用到其余表时会自动创建;
|
||||
|
||||
+ 当使用的 hive 是2.x版本时,必须手动初始化元数据库。初始化命令:
|
||||
|
||||
```shell
|
||||
# schematool 命令在安装目录的bin目录下,由于上面已经配置过环境变量,在任意位置执行即可
|
||||
schematool -dbType mysql -initSchema
|
||||
```
|
||||
|
||||
本用例使用的CDH版本是`hive-1.1.0-cdh5.15.2.tar.gz`,对应`Hive 1.1.0` 版本,可以跳过这一步。
|
||||
|
||||
### 1.6 启动
|
||||
|
||||
由于已经将Hive的bin目录配置到环境变量,直接使用以下命令启动,成功进入交互式命令行后执行`show databases`命令,无异常则代表搭建成功。
|
||||
|
||||
```shell
|
||||
# Hive
|
||||
```
|
||||
|
||||

|
||||
|
||||
在Mysql中也能看到Hive创建的库和存放元数据信息的表
|
||||
|
||||

|
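A quick way to confirm this from the MySQL client (a sketch; `hadoop_hive` is the database name configured in `hive-site.xml` above):

```sql
use hadoop_hive;
-- metastore tables such as DBS and TBLS should appear in the listing
show tables;
```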
BIN  pictures/azkaban-click-edit.png (new file, 23 KiB)
BIN  pictures/azkaban-create-project.png (new file, 30 KiB)
BIN  pictures/azkaban-dependencies.png (new file, 29 KiB)
BIN  pictures/azkaban-edit.png (new file, 21 KiB)
BIN  pictures/azkaban-embeded-flow.png (new file, 26 KiB)
BIN  pictures/azkaban-embeded-success.png (new file, 28 KiB)
BIN  pictures/azkaban-execute.png (new file, 29 KiB)
BIN  pictures/azkaban-flows.png (new file, 38 KiB)
BIN  pictures/azkaban-hdfs.png (new file, 77 KiB)
BIN  pictures/azkaban-hive-result.png (new file, 59 KiB)
BIN  pictures/azkaban-hive.png (new file, 11 KiB)
BIN  pictures/azkaban-log.png (new file, 71 KiB)
BIN  pictures/azkaban-memory.png (new file, 47 KiB)
BIN  pictures/azkaban-mr.png (new file, 80 KiB)
BIN  pictures/azkaban-project-edit.png (new file, 22 KiB)
BIN  (modified image, file name not shown in this view; 30 KiB before, 28 KiB after)
BIN  pictures/azkaban-simle-result.png (new file, 57 KiB)
BIN  pictures/azkaban-simple.png (new file, 9.6 KiB)
BIN  pictures/azkaban-successed.png (new file, 27 KiB)
BIN  pictures/azkaban-task-abcde-zip.png (new file, 13 KiB)
BIN  pictures/azkaban-task-abcde.png (new file, 48 KiB)
BIN  pictures/azkaban-upload.png (new file, 45 KiB)
BIN  pictures/azkaban-zip.png (new file, 8.5 KiB)
BIN  pictures/hive-install-2.png (new file, 42 KiB)
BIN  pictures/hive-mysql-tables.png (new file, 25 KiB)
BIN  pictures/hive-mysql.png (new file, 7.8 KiB)
After Width: | Height: | Size: 7.8 KiB |