diff --git a/notes/Azkaban_Flow_1.0_的使用.md b/notes/Azkaban_Flow_1.0_的使用.md
index 0f1b05b..a4b61f4 100644
--- a/notes/Azkaban_Flow_1.0_的使用.md
+++ b/notes/Azkaban_Flow_1.0_的使用.md
@@ -223,3 +223,6 @@ memCheck.enabled=false
+
+
+
\ No newline at end of file
diff --git a/notes/Azkaban_Flow_2.0_的使用.md b/notes/Azkaban_Flow_2.0_的使用.md
index ff30338..51e4719 100644
--- a/notes/Azkaban_Flow_2.0_的使用.md
+++ b/notes/Azkaban_Flow_2.0_的使用.md
@@ -294,3 +294,6 @@ nodes:
1. [Azkaban Flow 2.0 Design](https://github.com/azkaban/azkaban/wiki/Azkaban-Flow-2.0-Design)
2. [Getting started with Azkaban Flow 2.0](https://github.com/azkaban/azkaban/wiki/Getting-started-with-Azkaban-Flow-2.0)
+
+
+
\ No newline at end of file
diff --git a/notes/Azkaban简介.md b/notes/Azkaban简介.md
index 668fd69..625fe3d 100644
--- a/notes/Azkaban简介.md
+++ b/notes/Azkaban简介.md
@@ -74,3 +74,6 @@ Azkaban 和 Oozie 都是目前使用最为广泛的工作流调度程序,其
+ **配置方面**:Azkaban Flow 1.0 基于 Properties 文件来定义工作流,限制相对较多;而 Flow 2.0 支持了 YAML。YAML 语法更加灵活简洁,著名的微服务框架 Spring Boot 就采用了 YAML 来代替繁重的 XML 配置(下方附有一个最小化的 Flow 2.0 YAML 定义示意)。
+
+
+
\ No newline at end of file
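
上面提到 Flow 2.0 改用 YAML 来定义工作流,下面给出一个最小化的 `.flow` 文件示意。其中的文件名、节点名与 echo 命令均为虚构示例,具体语法以 notes/Azkaban_Flow_2.0_的使用.md 中引用的官方 Wiki 为准:

```yaml
# basic.flow:Flow 2.0 最小定义示意(文件名、节点名、命令仅作示例)
nodes:
  - name: jobA
    type: command
    config:
      command: echo "run jobA"

  - name: jobB
    type: command
    # jobB 依赖 jobA,jobA 执行成功后才会运行
    dependsOn:
      - jobA
    config:
      command: echo "run jobB after jobA"
```

按官方 Getting Started 文档的说明,项目中还需要一个声明 `azkaban-flow-version: 2.0` 的 `.project` 文件,与 `.flow` 文件一起打包上传。
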
diff --git a/notes/Flink_Data_Sink.md b/notes/Flink_Data_Sink.md
index f77559c..c1ee990 100644
--- a/notes/Flink_Data_Sink.md
+++ b/notes/Flink_Data_Sink.md
@@ -266,3 +266,6 @@ env.execute();
2. Streaming Connectors:https://ci.apache.org/projects/flink/flink-docs-release-1.9/dev/connectors/index.html
3. Apache Kafka Connector: https://ci.apache.org/projects/flink/flink-docs-release-1.9/dev/connectors/kafka.html
+
+
+
\ No newline at end of file
diff --git a/notes/Flink_Data_Source.md b/notes/Flink_Data_Source.md
index a8d065f..ec1bb86 100644
--- a/notes/Flink_Data_Source.md
+++ b/notes/Flink_Data_Source.md
@@ -282,3 +282,6 @@ bin/kafka-console-producer.sh --broker-list hadoop001:9092 --topic flink-stream-
1. data-sources:https://ci.apache.org/projects/flink/flink-docs-release-1.9/dev/datastream_api.html#data-sources
2. Streaming Connectors:https://ci.apache.org/projects/flink/flink-docs-release-1.9/dev/connectors/index.html
3. Apache Kafka Connector: https://ci.apache.org/projects/flink/flink-docs-release-1.9/dev/connectors/kafka.html
+
+
+
\ No newline at end of file
diff --git a/notes/Flink_Data_Transformation.md b/notes/Flink_Data_Transformation.md
index 3f5a1c8..9478162 100644
--- a/notes/Flink_Data_Transformation.md
+++ b/notes/Flink_Data_Transformation.md
@@ -309,3 +309,6 @@ someStream.filter(...).slotSharingGroup("slotSharingGroupName");
## 参考资料
Flink Operators: https://ci.apache.org/projects/flink/flink-docs-release-1.9/dev/stream/operators/
+
+
+
\ No newline at end of file
diff --git a/notes/Flink_Windows.md b/notes/Flink_Windows.md
index 816a6ed..7615f7f 100644
--- a/notes/Flink_Windows.md
+++ b/notes/Flink_Windows.md
@@ -126,3 +126,6 @@ public WindowedStream countWindow(long size, long slide) {
## 参考资料
Flink Windows: https://ci.apache.org/projects/flink/flink-docs-release-1.9/dev/stream/operators/windows.html
+
+
+
\ No newline at end of file
diff --git a/notes/Flink开发环境搭建.md b/notes/Flink开发环境搭建.md
index a933e04..1b319cf 100644
--- a/notes/Flink开发环境搭建.md
+++ b/notes/Flink开发环境搭建.md
@@ -302,3 +302,6 @@ Flink 大多数版本都提供有 Scala 2.11 和 Scala 2.12 两个版本的安
+
+
+
\ No newline at end of file
diff --git a/notes/Flink核心概念综述.md b/notes/Flink核心概念综述.md
index 397ecc7..c4ae5a6 100644
--- a/notes/Flink核心概念综述.md
+++ b/notes/Flink核心概念综述.md
@@ -171,3 +171,6 @@ Flink 的所有组件都基于 Actor System 来进行通讯。Actor system是多
+
+
+
\ No newline at end of file
diff --git a/notes/Flink状态管理与检查点机制.md b/notes/Flink状态管理与检查点机制.md
index e5d3da2..d678daa 100644
--- a/notes/Flink状态管理与检查点机制.md
+++ b/notes/Flink状态管理与检查点机制.md
@@ -368,3 +368,6 @@ state.checkpoints.dir: hdfs://namenode:40010/flink/checkpoints
+
+
+
\ No newline at end of file
diff --git a/notes/Flume整合Kafka.md b/notes/Flume整合Kafka.md
index 9cc34d1..c1f80dd 100644
--- a/notes/Flume整合Kafka.md
+++ b/notes/Flume整合Kafka.md
@@ -114,3 +114,6 @@ flume-ng agent \
可以看到 `flume-kafka` 主题的消费端已经收到了对应的消息:
+
+
+
\ No newline at end of file
diff --git a/notes/Flume简介及基本使用.md b/notes/Flume简介及基本使用.md
index f5f752e..f98c804 100644
--- a/notes/Flume简介及基本使用.md
+++ b/notes/Flume简介及基本使用.md
@@ -373,3 +373,6 @@ flume-ng agent \
可以看到已经从 8888 端口监听到内容,并成功输出到控制台:
+
+
+
\ No newline at end of file
diff --git a/notes/HDFS-Java-API.md b/notes/HDFS-Java-API.md
index f5ec70d..5d28f1e 100644
--- a/notes/HDFS-Java-API.md
+++ b/notes/HDFS-Java-API.md
@@ -386,3 +386,6 @@ public void getFileBlockLocations() throws Exception {
**以上所有测试用例下载地址**:[HDFS Java API](https://github.com/heibaiying/BigData-Notes/tree/master/code/Hadoop/hdfs-java-api)
+
+
+
\ No newline at end of file
diff --git a/notes/HDFS常用Shell命令.md b/notes/HDFS常用Shell命令.md
index 933eceb..d6cf787 100644
--- a/notes/HDFS常用Shell命令.md
+++ b/notes/HDFS常用Shell命令.md
@@ -139,3 +139,6 @@ hadoop fs -test -[defsz] URI
# 示例
hadoop fs -test -e filename
```
+
+
+
\ No newline at end of file
diff --git a/notes/Hadoop-HDFS.md b/notes/Hadoop-HDFS.md
index 5e88e95..cfa57b0 100644
--- a/notes/Hadoop-HDFS.md
+++ b/notes/Hadoop-HDFS.md
@@ -174,3 +174,6 @@ HDFS 具有良好的跨平台移植性,这使得其他大数据计算框架都
2. Tom White . hadoop 权威指南 [M] . 清华大学出版社 . 2017.
3. [翻译经典 HDFS 原理讲解漫画](https://blog.csdn.net/hudiefenmu/article/details/37655491)
+
+
+
\ No newline at end of file
diff --git a/notes/Hadoop-MapReduce.md b/notes/Hadoop-MapReduce.md
index 616bc9e..2163a09 100644
--- a/notes/Hadoop-MapReduce.md
+++ b/notes/Hadoop-MapReduce.md
@@ -382,3 +382,6 @@ job.setNumReduceTasks(WordCountDataUtils.WORD_LIST.size());
+
+
+
\ No newline at end of file
diff --git a/notes/Hadoop-YARN.md b/notes/Hadoop-YARN.md
index cd7cb90..10a8755 100644
--- a/notes/Hadoop-YARN.md
+++ b/notes/Hadoop-YARN.md
@@ -126,3 +126,6 @@ YARN 中的任务将其进度和状态 (包括 counter) 返回给应用管理器
+
+
+
\ No newline at end of file
diff --git a/notes/Hbase_Java_API.md b/notes/Hbase_Java_API.md
index c75522e..66dfef4 100644
--- a/notes/Hbase_Java_API.md
+++ b/notes/Hbase_Java_API.md
@@ -759,3 +759,6 @@ connection = ConnectionFactory.createConnection(config);
1. [连接 HBase 的正确姿势](https://yq.aliyun.com/articles/581702?spm=a2c4e.11157919.spm-cont-list.1.146c27aeFxoMsN%20%E8%BF%9E%E6%8E%A5HBase%E7%9A%84%E6%AD%A3%E7%A1%AE%E5%A7%BF%E5%8A%BF)
2. [Apache HBase ™ Reference Guide](http://hbase.apache.org/book.htm)
+
+
+
\ No newline at end of file
diff --git a/notes/Hbase_Shell.md b/notes/Hbase_Shell.md
index d9417f2..6404c31 100644
--- a/notes/Hbase_Shell.md
+++ b/notes/Hbase_Shell.md
@@ -277,3 +277,6 @@ scan 'Student', FILTER=>"PrefixFilter('wr')"
+
+
+
\ No newline at end of file
diff --git a/notes/Hbase协处理器详解.md b/notes/Hbase协处理器详解.md
index eb55627..50bdae7 100644
--- a/notes/Hbase协处理器详解.md
+++ b/notes/Hbase协处理器详解.md
@@ -488,3 +488,6 @@ hbase > get 'magazine','rowkey1','article:content'
1. [Apache HBase Coprocessors](http://hbase.apache.org/book.html#cp)
2. [Apache HBase Coprocessor Introduction](https://blogs.apache.org/hbase/entry/coprocessor_introduction)
3. [HBase 高階知識](https://www.itread01.com/content/1546245908.html)
+
+
+
\ No newline at end of file
diff --git a/notes/Hbase容灾与备份.md b/notes/Hbase容灾与备份.md
index 2a3d150..bdebc1e 100644
--- a/notes/Hbase容灾与备份.md
+++ b/notes/Hbase容灾与备份.md
@@ -194,3 +194,6 @@ hbase> restore_snapshot '快照名'
1. [Online Apache HBase Backups with CopyTable](https://blog.cloudera.com/blog/2012/06/online-hbase-backups-with-copytable-2/)
2. [Apache HBase ™ Reference Guide](http://hbase.apache.org/book.htm)
+
+
+
\ No newline at end of file
diff --git a/notes/Hbase的SQL中间层_Phoenix.md b/notes/Hbase的SQL中间层_Phoenix.md
index 9c4f004..7b64e3b 100644
--- a/notes/Hbase的SQL中间层_Phoenix.md
+++ b/notes/Hbase的SQL中间层_Phoenix.md
@@ -239,3 +239,6 @@ public class PhoenixJavaApi {
# 参考资料
1. http://phoenix.apache.org/
+
+
+
\ No newline at end of file
diff --git a/notes/Hbase简介.md b/notes/Hbase简介.md
index 9a2f472..b406d92 100644
--- a/notes/Hbase简介.md
+++ b/notes/Hbase简介.md
@@ -86,3 +86,6 @@ Hbase 的表具有以下特点:
+
+
+
\ No newline at end of file
diff --git a/notes/Hbase系统架构及数据结构.md b/notes/Hbase系统架构及数据结构.md
index eb66f4f..3d61164 100644
--- a/notes/Hbase系统架构及数据结构.md
+++ b/notes/Hbase系统架构及数据结构.md
@@ -220,3 +220,6 @@ HBase 系统遵循 Master/Slave 架构,由三种不同类型的组件组成:
+
+
+
\ No newline at end of file
diff --git a/notes/Hbase过滤器详解.md b/notes/Hbase过滤器详解.md
index 69188cb..614d04e 100644
--- a/notes/Hbase过滤器详解.md
+++ b/notes/Hbase过滤器详解.md
@@ -443,3 +443,6 @@ scan.setFilter(filterList);
## 参考资料
[HBase: The Definitive Guide _> Chapter 4. Client API: Advanced Features](https://www.oreilly.com/library/view/hbase-the-definitive/9781449314682/ch04.html)
+
+
+
\ No newline at end of file
diff --git a/notes/HiveCLI和Beeline命令行的基本使用.md b/notes/HiveCLI和Beeline命令行的基本使用.md
index cdbd8d7..38ad992 100644
--- a/notes/HiveCLI和Beeline命令行的基本使用.md
+++ b/notes/HiveCLI和Beeline命令行的基本使用.md
@@ -277,3 +277,6 @@ Hive 可选的配置参数非常多,在用到时查阅官方文档即可[Admin
1. [HiveServer2 Clients](https://cwiki.apache.org/confluence/display/Hive/HiveServer2+Clients)
2. [LanguageManual Cli](https://cwiki.apache.org/confluence/display/Hive/LanguageManual+Cli)
3. [AdminManual Configuration](https://cwiki.apache.org/confluence/display/Hive/AdminManual+Configuration)
+
+
+
\ No newline at end of file
diff --git a/notes/Hive分区表和分桶表.md b/notes/Hive分区表和分桶表.md
index d552575..e1d7924 100644
--- a/notes/Hive分区表和分桶表.md
+++ b/notes/Hive分区表和分桶表.md
@@ -166,3 +166,6 @@ SELECT * FROM page_view WHERE dt='2009-02-25';
## 参考资料
1. [LanguageManual DDL BucketedTables](https://cwiki.apache.org/confluence/display/Hive/LanguageManual+DDL+BucketedTables)
+
+
+
\ No newline at end of file
diff --git a/notes/Hive常用DDL操作.md b/notes/Hive常用DDL操作.md
index dc8f387..299ae80 100644
--- a/notes/Hive常用DDL操作.md
+++ b/notes/Hive常用DDL操作.md
@@ -448,3 +448,6 @@ SHOW CREATE TABLE ([db_name.]table_name|view_name);
## 参考资料
[LanguageManual DDL](https://cwiki.apache.org/confluence/display/Hive/LanguageManual+DDL)
+
+
+
\ No newline at end of file
diff --git a/notes/Hive常用DML操作.md b/notes/Hive常用DML操作.md
index 6e633fa..28af9be 100644
--- a/notes/Hive常用DML操作.md
+++ b/notes/Hive常用DML操作.md
@@ -327,3 +327,6 @@ SELECT * FROM emp_ptn;
1. [Hive Transactions](https://cwiki.apache.org/confluence/display/Hive/Hive+Transactions)
2. [Hive Data Manipulation Language](https://cwiki.apache.org/confluence/display/Hive/LanguageManual+DML)
+
+
+
\ No newline at end of file
diff --git a/notes/Hive数据查询详解.md b/notes/Hive数据查询详解.md
index 33d7838..12ca4cd 100644
--- a/notes/Hive数据查询详解.md
+++ b/notes/Hive数据查询详解.md
@@ -392,3 +392,6 @@ SET hive.exec.mode.local.auto=true;
2. [LanguageManual Joins](https://cwiki.apache.org/confluence/display/Hive/LanguageManual+Joins)
3. [LanguageManual GroupBy](https://cwiki.apache.org/confluence/display/Hive/LanguageManual+GroupBy)
4. [LanguageManual SortBy](https://cwiki.apache.org/confluence/display/Hive/LanguageManual+SortBy)
+
+
+
\ No newline at end of file
diff --git a/notes/Hive简介及核心概念.md b/notes/Hive简介及核心概念.md
index 284159b..50915c7 100644
--- a/notes/Hive简介及核心概念.md
+++ b/notes/Hive简介及核心概念.md
@@ -200,3 +200,6 @@ CREATE TABLE page_view(viewTime INT, userid BIGINT)
3. [LanguageManual DDL](https://cwiki.apache.org/confluence/display/Hive/LanguageManual+DDL)
4. [LanguageManual Types](https://cwiki.apache.org/confluence/display/Hive/LanguageManual+Types)
5. [Managed vs. External Tables](https://cwiki.apache.org/confluence/display/Hive/Managed+vs.+External+Tables)
+
+
+
\ No newline at end of file
diff --git a/notes/Hive视图和索引.md b/notes/Hive视图和索引.md
index 89aa9be..a269c41 100644
--- a/notes/Hive视图和索引.md
+++ b/notes/Hive视图和索引.md
@@ -234,3 +234,6 @@ SHOW INDEX ON emp;
2. [Materialized views](https://cwiki.apache.org/confluence/display/Hive/Materialized+views)
3. [Hive 索引](http://lxw1234.com/archives/2015/05/207.htm)
4. [Overview of Hive Indexes](https://cwiki.apache.org/confluence/display/Hive/LanguageManual+Indexing)
+
+
+
\ No newline at end of file
diff --git a/notes/Kafka消费者详解.md b/notes/Kafka消费者详解.md
index 8c8b234..0642de1 100644
--- a/notes/Kafka消费者详解.md
+++ b/notes/Kafka消费者详解.md
@@ -390,3 +390,6 @@ broker 返回给消费者数据的等待时间,默认是 500ms。
1. Neha Narkhede, Gwen Shapira ,Todd Palino(著) , 薛命灯 (译) . Kafka 权威指南 . 人民邮电出版社 . 2017-12-26
+
+
+
\ No newline at end of file
diff --git a/notes/Kafka深入理解分区副本机制.md b/notes/Kafka深入理解分区副本机制.md
index 20a5499..2b7c92f 100644
--- a/notes/Kafka深入理解分区副本机制.md
+++ b/notes/Kafka深入理解分区副本机制.md
@@ -159,3 +159,6 @@ Exception: Replication factor: 3 larger than available brokers: 1.
1. Neha Narkhede, Gwen Shapira ,Todd Palino(著) , 薛命灯 (译) . Kafka 权威指南 . 人民邮电出版社 . 2017-12-26
2. [Kafka 高性能架构之道](http://www.jasongj.com/kafka/high_throughput/)
+
+
+
\ No newline at end of file
diff --git a/notes/Kafka生产者详解.md b/notes/Kafka生产者详解.md
index 862db02..177a3c6 100644
--- a/notes/Kafka生产者详解.md
+++ b/notes/Kafka生产者详解.md
@@ -362,3 +362,6 @@ acks 参数指定了必须要有多少个分区副本收到消息,生产者才
## 参考资料
1. Neha Narkhede, Gwen Shapira ,Todd Palino(著) , 薛命灯 (译) . Kafka 权威指南 . 人民邮电出版社 . 2017-12-26
+
+
+
\ No newline at end of file
diff --git a/notes/Kafka简介.md b/notes/Kafka简介.md
index 802e791..81808c4 100644
--- a/notes/Kafka简介.md
+++ b/notes/Kafka简介.md
@@ -65,3 +65,6 @@ Broker 是集群 (Cluster) 的组成部分。每一个集群都会选举出一
## 参考资料
Neha Narkhede, Gwen Shapira ,Todd Palino(著) , 薛命灯 (译) . Kafka 权威指南 . 人民邮电出版社 . 2017-12-26
+
+
+
\ No newline at end of file
diff --git a/notes/Scala函数和闭包.md b/notes/Scala函数和闭包.md
index 482255c..2c1f59b 100644
--- a/notes/Scala函数和闭包.md
+++ b/notes/Scala函数和闭包.md
@@ -310,3 +310,6 @@ object ScalaApp extends App {
+
+
+
\ No newline at end of file
diff --git a/notes/Scala列表和集.md b/notes/Scala列表和集.md
index 208d42e..3da271f 100644
--- a/notes/Scala列表和集.md
+++ b/notes/Scala列表和集.md
@@ -540,3 +540,6 @@ object ScalaApp extends App {
1. Martin Odersky . Scala 编程 (第 3 版)[M] . 电子工业出版社 . 2018-1-1
2. 凯.S.霍斯特曼 . 快学 Scala(第 2 版)[M] . 电子工业出版社 . 2017-7
+
+
+
\ No newline at end of file
diff --git a/notes/Scala基本数据类型和运算符.md b/notes/Scala基本数据类型和运算符.md
index d54fb46..308d5e0 100644
--- a/notes/Scala基本数据类型和运算符.md
+++ b/notes/Scala基本数据类型和运算符.md
@@ -272,3 +272,6 @@ res6: Boolean = true
## 参考资料
1. Martin Odersky . Scala 编程 (第 3 版)[M] . 电子工业出版社 . 2018-1-1
+
+
+
\ No newline at end of file
diff --git a/notes/Scala数组.md b/notes/Scala数组.md
index d7215a8..d576243 100644
--- a/notes/Scala数组.md
+++ b/notes/Scala数组.md
@@ -191,3 +191,6 @@ object ScalaApp extends App {
1. Martin Odersky . Scala 编程 (第 3 版)[M] . 电子工业出版社 . 2018-1-1
2. 凯.S.霍斯特曼 . 快学 Scala(第 2 版)[M] . 电子工业出版社 . 2017-7
+
+
+
\ No newline at end of file
diff --git a/notes/Scala映射和元组.md b/notes/Scala映射和元组.md
index ec47e12..eb4b5af 100644
--- a/notes/Scala映射和元组.md
+++ b/notes/Scala映射和元组.md
@@ -280,3 +280,6 @@ object ScalaApp extends App {
1. Martin Odersky . Scala 编程 (第 3 版)[M] . 电子工业出版社 . 2018-1-1
2. 凯.S.霍斯特曼 . 快学 Scala(第 2 版)[M] . 电子工业出版社 . 2017-7
+
+
+
\ No newline at end of file
diff --git a/notes/Scala模式匹配.md b/notes/Scala模式匹配.md
index 7c65267..91ddad5 100644
--- a/notes/Scala模式匹配.md
+++ b/notes/Scala模式匹配.md
@@ -170,3 +170,6 @@ object ScalaApp extends App {
1. Martin Odersky . Scala 编程 (第 3 版)[M] . 电子工业出版社 . 2018-1-1
2. 凯.S.霍斯特曼 . 快学 Scala(第 2 版)[M] . 电子工业出版社 . 2017-7
+
+
+
\ No newline at end of file
diff --git a/notes/Scala流程控制语句.md b/notes/Scala流程控制语句.md
index 8b1c219..78f9988 100644
--- a/notes/Scala流程控制语句.md
+++ b/notes/Scala流程控制语句.md
@@ -209,3 +209,6 @@ println(s"Hello, ${name}! Next year, you will be ${age + 1}.")
1. Martin Odersky . Scala 编程 (第 3 版)[M] . 电子工业出版社 . 2018-1-1
2. 凯.S.霍斯特曼 . 快学 Scala(第 2 版)[M] . 电子工业出版社 . 2017-7
+
+
+
\ No newline at end of file
diff --git a/notes/Scala简介及开发环境配置.md b/notes/Scala简介及开发环境配置.md
index e16065e..57f64db 100644
--- a/notes/Scala简介及开发环境配置.md
+++ b/notes/Scala简介及开发环境配置.md
@@ -131,3 +131,6 @@ IDEA 默认不支持 Scala 语言的开发,需要通过插件进行扩展。
1. Martin Odersky(著),高宇翔 (译) . Scala 编程 (第 3 版)[M] . 电子工业出版社 . 2018-1-1
2. https://www.scala-lang.org/download/
+
+
+
\ No newline at end of file
diff --git a/notes/Scala类和对象.md b/notes/Scala类和对象.md
index 881ebe2..65cb39a 100644
--- a/notes/Scala类和对象.md
+++ b/notes/Scala类和对象.md
@@ -410,3 +410,6 @@ true
1. Martin Odersky . Scala 编程 (第 3 版)[M] . 电子工业出版社 . 2018-1-1
2. 凯.S.霍斯特曼 . 快学 Scala(第 2 版)[M] . 电子工业出版社 . 2017-7
+
+
+
\ No newline at end of file
diff --git a/notes/Scala类型参数.md b/notes/Scala类型参数.md
index 26be482..299391f 100644
--- a/notes/Scala类型参数.md
+++ b/notes/Scala类型参数.md
@@ -465,3 +465,6 @@ def min[T <: SuperComparable[T]](p: Pair[T]) = {}
1. Martin Odersky . Scala 编程 (第 3 版)[M] . 电子工业出版社 . 2018-1-1
2. 凯.S.霍斯特曼 . 快学 Scala(第 2 版)[M] . 电子工业出版社 . 2017-7
+
+
+
\ No newline at end of file
diff --git a/notes/Scala继承和特质.md b/notes/Scala继承和特质.md
index 5b6bd4a..9e871cb 100644
--- a/notes/Scala继承和特质.md
+++ b/notes/Scala继承和特质.md
@@ -416,3 +416,6 @@ class Employee extends Person with InfoLogger with ErrorLogger {...}
+
+
+
\ No newline at end of file
diff --git a/notes/Scala隐式转换和隐式参数.md b/notes/Scala隐式转换和隐式参数.md
index 37b63ce..fcf9e21 100644
--- a/notes/Scala隐式转换和隐式参数.md
+++ b/notes/Scala隐式转换和隐式参数.md
@@ -354,3 +354,6 @@ object Pair extends App {
+
+
+
\ No newline at end of file
diff --git a/notes/Scala集合类型.md b/notes/Scala集合类型.md
index edcb831..33d2819 100644
--- a/notes/Scala集合类型.md
+++ b/notes/Scala集合类型.md
@@ -257,3 +257,6 @@ res8: Boolean = false
1. https://docs.scala-lang.org/overviews/collections/overview.html
2. https://docs.scala-lang.org/overviews/collections/trait-traversable.html
3. https://docs.scala-lang.org/overviews/collections/trait-iterable.html
+
+
+
\ No newline at end of file
diff --git a/notes/SparkSQL_Dataset和DataFrame简介.md b/notes/SparkSQL_Dataset和DataFrame简介.md
index 3ddc40c..d6e0672 100644
--- a/notes/SparkSQL_Dataset和DataFrame简介.md
+++ b/notes/SparkSQL_Dataset和DataFrame简介.md
@@ -145,3 +145,6 @@ DataFrame、DataSet 和 Spark SQL 的实际执行流程都是相同的:
2. [Spark SQL, DataFrames and Datasets Guide](https://spark.apache.org/docs/latest/sql-programming-guide.html)
3. [且谈 Apache Spark 的 API 三剑客:RDD、DataFrame 和 Dataset(译文)](https://www.infoq.cn/article/three-apache-spark-apis-rdds-dataframes-and-datasets)
4. [A Tale of Three Apache Spark APIs: RDDs vs DataFrames and Datasets(原文)](https://databricks.com/blog/2016/07/14/a-tale-of-three-apache-spark-apis-rdds-dataframes-and-datasets.html)
+
+
+
\ No newline at end of file
diff --git a/notes/SparkSQL外部数据源.md b/notes/SparkSQL外部数据源.md
index f2dd0fe..765a75f 100644
--- a/notes/SparkSQL外部数据源.md
+++ b/notes/SparkSQL外部数据源.md
@@ -497,3 +497,6 @@ df.write.option("maxRecordsPerFile", 5000)
1. Matei Zaharia, Bill Chambers . Spark: The Definitive Guide[M] . 2018-02
2. https://spark.apache.org/docs/latest/sql-data-sources.html
+
+
+
\ No newline at end of file
diff --git a/notes/SparkSQL常用聚合函数.md b/notes/SparkSQL常用聚合函数.md
index c94a15c..cf70bce 100644
--- a/notes/SparkSQL常用聚合函数.md
+++ b/notes/SparkSQL常用聚合函数.md
@@ -337,3 +337,6 @@ object SparkSqlApp {
## 参考资料
1. Matei Zaharia, Bill Chambers . Spark: The Definitive Guide[M] . 2018-02
+
+
+
\ No newline at end of file
diff --git a/notes/SparkSQL联结操作.md b/notes/SparkSQL联结操作.md
index 23178f1..5b178ae 100644
--- a/notes/SparkSQL联结操作.md
+++ b/notes/SparkSQL联结操作.md
@@ -183,3 +183,6 @@ empDF.join(broadcast(deptDF), joinExpression).show()
## 参考资料
1. Matei Zaharia, Bill Chambers . Spark: The Definitive Guide[M] . 2018-02
+
+
+
\ No newline at end of file
diff --git a/notes/Spark_RDD.md b/notes/Spark_RDD.md
index 9b28445..1ae1a24 100644
--- a/notes/Spark_RDD.md
+++ b/notes/Spark_RDD.md
@@ -235,3 +235,6 @@ RDD(s) 及其之间的依赖关系组成了 DAG(有向无环图),DAG 定义了
+
+
+
\ No newline at end of file
diff --git a/notes/Spark_Streaming与流处理.md b/notes/Spark_Streaming与流处理.md
index 0c8b719..22cd826 100644
--- a/notes/Spark_Streaming与流处理.md
+++ b/notes/Spark_Streaming与流处理.md
@@ -77,3 +77,6 @@ storm 和 Flink 都是真正意义上的流计算框架,但 Spark Streaming
1. [Spark Streaming Programming Guide](https://spark.apache.org/docs/latest/streaming-programming-guide.html)
2. [What is stream processing?](https://www.ververica.com/what-is-stream-processing)
+
+
+
\ No newline at end of file
diff --git a/notes/Spark_Streaming基本操作.md b/notes/Spark_Streaming基本操作.md
index a1d1d97..1e8cf47 100644
--- a/notes/Spark_Streaming基本操作.md
+++ b/notes/Spark_Streaming基本操作.md
@@ -333,3 +333,6 @@ storm storm flink azkaban
## 参考资料
Spark 官方文档:http://spark.apache.org/docs/latest/streaming-programming-guide.html
+
+
+
\ No newline at end of file
diff --git a/notes/Spark_Streaming整合Flume.md b/notes/Spark_Streaming整合Flume.md
index e2a9e6f..e79b141 100644
--- a/notes/Spark_Streaming整合Flume.md
+++ b/notes/Spark_Streaming整合Flume.md
@@ -357,3 +357,6 @@ spark-submit \
- [streaming-flume-integration](https://spark.apache.org/docs/latest/streaming-flume-integration.html)
- 关于大数据应用常用的打包方式可以参见:[大数据应用常用打包方式](https://github.com/heibaiying/BigData-Notes/blob/master/notes/大数据应用常用打包方式.md)
+
+
+
\ No newline at end of file
diff --git a/notes/Spark_Streaming整合Kafka.md b/notes/Spark_Streaming整合Kafka.md
index 375f30c..67b5db4 100644
--- a/notes/Spark_Streaming整合Kafka.md
+++ b/notes/Spark_Streaming整合Kafka.md
@@ -319,3 +319,6 @@ bin/kafka-console-producer.sh --broker-list hadoop001:9092 --topic spark-streami
## 参考资料
1. https://spark.apache.org/docs/latest/streaming-kafka-0-10-integration.html
+
+
+
\ No newline at end of file
diff --git a/notes/Spark_Structured_API的基本使用.md b/notes/Spark_Structured_API的基本使用.md
index 30056aa..6be29c5 100644
--- a/notes/Spark_Structured_API的基本使用.md
+++ b/notes/Spark_Structured_API的基本使用.md
@@ -242,3 +242,6 @@ spark.sql("SELECT ename,job FROM global_temp.gemp").show()
## 参考资料
[Spark SQL, DataFrames and Datasets Guide > Getting Started](https://spark.apache.org/docs/latest/sql-getting-started.html)
+
+
+
\ No newline at end of file
diff --git a/notes/Spark_Transformation和Action算子.md b/notes/Spark_Transformation和Action算子.md
index 150f0bb..c830bcc 100644
--- a/notes/Spark_Transformation和Action算子.md
+++ b/notes/Spark_Transformation和Action算子.md
@@ -416,3 +416,6 @@ sc.parallelize(list).saveAsTextFile("/usr/file/temp")
[RDD Programming Guide](http://spark.apache.org/docs/latest/rdd-programming-guide.html#rdd-programming-guide)
+
+
+
\ No newline at end of file
diff --git a/notes/Spark简介.md b/notes/Spark简介.md
index bb91551..53b1baf 100644
--- a/notes/Spark简介.md
+++ b/notes/Spark简介.md
@@ -92,3 +92,6 @@ MLlib 是 Spark 的机器学习库。其设计目标是使得机器学习变得
GraphX 是 Spark 中用于图形计算和图形并行计算的新组件。在高层次上,GraphX 通过引入一个新的图形抽象来扩展 RDD(一种具有附加到每个顶点和边缘的属性的定向多重图形)。为了支持图计算,GraphX 提供了一组基本运算符(如: subgraph,joinVertices 和 aggregateMessages)以及优化后的 Pregel API。此外,GraphX 还包括越来越多的图形算法和构建器,以简化图形分析任务。
##
+
+
+
\ No newline at end of file
diff --git a/notes/Spark累加器与广播变量.md b/notes/Spark累加器与广播变量.md
index ca4da08..a6bdcb8 100644
--- a/notes/Spark累加器与广播变量.md
+++ b/notes/Spark累加器与广播变量.md
@@ -103,3 +103,6 @@ sc.parallelize(broadcastVar.value).map(_ * 10).collect()
[RDD Programming Guide](http://spark.apache.org/docs/latest/rdd-programming-guide.html#rdd-programming-guide)
+
+
+
\ No newline at end of file
diff --git a/notes/Spark部署模式与作业提交.md b/notes/Spark部署模式与作业提交.md
index cb8289b..66749d2 100644
--- a/notes/Spark部署模式与作业提交.md
+++ b/notes/Spark部署模式与作业提交.md
@@ -246,3 +246,6 @@ spark-submit \
+
+
+
\ No newline at end of file
diff --git a/notes/Spring+Mybtais+Phoenix整合.md b/notes/Spring+Mybtais+Phoenix整合.md
index 29f0660..4a2698b 100644
--- a/notes/Spring+Mybtais+Phoenix整合.md
+++ b/notes/Spring+Mybtais+Phoenix整合.md
@@ -384,3 +384,6 @@ UPSERT INTO us_population VALUES('CA','San Diego',1255540);
UPSERT INTO us_population VALUES('CA','San Jose',912332);
```
+
+
+
\ No newline at end of file
diff --git a/notes/Sqoop基本使用.md b/notes/Sqoop基本使用.md
index d4cec85..513a0b5 100644
--- a/notes/Sqoop基本使用.md
+++ b/notes/Sqoop基本使用.md
@@ -385,3 +385,6 @@ $ sqoop import ... --map-column-java id=String,value=Integer
## 参考资料
[Sqoop User Guide (v1.4.7)](http://sqoop.apache.org/docs/1.4.7/SqoopUserGuide.html)
+
+
+
\ No newline at end of file
diff --git a/notes/Sqoop简介与安装.md b/notes/Sqoop简介与安装.md
index 7603b8d..27b4a90 100644
--- a/notes/Sqoop简介与安装.md
+++ b/notes/Sqoop简介与安装.md
@@ -145,3 +145,6 @@ if [ ! -d "${ZOOKEEPER_HOME}" ]; then
fi
```
+
+
+
\ No newline at end of file
diff --git a/notes/Storm三种打包方式对比分析.md b/notes/Storm三种打包方式对比分析.md
index e021817..5b39fa7 100644
--- a/notes/Storm三种打包方式对比分析.md
+++ b/notes/Storm三种打包方式对比分析.md
@@ -313,3 +313,6 @@ jar:file:/usr/appjar/storm-hdfs-integration-1.0.jar!/defaults.yaml]
## 参考资料
关于 maven-shade-plugin 的更多配置可以参考: [maven-shade-plugin 入门指南](https://www.jianshu.com/p/7a0e20b30401)
+
+
+
\ No newline at end of file
diff --git a/notes/Storm和流处理简介.md b/notes/Storm和流处理简介.md
index 2949d75..e505c00 100644
--- a/notes/Storm和流处理简介.md
+++ b/notes/Storm和流处理简介.md
@@ -96,3 +96,6 @@ storm 和 Flink 都是真正意义上的实时计算框架。其对比如下:
1. [What is stream processing?](https://www.ververica.com/what-is-stream-processing)
2. [流计算框架 Flink 与 Storm 的性能对比](http://bigdata.51cto.com/art/201711/558416.htm)
+
+
+
\ No newline at end of file
diff --git a/notes/Storm核心概念详解.md b/notes/Storm核心概念详解.md
index 11cf338..b7208f3 100644
--- a/notes/Storm核心概念详解.md
+++ b/notes/Storm核心概念详解.md
@@ -157,3 +157,6 @@ Task 是组成 Component 的代码单元。Topology 启动后,1 个 Component
3. [Understanding the Parallelism of a Storm Topology](http://storm.apache.org/releases/1.2.2/Understanding-the-parallelism-of-a-Storm-topology.html)
4. [Storm nimbus 单节点宕机的处理](https://blog.csdn.net/daiyutage/article/details/52049519)
+
+
+
\ No newline at end of file
diff --git a/notes/Storm编程模型详解.md b/notes/Storm编程模型详解.md
index 1cabfdc..28f328b 100644
--- a/notes/Storm编程模型详解.md
+++ b/notes/Storm编程模型详解.md
@@ -509,3 +509,6 @@ private String productData() {
1. [Running Topologies on a Production Cluster](http://storm.apache.org/releases/2.0.0-SNAPSHOT/Running-topologies-on-a-production-cluster.html)
2. [Pre-defined Descriptor Files](http://maven.apache.org/plugins/maven-assembly-plugin/descriptor-refs.html)
+
+
+
\ No newline at end of file
diff --git a/notes/Storm集成HBase和HDFS.md b/notes/Storm集成HBase和HDFS.md
index 2c9c146..53ac365 100644
--- a/notes/Storm集成HBase和HDFS.md
+++ b/notes/Storm集成HBase和HDFS.md
@@ -487,3 +487,6 @@ SimpleHBaseMapper mapper = new SimpleHBaseMapper()
1. [Apache HDFS Integration](http://storm.apache.org/releases/2.0.0-SNAPSHOT/storm-hdfs.html)
2. [Apache HBase Integration](http://storm.apache.org/releases/2.0.0-SNAPSHOT/storm-hbase.html)
+
+
+
\ No newline at end of file
diff --git a/notes/Storm集成Kakfa.md b/notes/Storm集成Kakfa.md
index 5645bc0..f27717f 100644
--- a/notes/Storm集成Kakfa.md
+++ b/notes/Storm集成Kakfa.md
@@ -365,3 +365,6 @@ public class DefaultRecordTranslator implements RecordTranslator {
## 参考资料
1. [Storm Kafka Integration (0.10.x+)](http://storm.apache.org/releases/2.0.0-SNAPSHOT/storm-kafka-client.html)
+
+
+
\ No newline at end of file
diff --git a/notes/Storm集成Redis详解.md b/notes/Storm集成Redis详解.md
index 13cbaa4..46f2ff7 100644
--- a/notes/Storm集成Redis详解.md
+++ b/notes/Storm集成Redis详解.md
@@ -653,3 +653,6 @@ public class CustomRedisCountApp {
## 参考资料
1. [Storm Redis Integration](http://storm.apache.org/releases/2.0.0-SNAPSHOT/storm-redis.html)
+
+
+
\ No newline at end of file
diff --git a/notes/Zookeeper_ACL权限控制.md b/notes/Zookeeper_ACL权限控制.md
index 478a730..5b4b6ff 100644
--- a/notes/Zookeeper_ACL权限控制.md
+++ b/notes/Zookeeper_ACL权限控制.md
@@ -281,3 +281,6 @@ public class AclOperation {
```
> 完整源码见本仓库: https://github.com/heibaiying/BigData-Notes/tree/master/code/Zookeeper/curator
+
+
+
\ No newline at end of file
diff --git a/notes/Zookeeper_Java客户端Curator.md b/notes/Zookeeper_Java客户端Curator.md
index 0725f5f..eba7ca5 100644
--- a/notes/Zookeeper_Java客户端Curator.md
+++ b/notes/Zookeeper_Java客户端Curator.md
@@ -334,3 +334,6 @@ public void permanentChildrenNodesWatch() throws Exception {
Thread.sleep(1000 * 1000); //休眠以观察测试效果
}
```
+
+
+
\ No newline at end of file
diff --git a/notes/Zookeeper常用Shell命令.md b/notes/Zookeeper常用Shell命令.md
index 2358117..2b48bcf 100644
--- a/notes/Zookeeper常用Shell命令.md
+++ b/notes/Zookeeper常用Shell命令.md
@@ -263,3 +263,6 @@ Mode: standalone
Node count: 167
```
+
+
+
\ No newline at end of file
diff --git a/notes/Zookeeper简介及核心概念.md b/notes/Zookeeper简介及核心概念.md
index 29dc72d..6bfcd57 100644
--- a/notes/Zookeeper简介及核心概念.md
+++ b/notes/Zookeeper简介及核心概念.md
@@ -205,3 +205,6 @@ Zookeeper 还能解决大多数分布式系统中的问题:
+
+
+
\ No newline at end of file
diff --git a/notes/installation/Azkaban_3.x_编译及部署.md b/notes/installation/Azkaban_3.x_编译及部署.md
index b9984b2..d2a45b7 100644
--- a/notes/installation/Azkaban_3.x_编译及部署.md
+++ b/notes/installation/Azkaban_3.x_编译及部署.md
@@ -122,3 +122,6 @@ tar -zxvf azkaban-solo-server-3.70.0.tar.gz
+
+
+
\ No newline at end of file
diff --git a/notes/installation/Flink_Standalone_Cluster.md b/notes/installation/Flink_Standalone_Cluster.md
index 0ccf0d6..bfd2025 100644
--- a/notes/installation/Flink_Standalone_Cluster.md
+++ b/notes/installation/Flink_Standalone_Cluster.md
@@ -268,3 +268,6 @@ the classpath/dependencies.
+ [Standalone Cluster](https://ci.apache.org/projects/flink/flink-docs-release-1.9/ops/deployment/cluster_setup.html#standalone-cluster)
+ [JobManager High Availability (HA)](https://ci.apache.org/projects/flink/flink-docs-release-1.9/ops/jobmanager_high_availability.html)
+
+
+
\ No newline at end of file
diff --git a/notes/installation/HBase单机环境搭建.md b/notes/installation/HBase单机环境搭建.md
index 56cc598..566b6a9 100644
--- a/notes/installation/HBase单机环境搭建.md
+++ b/notes/installation/HBase单机环境搭建.md
@@ -225,3 +225,6 @@ hadoop001
验证方式二 :访问 HBase Web UI 界面,需要注意的是 1.2 版本的 HBase 的访问端口为 `60010`
+
+
+
\ No newline at end of file
diff --git a/notes/installation/HBase集群环境搭建.md b/notes/installation/HBase集群环境搭建.md
index d57bf60..80a7182 100644
--- a/notes/installation/HBase集群环境搭建.md
+++ b/notes/installation/HBase集群环境搭建.md
@@ -198,3 +198,6 @@ hadoop002 上的 HBase 处于备用状态:
+
+
+
\ No newline at end of file
diff --git a/notes/installation/Hadoop单机环境搭建.md b/notes/installation/Hadoop单机环境搭建.md
index 0531668..1bc301d 100644
--- a/notes/installation/Hadoop单机环境搭建.md
+++ b/notes/installation/Hadoop单机环境搭建.md
@@ -260,3 +260,6 @@ cp mapred-site.xml.template mapred-site.xml
方式二:查看 Web UI 界面,端口号为 `8088`:
+
+
+
\ No newline at end of file
diff --git a/notes/installation/Hadoop集群环境搭建.md b/notes/installation/Hadoop集群环境搭建.md
index 477dea4..88515d0 100644
--- a/notes/installation/Hadoop集群环境搭建.md
+++ b/notes/installation/Hadoop集群环境搭建.md
@@ -231,3 +231,6 @@ start-yarn.sh
hadoop jar /usr/app/hadoop-2.6.0-cdh5.15.2/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.0-cdh5.15.2.jar pi 3 3
```
+
+
+
\ No newline at end of file
diff --git a/notes/installation/Linux下Flume的安装.md b/notes/installation/Linux下Flume的安装.md
index 2e8b46a..3aef07f 100644
--- a/notes/installation/Linux下Flume的安装.md
+++ b/notes/installation/Linux下Flume的安装.md
@@ -66,3 +66,6 @@ export JAVA_HOME=/usr/java/jdk1.8.0_201

+
+
+
\ No newline at end of file
diff --git a/notes/installation/Linux下JDK安装.md b/notes/installation/Linux下JDK安装.md
index 5459e04..3923a4d 100644
--- a/notes/installation/Linux下JDK安装.md
+++ b/notes/installation/Linux下JDK安装.md
@@ -53,3 +53,6 @@ Java(TM) SE Runtime Environment (build 1.8.0_201-b09)
Java HotSpot(TM) 64-Bit Server VM (build 25.201-b09, mixed mode)
```
+
+
+
\ No newline at end of file
diff --git a/notes/installation/Linux下Python安装.md b/notes/installation/Linux下Python安装.md
index 20fa191..707f26e 100644
--- a/notes/installation/Linux下Python安装.md
+++ b/notes/installation/Linux下Python安装.md
@@ -69,3 +69,6 @@ Type "help", "copyright", "credits" or "license" for more information.
[root@hadoop001 app]#
```
+
+
+
\ No newline at end of file
diff --git a/notes/installation/Linux环境下Hive的安装部署.md b/notes/installation/Linux环境下Hive的安装部署.md
index 802b23f..22a0e72 100644
--- a/notes/installation/Linux环境下Hive的安装部署.md
+++ b/notes/installation/Linux环境下Hive的安装部署.md
@@ -179,3 +179,6 @@ Hive 内置了 HiveServer 和 HiveServer2 服务,两者都允许客户端使
```
+
+
+
\ No newline at end of file
diff --git a/notes/installation/Spark开发环境搭建.md b/notes/installation/Spark开发环境搭建.md
index c961ed6..60fe77f 100644
--- a/notes/installation/Spark开发环境搭建.md
+++ b/notes/installation/Spark开发环境搭建.md
@@ -176,3 +176,6 @@ IDEA 默认不支持 Scala 语言的开发,需要通过插件进行扩展。
**另外在 IDEA 中以本地模式运行 Spark 项目是不需要在本机搭建 Spark 和 Hadoop 环境的。**
+
+
+
\ No newline at end of file
diff --git a/notes/installation/Spark集群环境搭建.md b/notes/installation/Spark集群环境搭建.md
index 7b4eb2e..24ded20 100644
--- a/notes/installation/Spark集群环境搭建.md
+++ b/notes/installation/Spark集群环境搭建.md
@@ -188,3 +188,6 @@ spark-submit \
100
```
+
+
+
\ No newline at end of file
diff --git a/notes/installation/Storm单机环境搭建.md b/notes/installation/Storm单机环境搭建.md
index efcf586..95806fd 100644
--- a/notes/installation/Storm单机环境搭建.md
+++ b/notes/installation/Storm单机环境搭建.md
@@ -79,3 +79,6 @@ nohup sh storm logviewer &
验证方式二: 访问 8080 端口,查看 Web-UI 界面:
+
+
+
\ No newline at end of file
diff --git a/notes/installation/Storm集群环境搭建.md b/notes/installation/Storm集群环境搭建.md
index 69b9c21..2afc41e 100644
--- a/notes/installation/Storm集群环境搭建.md
+++ b/notes/installation/Storm集群环境搭建.md
@@ -165,3 +165,6 @@ nohup sh storm logviewer &
这里手动模拟主 `Nimbus` 异常的情况,在 hadoop001 上使用 `kill` 命令杀死 `Nimbus` 的线程,此时可以看到 hadoop001 上的 `Nimbus` 已经处于 `offline` 状态,而 hadoop002 上的 `Nimbus` 则成为新的 `Leader`。
+
+
+
\ No newline at end of file
diff --git a/notes/installation/Zookeeper单机环境和集群环境搭建.md b/notes/installation/Zookeeper单机环境和集群环境搭建.md
index eec970b..7698a4f 100644
--- a/notes/installation/Zookeeper单机环境和集群环境搭建.md
+++ b/notes/installation/Zookeeper单机环境和集群环境搭建.md
@@ -185,3 +185,6 @@ echo "3" > /usr/local/zookeeper-cluster/data/myid
+
+
+
\ No newline at end of file
diff --git a/notes/installation/基于Zookeeper搭建Hadoop高可用集群.md b/notes/installation/基于Zookeeper搭建Hadoop高可用集群.md
index d54db8d..5d69f31 100644
--- a/notes/installation/基于Zookeeper搭建Hadoop高可用集群.md
+++ b/notes/installation/基于Zookeeper搭建Hadoop高可用集群.md
@@ -512,3 +512,6 @@ yarn-daemon.sh start resourcemanager
[Hadoop NameNode 高可用 (High Availability) 实现解析](https://www.ibm.com/developerworks/cn/opensource/os-cn-hadoop-name-node/index.html)
+
+
+
\ No newline at end of file
diff --git a/notes/installation/基于Zookeeper搭建Kafka高可用集群.md b/notes/installation/基于Zookeeper搭建Kafka高可用集群.md
index 3a3fb6b..85502f1 100644
--- a/notes/installation/基于Zookeeper搭建Kafka高可用集群.md
+++ b/notes/installation/基于Zookeeper搭建Kafka高可用集群.md
@@ -237,3 +237,6 @@ bin/kafka-topics.sh --describe --bootstrap-server hadoop001:9092 --topic my-repl
+
+
+
\ No newline at end of file
diff --git a/notes/installation/虚拟机静态IP及多IP配置.md b/notes/installation/虚拟机静态IP及多IP配置.md
index aee8ceb..fc6c49e 100644
--- a/notes/installation/虚拟机静态IP及多IP配置.md
+++ b/notes/installation/虚拟机静态IP及多IP配置.md
@@ -116,3 +116,6 @@ DEVICE=enp0s8
使用时只需要根据所处的网络环境,勾选对应的网卡即可,不使用的网卡尽量不要勾选启动。
+
+
+
\ No newline at end of file
diff --git a/notes/大数据学习路线.md b/notes/大数据学习路线.md
index acaffe0..fe1520c 100644
--- a/notes/大数据学习路线.md
+++ b/notes/大数据学习路线.md
@@ -167,3 +167,6 @@ Scala 是一门综合了面向对象和函数式编程概念的静态类型的
以上就是个人关于大数据的学习心得和路线推荐。本篇文章对大数据技术栈做了比较狭义的限定,随着学习的深入,大家也可以把 Python 语言、推荐系统、机器学习等逐步加入到自己的大数据技术栈中。
+
+
+
\ No newline at end of file
diff --git a/notes/大数据常用软件安装指南.md b/notes/大数据常用软件安装指南.md
index 390afb8..6c39b51 100644
--- a/notes/大数据常用软件安装指南.md
+++ b/notes/大数据常用软件安装指南.md
@@ -65,3 +65,6 @@ hadoop-2.6.0-cdh5.15.2.tar.gz
hbase-1.2.0-cdh5.15.2
hive-1.1.0-cdh5.15.2.tar.gz
```
+
+
+
\ No newline at end of file
diff --git a/notes/大数据应用常用打包方式.md b/notes/大数据应用常用打包方式.md
index f166e3d..adab226 100644
--- a/notes/大数据应用常用打包方式.md
+++ b/notes/大数据应用常用打包方式.md
@@ -304,3 +304,6 @@ Storm 官方文档 Running Topologies on a Production Cluster 章节:
+ maven-dependency-plugin : http://maven.apache.org/components/plugins/maven-dependency-plugin/
关于 maven-shade-plugin 的更多配置也可以参考该博客: [maven-shade-plugin 入门指南](https://www.jianshu.com/p/7a0e20b30401)
+
+
+
\ No newline at end of file
diff --git a/notes/大数据技术栈思维导图.md b/notes/大数据技术栈思维导图.md
index a28d256..d6e312c 100644
--- a/notes/大数据技术栈思维导图.md
+++ b/notes/大数据技术栈思维导图.md
@@ -1,2 +1,5 @@
+
+
+
\ No newline at end of file
diff --git a/notes/资料分享与工具推荐.md b/notes/资料分享与工具推荐.md
index b7e3b33..031d758 100644
--- a/notes/资料分享与工具推荐.md
+++ b/notes/资料分享与工具推荐.md
@@ -54,3 +54,6 @@ ProcessOn 是一个在线绘图平台,使用起来非常便捷,可以用于
官方网站:https://www.processon.com/
+
+
+
\ No newline at end of file
diff --git a/pictures/weixin-desc.png b/pictures/weixin-desc.png
index 3217ead..e6a2b6d 100644
Binary files a/pictures/weixin-desc.png and b/pictures/weixin-desc.png differ