# MapReduce Word Count Example

<nav>
<a href="#1-mapreduce-overview">1. MapReduce Overview</a><br/>
<a href="#2-the-mapreduce-programming-model-in-brief">2. The MapReduce Programming Model in Brief</a><br/>
<a href="#3-the-mapreduce-programming-model-in-detail">3. The MapReduce Programming Model in Detail</a><br/>
<a href="#31-inputformat--recordreaders">3.1 InputFormat & RecordReaders</a><br/>
<a href="#32-combiner">3.2 combiner</a><br/>
<a href="#33-partitioner">3.3 partitioner</a><br/>
<a href="#34-sort--combiner">3.4 sort & combiner</a><br/>
<a href="#4-mapreduce-word-count-example">4. MapReduce Word Count Example</a><br/>
<a href="#5-word-count-example-going-further">5. Word Count Example: Going Further</a><br/>
</nav>

## 1. MapReduce Overview

Hadoop MapReduce is a distributed computing framework for writing applications that process large amounts of data (typically TB-scale datasets) in parallel, on large clusters, in a reliable and fault-tolerant way.

### 3.2 combiner

The combiner is an optional step after the map phase. It is essentially a localized reduce operation that does a simple merge of records with duplicate keys in the intermediate output of each map task.

For example, when counting word frequencies, the map phase emits a 1 every time it encounters the word hadoop; if hadoop appears many times in the document, the map output is highly redundant. Merging the values for each key before the reduce phase shrinks the intermediate files and so reduces the amount of data sent over the network; network bandwidth is often the scarcest resource in a Hadoop computation and the main bottleneck.

However, a combiner is not appropriate for every scenario. The rule for using one is that it must not change the final input to the reduce computation. If the job only computes a total, a maximum, or a minimum, a combiner is safe; but if it computes an average, adding a combiner makes the final reduce result wrong.
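
To make the averaging pitfall concrete, here is a tiny standalone sketch in plain Java (an illustration of mine, not MapReduce code): two map tasks see {1, 2, 3} and {4, 5}; averaging their local averages gives 3.25, while the true average of all five values is 3.

```java
import java.util.stream.DoubleStream;

public class AveragePitfall {
    public static void main(String[] args) {
        // Local "combiner" output of each map task: the per-task average
        double avg1 = DoubleStream.of(1, 2, 3).average().orElse(0); // 2.0
        double avg2 = DoubleStream.of(4, 5).average().orElse(0);    // 4.5
        // A reducer that can only see the combiner outputs gets the wrong answer
        System.out.println((avg1 + avg2) / 2);                      // 3.25
        // The true average over all the values
        System.out.println(DoubleStream.of(1, 2, 3, 4, 5).average().orElse(0)); // 3.0
    }
}
```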

Without a combiner:

<div align="center"> <img src="https://github.com/heibaiying/BigData-Notes/blob/master/pictures/mapreduce-without-combiners.png"/> </div>

With a combiner:

<div align="center"> <img src="https://github.com/heibaiying/BigData-Notes/blob/master/pictures/mapreduce-with-combiners.png"/> </div>

With the combiner, the data that has to be shuffled to the reducers drops from 12 keys to 10. How much you save depends on how often keys repeat; the word count example later in this article shows the combiner cutting the transferred volume by a factor of several hundred.

### 3.3 partitioner

## 4. MapReduce Word Count Example

### 4.1 Project Overview

Here is a classic example: counting word frequencies. The goal is to count how many times each word appears in the sample data below.

```properties
Spark HBase
Hive Flink Storm Hadoop HBase Spark
Flink
HBase Storm
HBase Hadoop Hive Flink
HBase Flink Hive Storm
Hive Flink Hadoop
HBase Hive
Hadoop Spark HBase Storm
HBase Hadoop Hive Flink
HBase Flink Hive Storm
Hive Flink Hadoop
HBase Hive
```

To make development easier, the project source includes a utility class, `WordCountDataUtils`, that generates sample files for the word count:

+ it can write the sample file to the local file system, for local testing;
+ it can write the sample file directly to HDFS, for testing on the server.

> Full source code for this article is available at: [hadoop-word-count](https://github.com/heibaiying/BigData-Notes/tree/master/code/Hadoop/hadoop-word-count)

```java
import org.apache.commons.lang3.StringUtils;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

import java.io.IOException;
import java.net.URI;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;
import java.util.Random;

/**
 * Generates mock data for the word count example.
 */
public class WordCountDataUtils {

    public static final List<String> WORD_LIST = Arrays.asList("Spark", "Hadoop", "HBase", "Storm", "Flink", "Hive");

    /**
     * Generates mock word frequency data.
     *
     * @return the generated data
     */
    private static String generateData() {
        StringBuilder builder = new StringBuilder();
        for (int i = 0; i < 1000; i++) {
            Collections.shuffle(WORD_LIST);
            Random random = new Random();
            // each line gets a random number (1..WORD_LIST.size()) of shuffled words
            int endIndex = random.nextInt(WORD_LIST.size()) % (WORD_LIST.size()) + 1;
            String line = StringUtils.join(WORD_LIST.toArray(), "\t", 0, endIndex);
            builder.append(line).append("\n");
        }
        return builder.toString();
    }


    /**
     * Generates mock word frequency data and writes it to the local file system.
     *
     * @param outputPath output file path
     */
    private static void generateDataToLocal(String outputPath) {
        try {
            java.nio.file.Path path = Paths.get(outputPath);
            if (Files.exists(path)) {
                Files.delete(path);
            }
            Files.write(path, generateData().getBytes(), StandardOpenOption.CREATE);
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

    /**
     * Generates mock word frequency data and writes it to HDFS.
     *
     * @param hdfsUrl          HDFS address
     * @param user             hadoop user name
     * @param outputPathString target path on HDFS
     */
    private static void generateDataToHDFS(String hdfsUrl, String user, String outputPathString) {
        FileSystem fileSystem = null;
        try {
            fileSystem = FileSystem.get(new URI(hdfsUrl), new Configuration(), user);
            Path outputPath = new Path(outputPathString);
            if (fileSystem.exists(outputPath)) {
                fileSystem.delete(outputPath, true);
            }
            FSDataOutputStream out = fileSystem.create(outputPath);
            out.write(generateData().getBytes());
            out.flush();
            out.close();
            fileSystem.close();
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

    public static void main(String[] args) {
        //generateDataToLocal("input.txt");
        generateDataToHDFS("hdfs://192.168.0.107:8020", "root", "/wordcount/input.txt");
    }
}
```

### 4.2 WordCountMapper

**Mapper implementation**:

```java
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

import java.io.IOException;

/**
 * Splits each line of input on the separator and emits (word, 1) pairs.
 */
public class WordCountMapper extends Mapper<LongWritable, Text, Text, IntWritable> {

    @Override
    protected void map(LongWritable key, Text value, Context context) throws IOException, InterruptedException {
        String[] words = value.toString().split("\t");
        for (String word : words) {
            context.write(new Text(word), new IntWritable(1));
        }
    }

}
```

**Code notes**:

The splitting step is performed for us by the Hadoop framework, and WordCountMapper corresponds to the mapping step in the figure below. WordCountMapper extends the Mapper class, a generic class defined as follows:

```java
public class Mapper<KEYIN, VALUEIN, KEYOUT, VALUEOUT> {
    ......
}
```

+ KEYIN: the type of the mapping input key, i.e. each line's offset (the position of the line's first character within the text); a long, corresponding to Hadoop's LongWritable type;
+ VALUEIN: the type of the mapping input value, i.e. a line of text; a String, corresponding to Hadoop's Text type;
+ KEYOUT: the type of the mapping output key, i.e. a single word; a String, corresponding to Hadoop's Text type;
+ VALUEOUT: the type of the mapping output value, i.e. the number of times a word occurs; an int here, corresponding to Hadoop's IntWritable type.

In MapReduce you must use the types defined by Hadoop, because Hadoop's predefined types are all serializable and comparable: every one of them implements the `WritableComparable` interface.
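
As a small aside, here is a standalone sketch (an illustration of mine, not part of the project) of what those properties buy you: the Hadoop box types can be compared directly, and because they are mutable they can be reused across records.

```java
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;

public class WritableTypesDemo {
    public static void main(String[] args) {
        // Text implements WritableComparable, so instances sort without extra code
        Text a = new Text("Flink");
        Text b = new Text("Spark");
        System.out.println(a.compareTo(b) < 0); // true: "Flink" sorts before "Spark"

        // The box types are mutable, so one instance can be reused for many records
        IntWritable count = new IntWritable(1);
        count.set(count.get() + 1);
        System.out.println(count.get());        // 2
    }
}
```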

<div align="center"> <img src="https://github.com/heibaiying/BigData-Notes/blob/master/pictures/hadoop-code-mapping.png"/> </div>

### 4.3 WordCountReducer

**Reducer implementation**:

```java
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

import java.io.IOException;

/**
 * Sums the counts for each word.
 */
public class WordCountReducer extends Reducer<Text, IntWritable, Text, IntWritable> {

    @Override
    protected void reduce(Text key, Iterable<IntWritable> values, Context context) throws IOException, InterruptedException {
        int count = 0;
        for (IntWritable value : values) {
            count += value.get();
        }
        context.write(key, new IntWritable(count));
    }
}
```

**Code notes**:

Here the key is a word, and values is an iterable type, because the output of the shuffling step actually looks like the figure below, i.e. `key,(1,1,1,1,1,1,1,.....)`.

<div align="center"> <img src="https://github.com/heibaiying/BigData-Notes/blob/master/pictures/hadoop-code-reducer.png"/> </div>

### 4.4 WordCountApp

Assemble the MapReduce job and submit it to the cluster to run:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

import java.net.URI;

/**
 * Assembles the job and submits it to the cluster to run.
 */
public class WordCountApp {

    // Hard-coded here so the parameters are easy to see; in real development pass them in from outside
    private static final String HDFS_URL = "hdfs://192.168.0.107:8020";
    private static final String HADOOP_USER_NAME = "root";

    public static void main(String[] args) throws Exception {

        // The input and output paths are specified by external arguments
        if (args.length < 2) {
            System.out.println("Input and output paths are necessary!");
            return;
        }

        // The hadoop user name must be set, otherwise creating directories on HDFS may fail with a permission exception
        System.setProperty("HADOOP_USER_NAME", HADOOP_USER_NAME);

        Configuration configuration = new Configuration();
        // Specify the HDFS address
        configuration.set("fs.defaultFS", HDFS_URL);

        // Create a Job
        Job job = Job.getInstance(configuration);

        // Set the main class to run
        job.setJarByClass(WordCountApp.class);

        // Set the Mapper and Reducer
        job.setMapperClass(WordCountMapper.class);
        job.setReducerClass(WordCountReducer.class);

        // Set the types of the Mapper output key and value
        job.setMapOutputKeyClass(Text.class);
        job.setMapOutputValueClass(IntWritable.class);

        // Set the types of the Reducer output key and value
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);

        // If the output directory already exists it must be deleted first,
        // otherwise rerunning the program throws an exception
        FileSystem fileSystem = FileSystem.get(new URI(HDFS_URL), configuration, HADOOP_USER_NAME);
        Path outputPath = new Path(args[1]);
        if (fileSystem.exists(outputPath)) {
            fileSystem.delete(outputPath, true);
        }

        // Set the input and output paths of the job
        FileInputFormat.setInputPaths(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, outputPath);

        // Submit the job to the cluster and wait for it to finish; true means print progress
        boolean result = job.waitForCompletion(true);

        // Close the fileSystem created earlier
        fileSystem.close();

        // Terminate the running JVM with an exit code reflecting the job result
        System.exit(result ? 0 : -1);
    }
}
```

One clarification: `setOutputKeyClass` and `setOutputValueClass` control the output types of the reduce function. By default the map function's output types are the same as the reduce function's; if they differ, they must be set explicitly with `setMapOutputKeyClass` and `setMapOutputValueClass`.
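
For instance, in a hypothetical variant of this job (my example, not the project's code) where the reducer emits just the words without counts, the two pairs of setters would diverge:

```java
// Mapper still emits (Text, IntWritable), so the map output types must be set explicitly
job.setMapOutputKeyClass(Text.class);
job.setMapOutputValueClass(IntWritable.class);
// Reducer emits (Text, NullWritable) in this hypothetical variant
job.setOutputKeyClass(Text.class);
job.setOutputValueClass(NullWritable.class);
```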

### 4.5 Submitting and Running on the Server

In day-to-day development you can set up a Hadoop environment on your own machine and simply run the `main` method. Here we focus on packaging the job and submitting it to a server:

Since the project has no third-party dependencies other than Hadoop itself, it can be packaged directly:

```shell
# mvn clean package
```

Run the job with the following command:

```shell
hadoop jar /usr/appjar/hadoop-word-count-1.0.jar \
com.heibaiying.WordCountApp \
/wordcount/input.txt /wordcount/output/WordCountApp
```

When the job finishes, inspect the generated output directory on HDFS:

```shell
# list the output directory
hadoop fs -ls /wordcount/output/WordCountApp

# view the word counts
hadoop fs -cat /wordcount/output/WordCountApp/part-r-00000
```

<div align="center"> <img src="https://github.com/heibaiying/BigData-Notes/blob/master/pictures/hadoop-wordcountapp.png"/> </div>

## 5. Word Count Example: Going Further

### 5.1 combiner

#### 1. combiner implementation

Enabling the combiner is straightforward: add a single line when assembling the job:

```java
// set the Combiner
job.setCombinerClass(WordCountReducer.class);
```
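
Reusing `WordCountReducer` as the combiner works here because summation is associative and commutative: running the sum zero, one, or several times over partial data cannot change the final counts.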

#### 2. Test results

Adding the combiner does not change the statistics, but its effect is visible in the job counters printed in the logs:

Log output without the combiner:

<div align="center"> <img src="https://github.com/heibaiying/BigData-Notes/blob/master/pictures/hadoop-no-combiner.png"/> </div>

Log output with the combiner:

<div align="center"> <img src="https://github.com/heibaiying/BigData-Notes/blob/master/pictures/hadoop-combiner.png"/> </div>

Here there is only one input file and it is smaller than 128 MB, so a single map task handles it. After the combiner runs, the records shrink from 3519 to 6 (the sample contains only 6 distinct words), which the `Reduce input records` counter in the log also confirms. In this example the combiner's effect is very pronounced.

### 5.2 Partitioner

#### 1. The default Partitioner rule

Suppose we have a new requirement: write the counts for different words to different output files. Requirements like this are actually quite common; for example, when computing product sales you may need to split the results by product category.

This calls for a custom Partitioner. First, the default partitioning rule: if none is specified when building the job, the default is `HashPartitioner`, implemented as follows:

```java
public class HashPartitioner<K, V> extends Partitioner<K, V> {

  public int getPartition(K key, V value,
                          int numReduceTasks) {
    return (key.hashCode() & Integer.MAX_VALUE) % numReduceTasks;
  }

}
```

It hashes the key and takes the remainder modulo `numReduceTasks`. Because `numReduceTasks` defaults to 1, all of our earlier results ended up in a single output file.
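
As a standalone sketch of this rule in action (an illustration of mine; the actual partition numbers depend on Java's `String.hashCode`), with 3 reduce tasks every occurrence of the same key is guaranteed to land in the same partition:

```java
int numReduceTasks = 3;
for (String key : new String[]{"Hadoop", "Spark", "HBase", "Hadoop"}) {
    // Same computation as HashPartitioner: mask the sign bit, then take the remainder
    int partition = (key.hashCode() & Integer.MAX_VALUE) % numReduceTasks;
    System.out.println(key + " -> partition " + partition);
}
```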

#### 2. A custom Partitioner

Here we extend `Partitioner` to define our own rule, partitioning by word:

```java
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Partitioner;

/**
 * Custom partitioner: partition by word
 */
public class CustomPartitioner extends Partitioner<Text, IntWritable> {

    public int getPartition(Text text, IntWritable intWritable, int numPartitions) {
        return WordCountDataUtils.WORD_LIST.indexOf(text.toString());
    }
}
```

Then, when building the job, register the custom partitioning rule and set the number of reduce tasks accordingly:

```java
// set the custom partitioning rule
job.setPartitionerClass(CustomPartitioner.class);
// set the number of reduce tasks
job.setNumReduceTasks(WordCountDataUtils.WORD_LIST.size());
```
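
One caveat about this partitioner: `indexOf` returns -1 for any word not in `WORD_LIST`, and Hadoop rejects a negative partition number with an illegal-partition error. A defensive variant (my sketch, not the project's code) could reserve one extra reduce task as a catch-all:

```java
// Pair with job.setNumReduceTasks(WordCountDataUtils.WORD_LIST.size() + 1):
// the extra, last partition collects unexpected words instead of crashing the job
public int getPartition(Text text, IntWritable intWritable, int numPartitions) {
    int index = WordCountDataUtils.WORD_LIST.indexOf(text.toString());
    return index >= 0 ? index : numPartitions - 1;
}
```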

#### 3. Test results

The test produces 6 output files, each containing the full count for one word:

<div align="center"> <img src="https://github.com/heibaiying/BigData-Notes/blob/master/pictures/hadoop-wordcountcombinerpartition.png"/> </div>

## References

1. [分布式计算框架MapReduce](https://zhuanlan.zhihu.com/p/28682581)
2. [Apache Hadoop 2.9.2 > MapReduce Tutorial](http://hadoop.apache.org/docs/stable/hadoop-mapreduce-client/hadoop-mapreduce-client-core/MapReduceTutorial.html)
3. [MapReduce - Combiners](https://www.tutorialscampus.com/tutorials/map-reduce/combiners.htm)

# Configuring a Static IP and Multiple IPs for a Virtual Machine

> VM environment: CentOS 7.6

<nav>
<a href="#1-configuring-a-static-ip">1. Configuring a Static IP</a><br/>
<a href="#2-configuring-multiple-static-ips">2. Configuring Multiple Static IPs</a><br/>
</nav>

## 1. Configuring a Static IP

### 1. Check the current NIC name

Run `ifconfig`; on this machine the NIC is named `enp0s3`.

<div align="center"> <img src="https://github.com/heibaiying/BigData-Notes/blob/master/pictures/en0s3.png"/> </div>

### 2. Edit the network configuration file

```shell
# vim /etc/sysconfig/network-scripts/ifcfg-enp0s3
```

Add the following network configuration:

+ IPADDR must be in the same subnet as the host machine;
+ GATEWAY must match the host machine's gateway.

```properties
BOOTPROTO=static
IPADDR=192.168.0.107
NETMASK=255.255.255.0
GATEWAY=192.168.0.1
DNS1=192.168.0.1
ONBOOT=yes
```

My host machine's configuration:

<div align="center"> <img src="https://github.com/heibaiying/BigData-Notes/blob/master/pictures/ipconfig.png"/> </div>

The complete configuration file after the changes:

```properties
TYPE=Ethernet
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=static
IPADDR=192.168.0.107
NETMASK=255.255.255.0
GATEWAY=192.168.0.1
BROADCAST=192.168.0.255
DNS1=192.168.0.1
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME=enp0s3
UUID=03d45df1-8514-4774-9b47-fddd6b9d9fca
DEVICE=enp0s3
ONBOOT=yes
```

### 3. Restart the network service

```shell
# systemctl restart network
```

## 2. Configuring Multiple Static IPs

A word on when multiple static IPs are useful: the same computer is often used on different networks (office, home, school, and so on). With an IP configured for each network and all of them mapped to the same hostname in the `hosts` file, software such as Hadoop can be started directly regardless of which network you are on.
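
As a concrete illustration (the second address 192.168.200.226 and the hostname hadoop001 are placeholders of mine, not values from this guide), the `hosts` entries might look like this:

```shell
# /etc/hosts: both static IPs map to the same hostname,
# so configuration files can refer to hadoop001 on either network
192.168.0.107    hadoop001
192.168.200.226  hadoop001
```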

### 1. Enable multiple NICs

The VM used here is VirtualBox; enable the additional network adapter as follows:

<div align="center"> <img src="https://github.com/heibaiying/BigData-Notes/blob/master/pictures/virtualbox-multi-network.png"/> </div>

### 2. Check the NIC name

Run `ifconfig` to find the name of the second NIC; here it is `enp0s8`.

<div align="center"> <img src="https://github.com/heibaiying/BigData-Notes/blob/master/pictures/mutli-net-ip.png"/> </div>

### 3. Configure the second NIC

Enabling the extra NIC does not generate a configuration file automatically; copy `ifcfg-enp0s3` and modify the copy.

```shell
# cp ifcfg-enp0s3 ifcfg-enp0s8
```

Configure the static IP as described above. Besides the static IP parameters, the following three parameters also need to be changed, and the UUID must differ from the one in `ifcfg-enp0s3`:

```shell
NAME=enp0s8
UUID=03d45df1-8514-4774-9b47-fddd6b9d9fcb
DEVICE=enp0s8
```

### 4. Restart the network service

```shell
# systemctl restart network
```