Zookeeper access control
This commit is contained in:
parent 7d717f8a08
commit 2330ba1569

README.md (13 lines changed)
@ -151,11 +151,11 @@ TODO

## 11. Zookeeper

-1. Introduction to Zookeeper and Core Concepts
+1. [Introduction to Zookeeper and Core Concepts](https://github.com/heibaiying/BigData-Notes/blob/master/notes/Zookeeper简介及核心概念.md)
 2. [Zookeeper Standalone and Cluster Environment Setup](https://github.com/heibaiying/BigData-Notes/blob/master/notes/installation/Zookeeper单机环境和集群环境搭建.md)
-3. Zookeeper Distributed Lock Implementations
-4. In-Depth Analysis of Zookeeper Cluster Upgrade and Migration
-5. The ZAB Protocol and Leader Election
+3. [Common Zookeeper Shell Commands](https://github.com/heibaiying/BigData-Notes/blob/master/notes/Zookeeper常用Shell命令.md)
+4. [Zookeeper Java Client - Apache Curator](https://github.com/heibaiying/BigData-Notes/blob/master/notes/Zookeeper_Java客户端Curator.md)
+5. [Zookeeper ACL Access Control](https://github.com/heibaiying/BigData-Notes/blob/master/notes/Zookeeper_ACL权限控制.md)

## 12. Scala

@ -176,7 +176,6 @@ TODO

-## 13. Common Fundamentals
+## 13. Common Content

-1. [Common Packaging Methods for Big Data Applications](https://github.com/heibaiying/BigData-Notes/blob/master/notes/大数据应用常用打包方式.md)
-2. Common File Formats in Big Data
+1. [Common Packaging Methods for Big Data Applications](https://github.com/heibaiying/BigData-Notes/blob/master/notes/大数据应用常用打包方式.md)
@ -33,7 +33,7 @@ public class AclOperation {

 public void prepare() {
     RetryPolicy retryPolicy = new RetryNTimes(3, 5000);
     client = CuratorFrameworkFactory.builder()
-            .authorization("digest", "heibai:123456".getBytes())
+            .authorization("digest", "heibai:123456".getBytes()) // equivalent to the addauth command
             .connectString(zkServerPath)
             .sessionTimeoutMs(10000).retryPolicy(retryPolicy)
             .namespace("workspace").build();

@ -113,7 +113,7 @@ public class BasicOperation {

 /**
- * Check whether a child node exists
+ * Check whether a node exists
  */
 @Test
 public void existNode() throws Exception {

@ -160,7 +160,7 @@ public class BasicOperation {

 /**
- * Register a watcher on child nodes
+ * Watch operations on child nodes
  */
 @Test
 public void permanentChildrenNodesWatch() throws Exception {

@ -181,7 +181,7 @@ public class BasicOperation {

 childrenCache.getListenable().addListener(new PathChildrenCacheListener() {

-    public void childEvent(CuratorFramework client, PathChildrenCacheEvent event) throws Exception {
+    public void childEvent(CuratorFramework client, PathChildrenCacheEvent event) {
         switch (event.getType()) {
             case INITIALIZED:
                 System.out.println("childrenCache initialization complete");
notes/Zookeeper_ACL权限控制.md (new file, 264 lines)
@ -0,0 +1,264 @@
# Zookeeper ACL

## 1. Preface

To prevent the data stored in Zookeeper from being modified by mistake, whether by other programs or by people, Zookeeper provides ACLs (Access Control Lists) for permission control: only users holding the corresponding permissions may create, read, update, or delete a node. The sections below cover permission management first with the native shell commands and then with the Apache Curator client.
## 2. Managing Permissions with the Shell

### 2.1 Setting and Viewing Permissions

There are two commands for setting permissions (an ACL) on a node:

```shell
# 1. Grant permissions on an existing node
setAcl path acl

# 2. Specify permissions when creating the node
create [-s] [-e] path data acl
```

To view the permissions of a given node:

```shell
getAcl path
```
### 2.2 What an ACL Consists Of

A Zookeeper ACL has three parts, [scheme:id:permissions]. The built-in options for schemes and permissions are as follows.

Permissions options:

- CREATE: allows creating child nodes;
- READ: allows reading the node's data and listing its children;
- WRITE: allows setting the node's data;
- DELETE: allows deleting child nodes;
- ADMIN: allows setting permissions on the node.

Schemes options:

- world: the default scheme; every client holds the specified permissions. world has only one id option, anyone, so the usual form is `world:anyone:[permissions]`;
- auth: only authenticated users hold the specified permissions. The usual form is `auth:user:password:[permissions]`. With this scheme you must log in first; the user and password of the resulting ACL are then taken from the login credentials;
- digest: only authenticated users hold the specified permissions. The usual form is `digest:user:BASE64(SHA1(password)):[permissions]`; here the password must be hashed with SHA1 and then BASE64-encoded;
- ip: only clients from a specific IP hold the specified permissions. The usual form is `ip:192.168.0.168:[permissions]`;
- super: the super administrator, who holds all permissions; it is configured by modifying the Zookeeper startup script.
### 2.3 Adding Authentication Information

The following command adds user authentication information to the current session, which is equivalent to logging in:

```shell
# Format
addauth scheme auth

# Example: add credentials for the user heibai with password root
addauth digest heibai:root
```
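For comparison, the Curator client covered in section 3 performs the same login when the client instance is built; a minimal sketch with the same credentials as above (the server address is an illustrative assumption, and the full setup appears in section 3.2):

```java
// Curator equivalent of `addauth digest heibai:root`; the connection string is illustrative.
CuratorFramework client = CuratorFrameworkFactory.builder()
        .connectString("192.168.0.226:2181")
        .retryPolicy(new RetryNTimes(3, 5000))
        .authorization("digest", "heibai:root".getBytes())
        .build();
client.start();
```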
### 2.4 Examples of Setting Permissions

#### 1. world scheme

world is the default scheme: if you create a node without specifying an ACL, it gets world permissions.

```shell
[zk: localhost:2181(CONNECTED) 32] create /hadoop 123
Created /hadoop
[zk: localhost:2181(CONNECTED) 33] getAcl /hadoop
'world,'anyone   # default permissions
: cdrwa
[zk: localhost:2181(CONNECTED) 34] setAcl /hadoop world:anyone:cwda   # change the node so that no client may read it
....
[zk: localhost:2181(CONNECTED) 35] get /hadoop
Authentication is not valid : /hadoop   # insufficient permissions
```
#### 2. auth scheme

```shell
[zk: localhost:2181(CONNECTED) 36] addauth digest heibai:heibai   # log in
[zk: localhost:2181(CONNECTED) 37] setAcl /hadoop auth::cdrwa     # set permissions
[zk: localhost:2181(CONNECTED) 38] getAcl /hadoop                 # view permissions
'digest,'heibai:sCxtVJ1gPG8UW/jzFHR0A1ZKY5s=   # username and (hashed) password; note the returned scheme is digest
: cdrwa

# The username and password always come from the login credentials; anything you specify when setting the ACL is ignored
[zk: localhost:2181(CONNECTED) 39] setAcl /hadoop auth:root:root:cdrwa   # try to specify user root with password root
[zk: localhost:2181(CONNECTED) 40] getAcl /hadoop
'digest,'heibai:sCxtVJ1gPG8UW/jzFHR0A1ZKY5s=   # no effect; the credentials are still heibai's
: cdrwa
```
#### 3. digest scheme

```shell
[zk:44] create /spark "spark" digest:heibai:sCxtVJ1gPG8UW/jzFHR0A1ZKY5s=:cdrwa   # specify the username and the hashed password
[zk:45] getAcl /spark   # view permissions
'digest,'heibai:sCxtVJ1gPG8UW/jzFHR0A1ZKY5s=   # the returned scheme is digest
: cdrwa
```

At this point you can see that permissions set with the auth scheme and with the digest scheme end up the same: the resulting scheme is `digest` in both cases. To some extent you can think of auth as a convenience wrapper around digest. With digest you have to write out the username and hashed password every time, which is tedious; with auth you log in once and never have to repeat them.
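The hashed form can also be produced programmatically. The `zookeeper` jar ships a helper for this, the same class used by the Curator example in section 3.2 below; a minimal sketch (the wrapper class and `main` method are ours):

```java
import org.apache.zookeeper.server.auth.DigestAuthenticationProvider;

public class DigestDemo {
    public static void main(String[] args) throws Exception {
        // Prints something like "heibai:sCxtVJ1gPG8UW/jzFHR0A1ZKY5s=",
        // i.e. user:BASE64(SHA1(user:password)), ready to paste into a digest ACL.
        System.out.println(DigestAuthenticationProvider.generateDigest("heibai:heibai"));
    }
}
```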
#### 4. ip scheme

Restricts access to clients coming from a specific IP.

```shell
[zk: localhost:2181(CONNECTED) 46] create /hive "hive" ip:192.168.0.108:cdrwa
[zk: localhost:2181(CONNECTED) 47] get /hive
Authentication is not valid : /hive   # the current host can no longer access the node
```

At this point you can either access the node from a client on the allowed IP, or configure a super administrator using the super scheme described below.
#### 5. super scheme

Modify the startup script `zkServer.sh` and add the administrator account and password at the designated spot:

```shell
"-Dzookeeper.DigestAuthenticationProvider.superDigest=heibai:sCxtVJ1gPG8UW/jzFHR0A1ZKY5s="
```



After the change, restart the service with `zkServer.sh restart`, then access the IP-restricted node again:

```shell
[zk: localhost:2181(CONNECTED) 0] get /hive   # access denied
Authentication is not valid : /hive
[zk: localhost:2181(CONNECTED) 1] addauth digest heibai:heibai   # log in (add authentication info)
[zk: localhost:2181(CONNECTED) 2] get /hive   # access succeeds
hive
cZxid = 0x158
ctime = Sat May 25 09:11:29 CST 2019
mZxid = 0x158
mtime = Sat May 25 09:11:29 CST 2019
pZxid = 0x158
cversion = 0
dataVersion = 0
aclVersion = 0
ephemeralOwner = 0x0
dataLength = 4
numChildren = 0
```
## 3. Managing Permissions with the Java Client

### 3.1 Main Dependencies

Import the Curator-related jars before use; the full dependencies are:

```xml
<dependencies>
    <dependency>
        <groupId>org.apache.curator</groupId>
        <artifactId>curator-framework</artifactId>
        <version>4.0.0</version>
    </dependency>
    <dependency>
        <groupId>org.apache.curator</groupId>
        <artifactId>curator-recipes</artifactId>
        <version>4.0.0</version>
    </dependency>
    <dependency>
        <groupId>org.apache.zookeeper</groupId>
        <artifactId>zookeeper</artifactId>
        <version>3.4.13</version>
    </dependency>
    <!-- unit-test dependencies -->
    <dependency>
        <groupId>junit</groupId>
        <artifactId>junit</artifactId>
        <version>4.12</version>
    </dependency>
</dependencies>
```
### 3.2 Permission-Management APIs

Example calls to Curator's APIs for setting, changing, and viewing permissions:

```java
public class AclOperation {

    private CuratorFramework client = null;
    private static final String zkServerPath = "192.168.0.226:2181";
    private static final String nodePath = "/hadoop/hdfs";

    @Before
    public void prepare() {
        RetryPolicy retryPolicy = new RetryNTimes(3, 5000);
        client = CuratorFrameworkFactory.builder()
                .authorization("digest", "heibai:123456".getBytes()) // equivalent to the addauth command
                .connectString(zkServerPath)
                .sessionTimeoutMs(10000).retryPolicy(retryPolicy)
                .namespace("workspace").build();
        client.start();
    }

    /**
     * Create a node and grant permissions on it
     */
    @Test
    public void createNodesWithAcl() throws Exception {
        List<ACL> aclList = new ArrayList<>();
        // hash the passwords
        String digest1 = DigestAuthenticationProvider.generateDigest("heibai:123456");
        String digest2 = DigestAuthenticationProvider.generateDigest("ying:123456");
        Id user01 = new Id("digest", digest1);
        Id user02 = new Id("digest", digest2);
        // grant all permissions
        aclList.add(new ACL(Perms.ALL, user01));
        // to combine permissions, join them with |, which here is the bitwise OR operator
        aclList.add(new ACL(Perms.DELETE | Perms.CREATE, user02));

        // create the node
        byte[] data = "abc".getBytes();
        client.create().creatingParentsIfNeeded()
                .withMode(CreateMode.PERSISTENT)
                .withACL(aclList, true)
                .forPath(nodePath, data);
    }

    /**
     * Set permissions on an existing node; note this replaces every ACL already on the node
     */
    @Test
    public void setAcl() throws Exception {
        String digest = DigestAuthenticationProvider.generateDigest("admin:admin");
        Id user = new Id("digest", digest);
        client.setACL()
                .withACL(Collections.singletonList(new ACL(Perms.READ | Perms.DELETE, user)))
                .forPath(nodePath);
    }

    /**
     * View permissions
     */
    @Test
    public void getAcl() throws Exception {
        List<ACL> aclList = client.getACL().forPath(nodePath);
        ACL acl = aclList.get(0);
        System.out.println(acl.getId().getId()
                + " has read and delete permissions: " + (acl.getPerms() == (Perms.READ | Perms.DELETE)));
    }

    @After
    public void destroy() {
        if (client != null) {
            client.close();
        }
    }
}
```

> Full source code is in this repository: https://github.com/heibaiying/BigData-Notes/tree/master/code/Zookeeper/curator
notes/Zookeeper_Java客户端Curator.md (new file, 316 lines)
@ -0,0 +1,316 @@
# Zookeeper Java Client - Apache Curator

## 1. Basic Dependencies

Curator is a Zookeeper client open-sourced by Netflix and now maintained by Apache. Compared with the native Zookeeper client, Curator offers a higher level of abstraction and richer functionality, and it is the most widely used Java client for Zookeeper. This article covers its basic usage. The project below is built with Maven and demonstrated through unit tests; the dependencies are:
```xml
<dependencies>
    <dependency>
        <groupId>org.apache.curator</groupId>
        <artifactId>curator-framework</artifactId>
        <version>4.0.0</version>
    </dependency>
    <dependency>
        <groupId>org.apache.curator</groupId>
        <artifactId>curator-recipes</artifactId>
        <version>4.0.0</version>
    </dependency>
    <dependency>
        <groupId>org.apache.zookeeper</groupId>
        <artifactId>zookeeper</artifactId>
        <version>3.4.13</version>
    </dependency>
    <!-- unit-test dependencies -->
    <dependency>
        <groupId>junit</groupId>
        <artifactId>junit</artifactId>
        <version>4.12</version>
    </dependency>
</dependencies>
```
> Full source code is in this repository: https://github.com/heibaiying/BigData-Notes/tree/master/code/Zookeeper/curator

## 2. Client Operations

### 2.1 Creating a Client Instance

Here `@Before` creates the client instance before each unit test runs, and `@After` closes the connection afterwards.
```java
public class BasicOperation {

    private CuratorFramework client = null;
    private static final String zkServerPath = "192.168.0.226:2181";
    private static final String nodePath = "/hadoop/yarn";

    @Before
    public void prepare() {
        // retry policy
        RetryPolicy retryPolicy = new RetryNTimes(3, 5000);
        client = CuratorFrameworkFactory.builder()
                .connectString(zkServerPath)
                .sessionTimeoutMs(10000).retryPolicy(retryPolicy)
                .namespace("workspace").build(); // once a namespace is set, every path the client touches is prefixed with /workspace
        client.start();
    }

    @After
    public void destroy() {
        if (client != null) {
            client.close();
        }
    }
}
```
### 2.2 Retry Policies

When connecting to the Zookeeper service, Curator offers several retry policies to cover different needs; all of them implement the `RetryPolicy` interface, as shown below:



These retry policy classes fall into two families:

+ RetryForever: keep retrying until the connection succeeds;
+ SleepingRetry: retry with a delay between attempts. Taking its subclass `ExponentialBackoffRetry` as an example, its constructor is:

```java
/**
 * @param baseSleepTimeMs initial wait time between retries
 * @param maxRetries      maximum number of retries
 * @param maxSleepMs      maximum sleep time between retries, in milliseconds
 */
ExponentialBackoffRetry(int baseSleepTimeMs, int maxRetries, int maxSleepMs)
```
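For example, a client using exponential backoff could be built as sketched below; the connection string and session timeout are the same illustrative values used elsewhere in this article:

```java
// Wait roughly 1s before the first retry, back off exponentially with each
// wait capped at 5s, and give up after 3 retries.
RetryPolicy retryPolicy = new ExponentialBackoffRetry(1000, 3, 5000);
CuratorFramework client = CuratorFrameworkFactory.builder()
        .connectString("192.168.0.226:2181")
        .sessionTimeoutMs(10000)
        .retryPolicy(retryPolicy)
        .build();
client.start();
```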
### 2.3 Checking the Service State

```java
@Test
public void getStatus() {
    CuratorFrameworkState state = client.getState();
    System.out.println("Is the service started: " + (state == CuratorFrameworkState.STARTED));
}
```
## 3. Node CRUD Operations

### 3.1 Creating a Node

```java
@Test
public void createNodes() throws Exception {
    byte[] data = "abc".getBytes();
    client.create().creatingParentsIfNeeded()
            .withMode(CreateMode.PERSISTENT) // node type
            .withACL(ZooDefs.Ids.OPEN_ACL_UNSAFE)
            .forPath(nodePath, data);
}
```
When creating a node you can specify its type; the types match native Zookeeper's and are all defined in the `CreateMode` enum:

```java
public enum CreateMode {
    // persistent node
    PERSISTENT(0, false, false),
    // persistent sequential node
    PERSISTENT_SEQUENTIAL(2, false, true),
    // ephemeral node
    EPHEMERAL(1, true, false),
    // ephemeral sequential node
    EPHEMERAL_SEQUENTIAL(3, true, true);
    ....
}
```
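For example, an ephemeral sequential node can be created as sketched below (path prefix and payload are illustrative); the server appends a monotonically increasing suffix and returns the actual path, and the node disappears when the session ends:

```java
// Sketch: same client setup as above; the parent /hadoop is assumed to exist.
String actualPath = client.create()
        .withMode(CreateMode.EPHEMERAL_SEQUENTIAL)
        .forPath("/hadoop/job-", "payload".getBytes());
System.out.println(actualPath); // e.g. /hadoop/job-0000000003
```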
### 3.2 Reading a Node

```java
@Test
public void getNode() throws Exception {
    Stat stat = new Stat();
    byte[] data = client.getData().storingStatIn(stat).forPath(nodePath);
    System.out.println("Node data: " + new String(data));
    System.out.println("Node stat: " + stat.toString());
}
```
As shown above, the node's metadata is wrapped in the `Stat` class, whose main fields are:

```java
public class Stat implements Record {
    private long czxid;
    private long mzxid;
    private long ctime;
    private long mtime;
    private int version;
    private int cversion;
    private int aversion;
    private long ephemeralOwner;
    private int dataLength;
    private int numChildren;
    private long pzxid;
    ...
}
```
The meaning of each field:

| **Field** | **Description** |
| -------------- | ------------------------------------------------------------ |
| czxid | transaction ID of the node's creation |
| ctime | time the node was created |
| mzxid | transaction ID of the node's last update |
| mtime | time of the node's last update |
| pzxid | transaction ID of the last change to the node's children |
| cversion | number of changes to the node's children |
| version | number of changes to the node's data |
| aversion | number of changes to the node's ACL |
| ephemeralOwner | session ID of the session that created the node, if it is ephemeral; 0 if the node is persistent |
| dataLength | length of the node's data |
| numChildren | current number of children of the node |
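As a small usage sketch under the same client setup as above, `ephemeralOwner` is an easy way to tell ephemeral nodes from persistent ones:

```java
Stat stat = new Stat();
client.getData().storingStatIn(stat).forPath(nodePath);
// a non-zero ephemeralOwner means the node is ephemeral
System.out.println("Ephemeral node: " + (stat.getEphemeralOwner() != 0));
```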
### 3.3 Listing Child Nodes

```java
@Test
public void getChildrenNodes() throws Exception {
    List<String> childNodes = client.getChildren().forPath("/hadoop");
    for (String s : childNodes) {
        System.out.println(s);
    }
}
```
### 3.4 Updating a Node

When updating you may pass a version number or omit it. Passing one behaves like an optimistic lock: the update only goes through when the version is correct.

```java
@Test
public void updateNode() throws Exception {
    byte[] newData = "defg".getBytes();
    client.setData().withVersion(0) // pass a version; if it is wrong the update is rejected with a BadVersion error
            .forPath(nodePath, newData);
}
```
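A common pattern, sketched below under the same setup, is to read the node's current version from its `Stat` and hand that version back on the write, so a concurrent update is detected instead of silently overwritten:

```java
Stat stat = new Stat();
client.getData().storingStatIn(stat).forPath(nodePath);
client.setData().withVersion(stat.getVersion()) // fails with BadVersion if the node changed in between
        .forPath(nodePath, "defg".getBytes());
```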
### 3.5 Deleting a Node

```java
@Test
public void deleteNodes() throws Exception {
    client.delete()
            .guaranteed()               // if the delete fails, keep retrying until it succeeds
            .deletingChildrenIfNeeded() // recursively delete children if there are any
            .withVersion(0)             // pass a version; if it is wrong the delete is rejected with a BadVersion error
            .forPath(nodePath);
}
```
### 3.6 Checking Whether a Node Exists

```java
@Test
public void existNode() throws Exception {
    // returns the node's Stat if it exists, null otherwise
    Stat stat = client.checkExists().forPath(nodePath + "/aa/bb/cc");
    System.out.println("Node exists: " + !(stat == null));
}
```
## 4. Watching Events

### 4.1 One-Shot Watches

As with native Zookeeper watches, a watcher registered via `usingWatcher` is one-shot: it fires once and is then discarded. Example:

```java
@Test
public void disposableWatch() throws Exception {
    client.getData().usingWatcher(new CuratorWatcher() {
        public void process(WatchedEvent event) {
            System.out.println("Node " + event.getPath() + " got event: " + event.getType());
        }
    }).forPath(nodePath);
    Thread.sleep(1000 * 1000); // sleep so the effect can be observed
}
```
### 4.2 Permanent Watches

Curator also provides an API for permanent watches, used as follows:

```java
@Test
public void permanentWatch() throws Exception {
    // wrap the node in a NodeCache; listeners registered on it apply to the node and are permanent
    NodeCache nodeCache = new NodeCache(client, nodePath);
    // usually true, meaning the node's current value is fetched and cached when the cache starts
    nodeCache.start(true);
    nodeCache.getListenable().addListener(new NodeCacheListener() {
        public void nodeChanged() {
            ChildData currentData = nodeCache.getCurrentData();
            if (currentData != null) {
                System.out.println("Node path: " + currentData.getPath() +
                        ", data: " + new String(currentData.getData()));
            }
        }
    });
    Thread.sleep(1000 * 1000); // sleep so the effect can be observed
}
```
### 4.3 Watching Child Nodes

Taking all children of `/hadoop` as an example:

```java
@Test
public void permanentChildrenNodesWatch() throws Exception {

    // the third argument controls whether node data is cached in addition to node state
    PathChildrenCache childrenCache = new PathChildrenCache(client, "/hadoop", true);
    /*
     * StartMode selects how the cache is initialized:
     * NORMAL: asynchronously
     * BUILD_INITIAL_CACHE: synchronously
     * POST_INITIALIZED_EVENT: asynchronously, firing an INITIALIZED event when done
     */
    childrenCache.start(StartMode.POST_INITIALIZED_EVENT);

    List<ChildData> childDataList = childrenCache.getCurrentData();
    System.out.println("Current children of the node:");
    childDataList.forEach(x -> System.out.println(x.getPath()));

    childrenCache.getListenable().addListener(new PathChildrenCacheListener() {
        public void childEvent(CuratorFramework client, PathChildrenCacheEvent event) {
            switch (event.getType()) {
                case INITIALIZED:
                    System.out.println("childrenCache initialization complete");
                    break;
                case CHILD_ADDED:
                    // note: this also fires for children that already existed, since they are added to the cache
                    System.out.println("Child added: " + event.getData().getPath());
                    break;
                case CHILD_REMOVED:
                    System.out.println("Child removed: " + event.getData().getPath());
                    break;
                case CHILD_UPDATED:
                    System.out.println("Path of modified child: " + event.getData().getPath());
                    System.out.println("New data: " + new String(event.getData().getData()));
                    break;
            }
        }
    });
    Thread.sleep(1000 * 1000); // sleep so the effect can be observed
}
```
@ -102,9 +102,9 @@ numChildren = 0

 | mZxid | transaction ID of the node's last update |
 | mtime | time of the node's last update |
 | pZxid | transaction ID of the last change to the node's children |
-| cversion | version number of the node's children |
-| dataVersion | version number of the node's data |
-| aclVersion | version number of the node's ACL |
+| cversion | number of changes to the node's children |
+| dataVersion | number of changes to the node's data |
+| aclVersion | number of changes to the node's ACL |
 | ephemeralOwner | session ID of the session that created the node, if it is ephemeral; 0 if the node is persistent |
 | dataLength | length of the node's data |
 | numChildren | current number of children of the node |
@ -211,153 +211,7 @@ WatchedEvent state:SyncConnected type:NodeChildrenChanged path:/hadoop
-## 4. Zookeeper Four-Letter Commands
+## 3. Zookeeper Four-Letter Commands

 | Command | Description |
 | ---- | ------------------------------------------------------------ |
pictures/curator-retry-policy.png (new binary file, 12 KiB; not shown)