diff --git a/Hadoop-Benchmark.md b/Hadoop-Benchmark.md
index f4c5099..991bd63 100644
--- a/Hadoop-Benchmark.md
+++ b/Hadoop-Benchmark.md
@@ -26,7 +26,7 @@ Then get the seaweedfs hadoop client jar.
```
cd share/hadoop/common/lib/
-wget https://oss.sonatype.org/service/local/repositories/releases/content/com/github/chrislusf/seaweedfs-hadoop2-client/3.30/seaweedfs-hadoop2-client-3.30.jar
+wget https://oss.sonatype.org/service/local/repositories/releases/content/com/github/chrislusf/seaweedfs-hadoop2-client/3.33/seaweedfs-hadoop2-client-3.33.jar
```
# TestDFSIO Benchmark
diff --git a/Hadoop-Compatible-File-System.md b/Hadoop-Compatible-File-System.md
index 26a73cf..8234578 100644
--- a/Hadoop-Compatible-File-System.md
+++ b/Hadoop-Compatible-File-System.md
@@ -10,12 +10,12 @@ $ mvn install
# build for hadoop2
$cd $GOPATH/src/github.com/seaweedfs/seaweedfs/other/java/hdfs2
$ mvn package
-$ ls -al target/seaweedfs-hadoop2-client-3.30.jar
+$ ls -al target/seaweedfs-hadoop2-client-3.33.jar
# build for hadoop3
$cd $GOPATH/src/github.com/seaweedfs/seaweedfs/other/java/hdfs3
$ mvn package
-$ ls -al target/seaweedfs-hadoop3-client-3.30.jar
+$ ls -al target/seaweedfs-hadoop3-client-3.33.jar
```
Maven
@@ -23,7 +23,7 @@ Maven
<groupId>com.github.chrislusf</groupId>
<artifactId>seaweedfs-hadoop3-client</artifactId>
- <version>3.30</version>
+ <version>3.33</version>
or
@@ -31,23 +31,23 @@ or
<groupId>com.github.chrislusf</groupId>
<artifactId>seaweedfs-hadoop2-client</artifactId>
- <version>3.30</version>
+ <version>3.33</version>
```
Or you can download the latest version from MavenCentral
* https://mvnrepository.com/artifact/com.github.chrislusf/seaweedfs-hadoop2-client
- * [seaweedfs-hadoop2-client-3.30.jar](https://oss.sonatype.org/service/local/repositories/releases/content/com/github/chrislusf/seaweedfs-hadoop2-client/3.30/seaweedfs-hadoop2-client-3.30.jar)
+ * [seaweedfs-hadoop2-client-3.33.jar](https://oss.sonatype.org/service/local/repositories/releases/content/com/github/chrislusf/seaweedfs-hadoop2-client/3.33/seaweedfs-hadoop2-client-3.33.jar)
* https://mvnrepository.com/artifact/com.github.chrislusf/seaweedfs-hadoop3-client
- * [seaweedfs-hadoop3-client-3.30.jar](https://oss.sonatype.org/service/local/repositories/releases/content/com/github/chrislusf/seaweedfs-hadoop3-client/3.30/seaweedfs-hadoop3-client-3.30.jar)
+ * [seaweedfs-hadoop3-client-3.33.jar](https://oss.sonatype.org/service/local/repositories/releases/content/com/github/chrislusf/seaweedfs-hadoop3-client/3.33/seaweedfs-hadoop3-client-3.33.jar)
# Test SeaweedFS on Hadoop
Suppose you are getting a new Hadoop installation. Here are the minimum steps to get SeaweedFS to run.
-You would need to start a weed filer first, build the seaweedfs-hadoop2-client-3.30.jar
-or seaweedfs-hadoop3-client-3.30.jar, and do the following:
+You would need to start a weed filer first, build the seaweedfs-hadoop2-client-3.33.jar
+or seaweedfs-hadoop3-client-3.33.jar, and do the following:
```
# optionally adjust hadoop memory allocation
@@ -60,12 +60,12 @@ $ echo "" > etc/hadoop/mapred-site.xml
# on hadoop2
$ bin/hdfs dfs -Dfs.defaultFS=seaweedfs://localhost:8888 \
-Dfs.seaweedfs.impl=seaweed.hdfs.SeaweedFileSystem \
- -libjars ./seaweedfs-hadoop2-client-3.30.jar \
+ -libjars ./seaweedfs-hadoop2-client-3.33.jar \
-ls /
# or on hadoop3
$ bin/hdfs dfs -Dfs.defaultFS=seaweedfs://localhost:8888 \
-Dfs.seaweedfs.impl=seaweed.hdfs.SeaweedFileSystem \
- -libjars ./seaweedfs-hadoop3-client-3.30.jar \
+ -libjars ./seaweedfs-hadoop3-client-3.33.jar \
-ls /
```
@@ -112,9 +112,9 @@ $ bin/hadoop classpath
# Copy SeaweedFS HDFS client jar to one of the folders
$ cd ${HADOOP_HOME}
# for hadoop2
-$ cp ./seaweedfs-hadoop2-client-3.30.jar share/hadoop/common/lib/
+$ cp ./seaweedfs-hadoop2-client-3.33.jar share/hadoop/common/lib/
# or for hadoop3
-$ cp ./seaweedfs-hadoop3-client-3.30.jar share/hadoop/common/lib/
+$ cp ./seaweedfs-hadoop3-client-3.33.jar share/hadoop/common/lib/
```
Now you can do this:
diff --git a/Run-Presto-on-SeaweedFS.md b/Run-Presto-on-SeaweedFS.md
index bfd0f0f..d13e1d8 100644
--- a/Run-Presto-on-SeaweedFS.md
+++ b/Run-Presto-on-SeaweedFS.md
@@ -5,10 +5,10 @@ The installation steps are divided into 2 steps:
* https://cwiki.apache.org/confluence/display/Hive/AdminManual+Metastore+Administration
### Configure Hive Metastore to support SeaweedFS
-1. Copy the seaweedfs-hadoop2-client-3.30.jar to hive lib directory,for example:
+1. Copy the seaweedfs-hadoop2-client-3.33.jar to the Hive lib directory, for example:
```
-cp seaweedfs-hadoop2-client-3.30.jar /opt/hadoop/share/hadoop/common/lib/
-cp seaweedfs-hadoop2-client-3.30.jar /opt/hive-metastore/lib/
+cp seaweedfs-hadoop2-client-3.33.jar /opt/hadoop/share/hadoop/common/lib/
+cp seaweedfs-hadoop2-client-3.33.jar /opt/hive-metastore/lib/
```
2. Modify core-site.xml
modify core-site.xml to support SeaweedFS, 30888 is the filer port
@@ -50,9 +50,9 @@ metastore.thrift.port is the access port exposed by the Hive Metadata service it
Follow instructions for installation of Presto:
* https://prestosql.io/docs/current/installation/deployment.html
### Configure Presto to support SeaweedFS
-1. Copy the seaweedfs-hadoop2-client-3.30.jar to Presto directory,for example:
+1. Copy the seaweedfs-hadoop2-client-3.33.jar to the Presto directory, for example:
```
-cp seaweedfs-hadoop2-client-3.30.jar /opt/presto-server-347/plugin/hive-hadoop2/
+cp seaweedfs-hadoop2-client-3.33.jar /opt/presto-server-347/plugin/hive-hadoop2/
```
2. Modify core-site.xml
diff --git a/SeaweedFS-Java-Client.md b/SeaweedFS-Java-Client.md
index de6ede2..aede495 100644
--- a/SeaweedFS-Java-Client.md
+++ b/SeaweedFS-Java-Client.md
@@ -13,7 +13,7 @@ $ mvn install
Gradle
```gradle
-implementation 'com.github.chrislusf:seaweedfs-client:3.30'
+implementation 'com.github.chrislusf:seaweedfs-client:3.33'
```
Maven
@@ -21,7 +21,7 @@ Maven
<groupId>com.github.chrislusf</groupId>
<artifactId>seaweedfs-client</artifactId>
- <version>3.30</version>
+ <version>3.33</version>
```
Or you can download the latest version from MavenCentral
diff --git a/run-HBase-on-SeaweedFS.md b/run-HBase-on-SeaweedFS.md
index a0e14c0..cb99357 100644
--- a/run-HBase-on-SeaweedFS.md
+++ b/run-HBase-on-SeaweedFS.md
@@ -1,7 +1,7 @@
# Installation for HBase
Two steps to run HBase on SeaweedFS
-1. Copy the seaweedfs-hadoop2-client-3.30.jar to `${HBASE_HOME}/lib`
+1. Copy the seaweedfs-hadoop2-client-3.33.jar to `${HBASE_HOME}/lib`
1. And add the following 2 properties in `${HBASE_HOME}/conf/hbase-site.xml`
```
diff --git a/run-Spark-on-SeaweedFS.md b/run-Spark-on-SeaweedFS.md
index ccec0cc..39b987c 100644
--- a/run-Spark-on-SeaweedFS.md
+++ b/run-Spark-on-SeaweedFS.md
@@ -11,12 +11,12 @@ To make these files visible to Spark, set HADOOP_CONF_DIR in $SPARK_HOME/conf/sp
## installation not inheriting from Hadoop cluster configuration
-Copy the seaweedfs-hadoop2-client-3.30.jar to all executor machines.
+Copy the seaweedfs-hadoop2-client-3.33.jar to all executor machines.
Add the following to spark/conf/spark-defaults.conf on every node running Spark
```
-spark.driver.extraClassPath=/path/to/seaweedfs-hadoop2-client-3.30.jar
-spark.executor.extraClassPath=/path/to/seaweedfs-hadoop2-client-3.30.jar
+spark.driver.extraClassPath=/path/to/seaweedfs-hadoop2-client-3.33.jar
+spark.executor.extraClassPath=/path/to/seaweedfs-hadoop2-client-3.33.jar
```
And modify the configuration at runtime:
@@ -37,8 +37,8 @@ And modify the configuration at runtime:
1. change the spark-defaults.conf
```
-spark.driver.extraClassPath=/Users/chris/go/src/github.com/seaweedfs/seaweedfs/other/java/hdfs2/target/seaweedfs-hadoop2-client-3.30.jar
-spark.executor.extraClassPath=/Users/chris/go/src/github.com/seaweedfs/seaweedfs/other/java/hdfs2/target/seaweedfs-hadoop2-client-3.30.jar
+spark.driver.extraClassPath=/Users/chris/go/src/github.com/seaweedfs/seaweedfs/other/java/hdfs2/target/seaweedfs-hadoop2-client-3.33.jar
+spark.executor.extraClassPath=/Users/chris/go/src/github.com/seaweedfs/seaweedfs/other/java/hdfs2/target/seaweedfs-hadoop2-client-3.33.jar
spark.hadoop.fs.seaweedfs.impl=seaweed.hdfs.SeaweedFileSystem
```
@@ -81,8 +81,8 @@ spark.history.fs.cleaner.enabled=true
spark.history.fs.logDirectory=seaweedfs://localhost:8888/spark2-history/
spark.eventLog.dir=seaweedfs://localhost:8888/spark2-history/
-spark.driver.extraClassPath=/Users/chris/go/src/github.com/seaweedfs/seaweedfs/other/java/hdfs2/target/seaweedfs-hadoop2-client-3.30.jar
-spark.executor.extraClassPath=/Users/chris/go/src/github.com/seaweedfs/seaweedfs/other/java/hdfs2/target/seaweedfs-hadoop2-client-3.30.jar
+spark.driver.extraClassPath=/Users/chris/go/src/github.com/seaweedfs/seaweedfs/other/java/hdfs2/target/seaweedfs-hadoop2-client-3.33.jar
+spark.executor.extraClassPath=/Users/chris/go/src/github.com/seaweedfs/seaweedfs/other/java/hdfs2/target/seaweedfs-hadoop2-client-3.33.jar
spark.hadoop.fs.seaweedfs.impl=seaweed.hdfs.SeaweedFileSystem
spark.hadoop.fs.defaultFS=seaweedfs://localhost:8888
```