HBase Summary


HBase Installation and Configuration

May 16, 2015, 10:44

[1] Install the JDK (user: root)

Create the directory "/usr/share/java_1.6" and upload jdk-6u45-linux-x64.bin into it. Then run:

cd /usr/share/java_1.6
chmod +x jdk-6u45-linux-x64.bin
./jdk-6u45-linux-x64.bin

[2] Add the Java environment variables (user: etl)

Edit "/home/etl/.bash_profile" and add the following three lines:

export JAVA_HOME=/usr/share/java_1.6/jdk1.6.0_45
export PATH=$JAVA_HOME/bin:$PATH
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
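To confirm the variables took effect, reload the profile and check; the expected output lines follow from the 1.6.0_45 build installed above:

source /home/etl/.bash_profile
java -version     # should report java version "1.6.0_45"
echo $JAVA_HOME   # should print /usr/share/java_1.6/jdk1.6.0_45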

[3] Install HBase (user: etl)

Upload hbase-0.98.7-hadoop2-bin.tar.gz to "/home/etl/_jyy/" and run:

cd /home/etl/_jyy/
tar xfz hbase-0.98.7-hadoop2-bin.tar.gz

[4] Configure HBase (user: etl; the two data directories below must be created manually)

Edit /home/etl/_jyy/hbase-0.98.7-hadoop2/conf/hbase-site.xml and set:

<configuration>
  <property>
    <name>hbase.rootdir</name>
    <value>/home/etl/_jyy/BIDATA/hadoop/hbase_data/hbase/</value>
  </property>
  <property>
    <name>hbase.zookeeper.property.dataDir</name>
    <value>/home/etl/_jyy/BIDATA/hadoop/hbase_data/zookeeper/</value>
  </property>
</configuration>
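The heading above notes that these two directories must be created by hand; a minimal way to do that:

mkdir -p /home/etl/_jyy/BIDATA/hadoop/hbase_data/hbase/
mkdir -p /home/etl/_jyy/BIDATA/hadoop/hbase_data/zookeeper/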

Also edit "/home/etl/.bash_profile" and add the following alias:

alias hbase=\

[5] Start HBase (user: etl)

cd /home/etl/_jyy/hbase-0.98.7-hadoop2/bin/

./start-hbase.sh
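Once start-hbase.sh returns, the daemon can be checked with jps; in this standalone setup a single JVM hosts the master, region server and ZooKeeper:

jps
# expected to list an HMaster process (plus Jps itself)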

[6] Stop HBase (user: etl)

First set the following in "/hbase/hbase-0.98.7-hadoop2/conf/hbase-env.sh" and create the directory it points to:

export HBASE_PID_DIR=/hbase/hbase-0.98.7-hadoop2/pids

Then run:

cd /hbase/hbase-0.98.7-hadoop2/bin/
./stop-hbase.sh

HBase Table Operations - DDL

May 16, 2015, 10:51

[The Oracle model]

Model name: DM Group Snapshot TOP35 Customers (TM_CORP_SNMBR_TOP35_M)
Erwin: 广东移动市公司数据集市项目-物理模型-广州.ER1
Subject Area:

====================================================================================================
No. | Column        | Data Type    | PK  | NULL     | Description                      | Remarks
====================================================================================================
1   | STAT_MO       | NUMBER(10)   | Yes | NOT NULL | Statistics month                 |
2   | LOC_LVL1_CD   | VARCHAR2(20) | Yes | NOT NULL | Attribution level 1              |
3   | DATA_TYP_CD   | NUMBER(10)   | Yes | NOT NULL | Data type code                   |
4   | SNAP_USR_CNT  | NUMBER(14)   | No  | NULL     | Snapshot user count              | metric value
5   | RETN_USR_CNT  | NUMBER(14)   | No  | NULL     | Retained customer count          |
6   | SNAP_ARPU     | NUMBER(16,4) | No  | NULL     | Snapshot ARPU                    |
7   | RETN_ARPU     | NUMBER(16,4) | No  | NULL     | Retained ARPU                    |
8   | G4_PNTRN_RT   | NUMBER(14,4) | No  | NULL     | 4G penetration rate              |
9   | BIND_PNTRN_RT | NUMBER(14,4) | No  | NULL     | Bundling penetration rate        |
10  | SPAY_PNTRN_RT | NUMBER(14,4) | No  | NULL     | Unified-payment penetration rate |
====================================================================================================

[Test environment]
Test machine: 10.200.1.5   User: hadoop   Password: 123456

Run steps [1]-[6] inside the HBase shell:

[1] Enter the shell: hbase shell

[hadoop@bogon ~]$ hbase shell

2014-11-17 10:39:16,520 INFO [main] Configuration.deprecation: hadoop.native.lib is deprecated. Instead, use io.native.lib.available

HBase Shell; enter 'help<RETURN>' for list of supported commands.
Type "exit<RETURN>" to leave the HBase Shell

Version 0.98.3-hadoop2, rd5e65a9144e315bb0a964e7730871af32f5018d5, Sat May 31 19:56:09 PDT 2014

hbase(main):001:0>

[2] Disable the table: disable 'TM_CORP_SNMBR_TOP35_M'

hbase(main):001:0> disable 'TM_CORP_SNMBR_TOP35_M'

2014-11-17 10:59:04,516 WARN [main] util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
0 row(s) in 2.2420 seconds

[3] Drop the table: drop 'TM_CORP_SNMBR_TOP35_M'

Dropping takes two steps: first disable the table, then drop it (e.g. disable 't1', then drop 't1'). Here, with the table already disabled in step [2]:

hbase(main):001:0> drop 'TM_CORP_SNMBR_TOP35_M'

2014-11-17 10:59:54,311 WARN [main] util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
0 row(s) in 1.2140 seconds

[4] Create the table: create 'TM_CORP_SNMBR_TOP35_M','CF'

hbase(main):001:0> create 'TM_CORP_SNMBR_TOP35_M','CF'

2014-11-17 11:01:22,580 WARN [main] util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
0 row(s) in 1.0910 seconds

=> Hbase::Table - TM_CORP_SNMBR_TOP35_M

[5] Delete a column family (m_id):

First create a table with several column families:

create 'test_hadoop','m_id','address','info'

Then disable the table and drop the family (note: deleting a column family also discards any data already stored in it); this example session uses a table named 'member':

hbase(main):030:0> disable 'member'
hbase(main):033:0> is_enabled 'member'
hbase(main):034:0> alter 'member',{NAME=>'m_id',METHOD=>'delete'}
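After the alter completes, bring the table back online and confirm the family is gone; a quick check continuing the 'member' example (the prompt counters are illustrative):

hbase(main):035:0> enable 'member'
hbase(main):036:0> describe 'member'      # 'm_id' should no longer appear in the description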

[6] Check whether the table exists: list 'TM_CORP_SNMBR_TOP35_M'

hbase(main):002:0> list 'TM_CORP_SNMBR_TOP35_M'

TABLE

TM_CORP_SNMBR_TOP35_M
1 row(s) in 0.1010 seconds

=> ["TM_CORP_SNMBR_TOP35_M"]

Exit the shell and run steps [7]-[10] from the OS command line:

[7] Create the test data file "TM_CORP_SNMBR_TOP35_M.txt":

201410,GZ01,1,100,200,1.1,1.1,1.1,1.1,1.1
201411,GZ02,1,100,200,1.1,1.1,1.1,1.1,1.1
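One way to produce the file in place, as a sketch (the input directory matches the ImportTsv command below):

cat > /BIDATA/hadoop/jyy/input/TM_CORP_SNMBR_TOP35_M.txt <<'EOF'
201410,GZ01,1,100,200,1.1,1.1,1.1,1.1,1.1
201411,GZ02,1,100,200,1.1,1.1,1.1,1.1,1.1
EOF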

Alternatively, insert a few records with put:

put 'test_hadoop','scutshuxue','info:age','24'
put 'test_hadoop','scutshuxue','info:birthday','1987-06-17'
put 'test_hadoop','scutshuxue','info:company','alibaba'

[8] Remove any previous output files: rm -rf /BIDATA/hadoop/jyy/output

[9] Generate the HFiles

hbase org.apache.hadoop.hbase.mapreduce.ImportTsv -Dimporttsv.columns=CF:STAT_MO,CF:LOC_LVL1_CD,CF:DATA_TYP_CD,CF:SNAP_USR_CNT,CF:RETN_USR_CNT,CF:SNAP_ARPU,CF:RETN_ARPU,CF:G4_PNTRN_RT,CF:BIND_PNTRN_RT,CF:SPAY_PNTRN_RT -Dimporttsv.rowkey.columns=CF:STAT_MO '-Dimporttsv.separator=,' -Dimporttsv.mapper.class=org.apache.hadoop.hbase.mapreduce.TsvImporterTextMapper -Dimporttsv.bulk.output=/BIDATA/hadoop/jyy/output TM_CORP_SNMBR_TOP35_M /BIDATA/hadoop/jyy/input/TM_CORP_SNMBR_TOP35_M.txt

[hadoop@bogon ~]$ hbase org.apache.hadoop.hbase.mapreduce.ImportTsv -Dimporttsv.columns=CF:STAT_MO,CF:LOC_LVL1_CD,CF:DATA_TYP_CD,CF:SNAP_USR_CNT,CF:RETN_USR_CNT,CF:SNAP_ARPU,CF:RETN_ARPU,CF:G4_PNTRN_RT,CF:BIND_PNTRN_RT,CF:SPAY_PNTRN_RT -Dimporttsv.rowkey.columns=CF:STAT_MO '-Dimporttsv.separator=,' -Dimporttsv.mapper.class=org.apache.hadoop.hbase.mapreduce.TsvImporterTextMapper -Dimporttsv.bulk.output=/BIDATA/hadoop/jyy/output TM_CORP_SNMBR_TOP35_M /BIDATA/hadoop/jyy/input/TM_CORP_SNMBR_TOP35_M.txt
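Reading the options: importtsv.columns maps the ten comma-separated input fields, in order, to qualifiers under family CF; '-Dimporttsv.separator=,' switches the parser from the default tab to comma; importtsv.bulk.output makes the job emit HFiles into the given directory instead of issuing live puts; importtsv.rowkey.columns and the TsvImporterTextMapper are taken as-is from the run above. A line-broken restatement for readability (a sketch of the same command, not a different one):

hbase org.apache.hadoop.hbase.mapreduce.ImportTsv \
  -Dimporttsv.columns=CF:STAT_MO,CF:LOC_LVL1_CD,CF:DATA_TYP_CD,CF:SNAP_USR_CNT,CF:RETN_USR_CNT,CF:SNAP_ARPU,CF:RETN_ARPU,CF:G4_PNTRN_RT,CF:BIND_PNTRN_RT,CF:SPAY_PNTRN_RT \
  -Dimporttsv.rowkey.columns=CF:STAT_MO \
  '-Dimporttsv.separator=,' \
  -Dimporttsv.mapper.class=org.apache.hadoop.hbase.mapreduce.TsvImporterTextMapper \
  -Dimporttsv.bulk.output=/BIDATA/hadoop/jyy/output \
  TM_CORP_SNMBR_TOP35_M /BIDATA/hadoop/jyy/input/TM_CORP_SNMBR_TOP35_M.txt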

2014-11-17 11:30:48,411 WARN [main] util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2014-11-17 11:30:48,514 INFO [main] zookeeper.ZooKeeper: Client environment:zookeeper.version=3.4.6-1569965, built on 02/20/2014 09:09 GMT
2014-11-17 11:30:48,514 INFO [main] zookeeper.ZooKeeper: Client environment:host.name=bogon
(remaining ZooKeeper client-environment lines, including the full java.class.path listing, elided)
2014-11-17 11:30:48,516 INFO [main] zookeeper.ZooKeeper: Initiating client connection, connectString=localhost:2181 sessionTimeout=90000 watcher=hconnection-0x15d0e248, quorum=localhost:2181, baseZNode=/hbase
2014-11-17 11:30:48,534 INFO [main] zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x15d0e248 connecting to ZooKeeper ensemble=localhost:2181
2014-11-17 11:30:48,545 INFO [main-SendThread(localhost.localdomain:2181)] zookeeper.ClientCnxn: Session establishment complete on server localhost.localdomain/127.0.0.1:2181, sessionid = 0x1495590492b0059, negotiated timeout = 40000
2014-11-17 11:30:48,828 INFO [main] Configuration.deprecation: mapred.job.name is deprecated. Instead, use mapreduce.job.name
2014-11-17 11:30:48,844 DEBUG [main] catalog.CatalogTracker: Starting catalog tracker org.apache.hadoop.hbase.catalog.CatalogTracker@18218b5
(a second ZooKeeper connection for the catalog tracker, sessionid 0x1495590492b005a, elided)
2014-11-17 11:30:48,874 INFO [main] Configuration.deprecation: hadoop.native.lib is deprecated. Instead, use io.native.lib.available
2014-11-17 11:30:49,337 DEBUG [main] catalog.CatalogTracker: Stopping catalog tracker org.apache.hadoop.hbase.catalog.CatalogTracker@18218b5
2014-11-17 11:30:49,338 INFO [main] zookeeper.ZooKeeper: Session: 0x1495590492b005a closed
2014-11-17 11:30:49,338 INFO [main-EventThread] zookeeper.ClientCnxn: EventThread shut down

2014-11-17 11:30:49,356 INFO [main] mapreduce.HFileOutputFormat2: Looking up current regions for table TM_CORP_SNMBR_TOP35_M
2014-11-17 11:30:49,364 INFO [main] mapreduce.HFileOutputFormat2: Configuring 1 reduce partitions to match current region count
2014-11-17 11:30:49,365 INFO [main] mapreduce.HFileOutputFormat2: Writing partition information to /tmp/partitions_572ed8a5-d06b-40d3-ab48-cccb859412c4
2014-11-17 11:30:49,427 INFO [main] compress.CodecPool: Got brand-new compressor [.deflate]

2014-11-17 11:30:49,500 DEBUG [main] mapreduce.TableMapReduceUtil: For class org.apache.hadoop.hbase.HConstants, using jar /BIDATA/hadoop/hbase/lib/hbase-common-0.98.3-hadoop2.jar
2014-11-17 11:30:49,501 DEBUG [main] mapreduce.TableMapReduceUtil: For class org.apache.hadoop.hbase.protobuf.generated.ClientProtos, using jar /BIDATA/hadoop/hbase/lib/hbase-protocol-0.98.3-hadoop2.jar
(further TableMapReduceUtil jar-resolution DEBUG lines for the remaining hbase, hadoop, zookeeper, netty, protobuf, guava and htrace classes elided; the block is logged twice)
2014-11-17 11:30:49,515 INFO [main] mapreduce.HFileOutputFormat2: Incremental table TM_CORP_SNMBR_TOP35_M output configured.

2014-11-17 11:30:49,559 INFO [main] Configuration.deprecation: session.id is deprecated. Instead, use dfs.metrics.session-id
2014-11-17 11:30:49,559 INFO [main] jvm.JvmMetrics: Initializing JVM Metrics with processName=JobTracker, sessionId=
2014-11-17 11:30:49,815 INFO [main] input.FileInputFormat: Total input paths to process : 1
2014-11-17 11:30:49,840 INFO [main] mapreduce.JobSubmitter: number of splits:1
(a long run of Configuration.deprecation notices for the old mapred.* property names elided)

2014-11-17 11:30:49,986 INFO [main] mapreduce.JobSubmitter: Submitting tokens for job: job_local1718376827_0001

2014-11-17 11:30:50,011 WARN [main] conf.Configuration: file:/tmp/hadoop-hadoop/mapred/staging/hadoop1718376827/.staging/job_local1718376827_0001/job.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.retry.interval; Ignoring.

2014-11-17 11:30:50,016 WARN [main] conf.Configuration: file:/tmp/hadoop-hadoop/mapred/staging/hadoop1718376827/.staging/job_local1718376827_0001/job.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.attempts; Ignoring.

2014-11-17 11:30:50,126 INFO [main] mapred.LocalDistributedCacheManager: Creating symlink: /tmp/hadoop-hadoop/mapred/local/1416195050057/protobuf-java-2.5.0.jar <- /BIDATA/hadoop/protobuf-java-2.5.0.jar
2014-11-17 11:30:50,131 INFO [main] mapred.LocalDistributedCacheManager: Localized file:/BIDATA/hadoop/hbase/lib/protobuf-java-2.5.0.jar as file:/tmp/hadoop-hadoop/mapred/local/1416195050057/protobuf-java-2.5.0.jar
(matching symlink/localization lines for zookeeper, hbase-server, hbase-hadoop-compat, hbase-protocol, hbase-common, netty, guava, hadoop-common, hbase-client, hadoop-mapreduce-client-core and htrace-core elided)
2014-11-17 11:30:50,294 WARN [main] conf.Configuration: file:/tmp/hadoop-hadoop/mapred/local/localRunner/hadoop/job_local1718376827_0001/job_local1718376827_0001.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.retry.interval; Ignoring.
2014-11-17 11:30:50,298 WARN [main] conf.Configuration: file:/tmp/hadoop-hadoop/mapred/local/localRunner/hadoop/job_local1718376827_0001/job_local1718376827_0001.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.attempts; Ignoring.
2014-11-17 11:30:50,303 INFO [main] mapreduce.Job: The url to track the job: http://localhost:8080/

2014-11-17 11:30:50,304 INFO [main] mapreduce.Job: Running job: job_local1718376827_0001
2014-11-17 11:30:50,305 INFO [Thread-32] mapred.LocalJobRunner: OutputCommitter set in config null
2014-11-17 11:30:50,314 INFO [Thread-32] mapred.LocalJobRunner: OutputCommitter is org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter
2014-11-17 11:30:50,338 INFO [Thread-32] mapred.LocalJobRunner: Waiting for map tasks
2014-11-17 11:30:50,338 INFO [LocalJobRunner Map Task Executor #0] mapred.LocalJobRunner: Starting task: attempt_local1718376827_0001_m_000000_0
2014-11-17 11:30:50,365 INFO [LocalJobRunner Map Task Executor #0] mapred.Task: Using ResourceCalculatorProcessTree : [ ]
2014-11-17 11:30:50,368 INFO [LocalJobRunner Map Task Executor #0] mapred.MapTask: Processing split: file:/BIDATA/hadoop/jyy/input/TM_CORP_SNMBR_TOP35_M.txt:0+84
2014-11-17 11:30:50,377 INFO [LocalJobRunner Map Task Executor #0] mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
2014-11-17 11:30:50,405 INFO [LocalJobRunner Map Task Executor #0] mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
2014-11-17 11:30:50,405 INFO [LocalJobRunner Map Task Executor #0] mapred.MapTask: mapreduce.task.io.sort.mb: 100
2014-11-17 11:30:50,405 INFO [LocalJobRunner Map Task Executor #0] mapred.MapTask: soft limit at 83886080
2014-11-17 11:30:50,405 INFO [LocalJobRunner Map Task Executor #0] mapred.MapTask: bufstart = 0; bufvoid = 104857600
2014-11-17 11:30:50,405 INFO [LocalJobRunner Map Task Executor #0] mapred.MapTask: kvstart = 26214396; length = 6553600
2014-11-17 11:30:50,428 INFO [LocalJobRunner Map Task Executor #0] mapred.MapTask: Starting flush of map output
2014-11-17 11:30:50,428 INFO [LocalJobRunner Map Task Executor #0] mapred.MapTask: Spilling map output
2014-11-17 11:30:50,428 INFO [LocalJobRunner Map Task Executor #0] mapred.MapTask: bufstart = 0; bufend = 104; bufvoid = 104857600
2014-11-17 11:30:50,428 INFO [LocalJobRunner Map Task Executor #0] mapred.MapTask: kvstart = 26214396(104857584); kvend = 26214392(104857568); length = 5/6553600
2014-11-17 11:30:50,434 INFO [LocalJobRunner Map Task Executor #0] mapred.MapTask: Finished spill 0
2014-11-17 11:30:50,436 INFO [LocalJobRunner Map Task Executor #0] mapred.Task: Task:attempt_local1718376827_0001_m_000000_0 is done. And is in the process of committing
2014-11-17 11:30:50,442 INFO [LocalJobRunner Map Task Executor #0] mapred.Task: Task 'attempt_local1718376827_0001_m_000000_0' done.
2014-11-17 11:30:50,442 INFO [LocalJobRunner Map Task Executor #0] mapred.LocalJobRunner: Finishing task: attempt_local1718376827_0001_m_000000_0
2014-11-17 11:30:50,442 INFO [Thread-32] mapred.LocalJobRunner: Map task executor complete.
2014-11-17 11:30:50,451 INFO [Thread-32] mapred.Task: Using ResourceCalculatorProcessTree : [ ]
2014-11-17 11:30:50,455 INFO [Thread-32] mapred.Merger: Merging 1 sorted segments
2014-11-17 11:30:50,459 INFO [Thread-32] mapred.Merger: Down to the last merge-pass, with 1 segments left of total size: 98 bytes
2014-11-17 11:30:50,463 INFO [Thread-32] Configuration.deprecation: mapred.skip.on is deprecated. Instead, use mapreduce.job.skiprecords
2014-11-17 11:30:50,477 INFO [Thread-32] util.ChecksumType: Checksum using org.apache.hadoop.util.PureJavaCrc32
2014-11-17 11:30:50,478 INFO [Thread-32] util.ChecksumType: Checksum can use org.apache.hadoop.util.PureJavaCrc32C
2014-11-17 11:30:50,551 INFO [Thread-32] mapred.Task: Task:attempt_local1718376827_0001_r_000000_0 is done. And is in the process of committing
2014-11-17 11:30:50,552 INFO [Thread-32] mapred.Task: Task attempt_local1718376827_0001_r_000000_0 is allowed to commit now
2014-11-17 11:30:50,553 INFO [Thread-32] output.FileOutputCommitter: Saved output of task 'attempt_local1718376827_0001_r_000000_0' to file:/BIDATA/hadoop/jyy/output/_temporary/0/task_local1718376827_0001_r_000000
2014-11-17 11:30:50,554 INFO [Thread-32] mapred.LocalJobRunner: Read 9 entries of class java.util.TreeSet(984) > reduce
2014-11-17 11:30:50,554 INFO [Thread-32] mapred.Task: Task 'attempt_local1718376827_0001_r_000000_0' done.

2014-11-17 11:30:51,306 INFO [main] mapreduce.Job: Job job_local1718376827_0001 running in uber mode : false
2014-11-17 11:30:51,308 INFO [main] mapreduce.Job: map 100% reduce 100%
2014-11-17 11:30:51,310 INFO [main] mapreduce.Job: Job job_local1718376827_0001 completed successfully

2014-11-17 11:30:51,328 INFO [main] mapreduce.Job: Counters: 28
        File System Counters
                FILE: Number of bytes read=40212818
                FILE: Number of bytes written=41040346
                FILE: Number of read operations=0
                FILE: Number of large read operations=0
                FILE: Number of write operations=0
        Map-Reduce Framework
                Map input records=2
                Map output records=2
                Map output bytes=104
                Map output materialized bytes=114
                Input split bytes=120
                Combine input records=0
                Combine output records=0
                Reduce input groups=2
                Reduce shuffle bytes=0
                Reduce input records=2
                Reduce output records=18
                Spilled Records=4
                Shuffled Maps =0
                Failed Shuffles=0
                Merged Map outputs=0
                GC time elapsed (ms)=0
                CPU time spent (ms)=0
                Physical memory (bytes) snapshot=0
                Virtual memory (bytes) snapshot=0
                Total committed heap usage (bytes)=505151488
        ImportTsv
                Bad Lines=0
        File Input Format Counters
                Bytes Read=84
        File Output Format Counters
                Bytes Written=1890

[10] Load the data with completebulkload

hbase org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles /BIDATA/hadoop/jyy/output TM_CORP_SNMBR_TOP35_M

[hadoop@bogon ~]$ hbase org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles /BIDATA/hadoop/jyy/output TM_CORP_SNMBR_TOP35_M

2014-11-17 11:42:10,930 WARN [main] util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
(ZooKeeper client-environment dump, identical in form to the ImportTsv run above, elided)
2014-11-17 11:42:11,034 INFO [main] zookeeper.ZooKeeper: Initiating client connection, connectString=localhost:2181 sessionTimeout=90000 watcher=hconnection-0xf1afec5, quorum=localhost:2181, baseZNode=/hbase
2014-11-17 11:42:11,067 INFO [main-SendThread(localhost.localdomain:2181)] zookeeper.ClientCnxn: Session establishment complete on server localhost.localdomain/127.0.0.1:2181, sessionid = 0x1495590492b005b, negotiated timeout = 40000
2014-11-17 11:42:11,368 DEBUG [main] catalog.CatalogTracker: Starting catalog tracker org.apache.hadoop.hbase.catalog.CatalogTracker@5cc5450c
2014-11-17 11:42:11,396 INFO [main] Configuration.deprecation: hadoop.native.lib is deprecated. Instead, use io.native.lib.available
2014-11-17 11:42:11,861 DEBUG [main] catalog.CatalogTracker: Stopping catalog tracker org.apache.hadoop.hbase.catalog.CatalogTracker@5cc5450c
2014-11-17 11:42:11,862 INFO [main] zookeeper.ZooKeeper: Session: 0x1495590492b005c closed
2014-11-17 11:42:11,862 INFO [main-EventThread] zookeeper.ClientCnxn: EventThread shut down

2014-11-17 11:42:11,878 WARN [main] mapreduce.LoadIncrementalHFiles: Skipping non-directory file:/BIDATA/hadoop/jyy/output/_SUCCESS
2014-11-17 11:42:11,957 INFO [LoadIncrementalHFiles-0] util.ChecksumType: Checksum using org.apache.hadoop.util.PureJavaCrc32
2014-11-17 11:42:11,959 INFO [LoadIncrementalHFiles-0] util.ChecksumType: Checksum can use org.apache.hadoop.util.PureJavaCrc32C
2014-11-17 11:42:12,034 INFO [LoadIncrementalHFiles-0] mapreduce.LoadIncrementalHFiles: Trying to load hfile=file:/BIDATA/hadoop/jyy/output/CF/bd37bf2de0864a19b2e5e112ac48e2b0 first=201410 last=201411
2014-11-17 11:42:12,067 DEBUG [LoadIncrementalHFiles-1] mapreduce.LoadIncrementalHFiles: Going to connect to server region=TM_CORP_SNMBR_TOP35_M,,1416193283144.73f952ca0bc2d9e86647e99490825acf., hostname=bogon,59960,1414479760828, seqNum=1 for row with hfile group [{[B@bfcd37c,file:/BIDATA/hadoop/jyy/output/CF/bd37bf2de0864a19b2e5e112ac48e2b0}]

==========================================

HBase Table Operations - DML

May 16, 2015, 10:49

[1] List all HBase tables: list

hbase(main):002:0> list 'TM_CORP_SNMBR_TOP35_M'

TABLE

TM_CORP_SNMBR_TOP35_M
1 row(s) in 0.1010 seconds

=> ["TM_CORP_SNMBR_TOP35_M"]

hbase(main):003:0> list

TABLE
HB_TR_AREA
HB_TR_AREA_T
HB_TR_AREA_ZYH
TM_CORP_SNMBR_TOP35_M
TestTable
hb_dzl
hbase_java
hbase_lqm_java
lqm_hbase_1
member
member1
test_b
12 row(s) in 0.0280 seconds

=> ["HB_TR_AREA", "HB_TR_AREA_T", "HB_TR_AREA_ZYH", "TM_CORP_SNMBR_TOP35_M", "TestTable", "hb_dzl", "hbase_java", "hbase_lqm_java", "lqm_hbase_1", "member", "member1", "test_b"]

[2] Describe the table: describe 'TM_CORP_SNMBR_TOP35_M'

hbase(main):001:0> describe 'TM_CORP_SNMBR_TOP35_M'

2014-11-17 14:54:28,428 WARN [main] util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable

DESCRIPTION                                                                                         ENABLED
 'TM_CORP_SNMBR_TOP35_M', {NAME => 'CF', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'ROW',       true
 REPLICATION_SCOPE => '0', VERSIONS => '1', COMPRESSION => 'NONE', MIN_VERSIONS => '0', TTL =>
 'FOREVER', KEEP_DELETED_CELLS => 'false', BLOCKSIZE => '65536', IN_MEMORY => 'false',
 BLOCKCACHE => 'true'}
1 row(s) in 1.0880 seconds

[3] Count the rows in the table: count 'TM_CORP_SNMBR_TOP35_M'

hbase(main):001:0> count 'TM_CORP_SNMBR_TOP35_M'

2014-11-17 14:57:35,665 WARN [main] util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2 row(s) in 0.4370 seconds
=> 2
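count scans the entire table, so it gets slow as the table grows; the shell accepts tuning options, e.g. (a sketch, prompt counter illustrative; INTERVAL controls how often progress is printed, CACHE how many rows are fetched per RPC):

hbase(main):002:0> count 'TM_CORP_SNMBR_TOP35_M', INTERVAL => 10000, CACHE => 1000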

[4] Scan the whole table: scan 'TM_CORP_SNMBR_TOP35_M'

hbase(main):001:0> scan 'TM_CORP_SNMBR_TOP35_M'

2014-11-17 14:18:18,073 WARN [main] util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable

ROW        COLUMN+CELL
 201410    column=CF:BIND_PNTRN_RT, timestamp=1416195048037, value=1.1
 201410    column=CF:DATA_TYP_CD, timestamp=1416195048037, value=1
 201410    column=CF:G4_PNTRN_RT, timestamp=1416195048037, value=1.1
 201410    column=CF:LOC_LVL1_CD, timestamp=1416195048037, value=GZ01
 201410    column=CF:RETN_ARPU, timestamp=1416195048037, value=1.1
 201410    column=CF:RETN_USR_CNT, timestamp=1416195048037, value=200
 201410    column=CF:SNAP_ARPU, timestamp=1416195048037, value=1.1
 201410    column=CF:SNAP_USR_CNT, timestamp=1416195048037, value=100
 201410    column=CF:SPAY_PNTRN_RT, timestamp=1416195048037, value=1.1
 201411    column=CF:BIND_PNTRN_RT, timestamp=1416195048037, value=1.1
 201411    column=CF:DATA_TYP_CD, timestamp=1416195048037, value=1
 201411    column=CF:G4_PNTRN_RT, timestamp=1416195048037, value=1.1
 201411    column=CF:LOC_LVL1_CD, timestamp=1416195048037, value=GZ02
 201411    column=CF:RETN_ARPU, timestamp=1416195048037, value=1.1
 201411    column=CF:RETN_USR_CNT, timestamp=1416195048037, value=200
 201411    column=CF:SNAP_ARPU, timestamp=1416195048037, value=1.1
 201411    column=CF:SNAP_USR_CNT, timestamp=1416195048037, value=100
 201411    column=CF:SPAY_PNTRN_RT, timestamp=1416195048037, value=1.1
2 row(s) in 0.4720 seconds

[5] Limit the scan to one row: scan 'TM_CORP_SNMBR_TOP35_M', LIMIT => 1

hbase(main):002:0> scan 'TM_CORP_SNMBR_TOP35_M',LIMIT =>1

ROW        COLUMN+CELL
 201410    column=CF:BIND_PNTRN_RT, timestamp=1416195048037, value=1.1
 201410    column=CF:DATA_TYP_CD, timestamp=1416195048037, value=1
 201410    column=CF:G4_PNTRN_RT, timestamp=1416195048037, value=1.1
 201410    column=CF:LOC_LVL1_CD, timestamp=1416195048037, value=GZ01
 201410    column=CF:RETN_ARPU, timestamp=1416195048037, value=1.1
 201410    column=CF:RETN_USR_CNT, timestamp=1416195048037, value=200
 201410    column=CF:SNAP_ARPU, timestamp=1416195048037, value=1.1
 201410    column=CF:SNAP_USR_CNT, timestamp=1416195048037, value=100
 201410    column=CF:SPAY_PNTRN_RT, timestamp=1416195048037, value=1.1
1 row(s) in 0.0290 seconds
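Because the row key here is the statistics month, a key-range scan selects a month window; a sketch (STARTROW is inclusive, STOPROW exclusive, prompt counter illustrative):

hbase(main):003:0> scan 'TM_CORP_SNMBR_TOP35_M', {STARTROW => '201410', STOPROW => '201412'}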

[6] Limit to one column family and one row: scan 'TM_CORP_SNMBR_TOP35_M', COLUMNS => 'CF', LIMIT => 1

hbase(main):001:0> scan 'TM_CORP_SNMBR_TOP35_M',COLUMNS=>'CF',LIMIT =>1
2014-11-17 15:02:12,618 WARN [main] util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable

ROW        COLUMN+CELL
 201410    column=CF:BIND_PNTRN_RT, timestamp=1416195048037, value=1.1
 201410    column=CF:DATA_TYP_CD, timestamp=1416195048037, value=1
 201410    column=CF:G4_PNTRN_RT, timestamp=1416195048037, value=1.1
 201410    column=CF:LOC_LVL1_CD, timestamp=1416195048037, value=GZ01
 201410    column=CF:RETN_ARPU, timestamp=1416195048037, value=1.1
 201410    column=CF:RETN_USR_CNT, timestamp=1416195048037, value=200
 201410    column=CF:SNAP_ARPU, timestamp=1416195048037, value=1.1
 201410    column=CF:SNAP_USR_CNT, timestamp=1416195048037, value=100
 201410    column=CF:SPAY_PNTRN_RT, timestamp=1416195048037, value=1.1
1 row(s) in 0.4600 seconds

[7] Restrict the scan to a single column: scan 'TM_CORP_SNMBR_TOP35_M', {COLUMNS => 'CF:LOC_LVL1_CD'}

hbase(main):001:0> scan 'TM_CORP_SNMBR_TOP35_M', {COLUMNS => 'CF:LOC_LVL1_CD'}
2014-11-17 15:04:51,044 WARN [main] util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable

ROW      COLUMN+CELL
 201410   column=CF:LOC_LVL1_CD, timestamp=1416195048037, value=GZ01
 201411   column=CF:LOC_LVL1_CD, timestamp=1416195048037, value=GZ02
2 row(s) in 0.4410 seconds
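Note: COLUMNS also accepts a list, so several qualifiers can be projected in a single scan; a sketch using two columns of this table:

hbase> scan 'TM_CORP_SNMBR_TOP35_M', {COLUMNS => ['CF:LOC_LVL1_CD', 'CF:SNAP_ARPU']}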

[8] Insert a record: put 'TM_CORP_SNMBR_TOP35_M', '201409', 'CF:LOC_LVL1_CD', 'GZ03'

hbase(main):001:0> put 'TM_CORP_SNMBR_TOP35_M', '201409', 'CF:LOC_LVL1_CD', 'GZ03'

2014-11-17 15:06:50,736 WARN [main] util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
0 row(s) in 0.4900 seconds
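Note: put writes one cell at a time and behaves as an upsert: writing to an existing (rowkey, column) pair simply adds a newer cell version. A timestamp can also be supplied explicitly as a fifth argument; the value below is illustrative only:

hbase> put 'TM_CORP_SNMBR_TOP35_M', '201409', 'CF:LOC_LVL1_CD', 'GZ03', 1416208011582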

[9] Get the record whose rowkey equals a given value: get 'TM_CORP_SNMBR_TOP35_M', '201410'

hbase(main):001:0> get 'TM_CORP_SNMBR_TOP35_M', '201410'

2014-11-17 15:16:05,821 WARN [main] util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
COLUMN              CELL
 CF:BIND_PNTRN_RT    timestamp=1416195048037, value=1.1
 CF:DATA_TYP_CD      timestamp=1416195048037, value=1
 CF:G4_PNTRN_RT      timestamp=1416195048037, value=1.1
 CF:LOC_LVL1_CD      timestamp=1416195048037, value=GZ01
 CF:RETN_ARPU        timestamp=1416195048037, value=1.1
 CF:RETN_USR_CNT     timestamp=1416195048037, value=200
 CF:SNAP_ARPU        timestamp=1416195048037, value=1.1
 CF:SNAP_USR_CNT     timestamp=1416195048037, value=100
 CF:SPAY_PNTRN_RT    timestamp=1416195048037, value=1.1
9 row(s) in 0.4460 seconds

[10] Get one column family of a given rowkey: get 'TM_CORP_SNMBR_TOP35_M', '201410', 'CF'

hbase(main):002:0> get 'TM_CORP_SNMBR_TOP35_M','201410','CF'

COLUMN              CELL
 CF:BIND_PNTRN_RT    timestamp=1416195048037, value=1.1
 CF:DATA_TYP_CD      timestamp=1416195048037, value=1
 CF:G4_PNTRN_RT      timestamp=1416195048037, value=1.1
 CF:LOC_LVL1_CD      timestamp=1416195048037, value=GZ01
 CF:RETN_ARPU        timestamp=1416195048037, value=1.1
 CF:RETN_USR_CNT     timestamp=1416195048037, value=200
 CF:SNAP_ARPU        timestamp=1416195048037, value=1.1
 CF:SNAP_USR_CNT     timestamp=1416195048037, value=100
 CF:SPAY_PNTRN_RT    timestamp=1416195048037, value=1.1
9 row(s) in 0.0250 seconds

[11] Get a single column of a given rowkey: get 'TM_CORP_SNMBR_TOP35_M', '201410', 'CF:LOC_LVL1_CD'

hbase(main):003:0> get 'TM_CORP_SNMBR_TOP35_M','201410','CF:LOC_LVL1_CD'

COLUMN              CELL
 CF:LOC_LVL1_CD      timestamp=1416195048037, value=GZ01
1 row(s) in 0.0080 seconds

[12] Update a column value: put 'TM_CORP_SNMBR_TOP35_M', '201410', 'CF:BIND_PNTRN_RT', '2.01'

hbase(main):003:0> put 'TM_CORP_SNMBR_TOP35_M', '201410', 'CF:BIND_PNTRN_RT', '2.01'
0 row(s) in 0.0600 seconds

hbase(main):004:0> get 'TM_CORP_SNMBR_TOP35_M', '201410', 'CF:BIND_PNTRN_RT'
COLUMN              CELL
 CF:BIND_PNTRN_RT    timestamp=1416208760428, value=2.01
1 row(s) in 0.0080 seconds
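Note: the old value is not overwritten in place; HBase keeps it as an older cell version until compaction. Because this table's column family was created with VERSIONS => '1' (see the describe output above), only the newest version can be read back; for a family configured with more versions, history could be retrieved with a sketch like:

hbase> get 'TM_CORP_SNMBR_TOP35_M', '201410', {COLUMN => 'CF:BIND_PNTRN_RT', VERSIONS => 3}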

[13] Delete a single column of a given rowkey: delete 'TM_CORP_SNMBR_TOP35_M', '201410', 'CF:BIND_PNTRN_RT'

hbase(main):002:0> delete 'TM_CORP_SNMBR_TOP35_M', '201410', 'CF:BIND_PNTRN_RT'
0 row(s) in 0.0590 seconds

[14] Delete all columns of a given rowkey: deleteall 'TM_CORP_SNMBR_TOP35_M', '201410'

hbase(main):002:0> deleteall 'TM_CORP_SNMBR_TOP35_M', '201410'
0 row(s) in 0.0460 seconds

hbase(main):003:0> scan 'TM_CORP_SNMBR_TOP35_M'

ROW      COLUMN+CELL
 201409   column=CF:LOC_LVL1_CD, timestamp=1416208011582, value=GZ03
 201411   column=CF:BIND_PNTRN_RT, timestamp=1416195048037, value=1.1
 201411   column=CF:DATA_TYP_CD, timestamp=1416195048037, value=1
 201411   column=CF:G4_PNTRN_RT, timestamp=1416195048037, value=1.1
 201411   column=CF:LOC_LVL1_CD, timestamp=1416195048037, value=GZ02
 201411   column=CF:RETN_ARPU, timestamp=1416195048037, value=1.1
 201411   column=CF:RETN_USR_CNT, timestamp=1416195048037, value=200
 201411   column=CF:SNAP_ARPU, timestamp=1416195048037, value=1.1
 201411   column=CF:SNAP_USR_CNT, timestamp=1416195048037, value=100
 201411   column=CF:SPAY_PNTRN_RT, timestamp=1416195048037, value=1.1
2 row(s) in 0.0310 seconds

[15] Delete all records in a table: truncate 'TM_CORP_SNMBR_TOP35_M'

hbase(main):001:0> truncate 'TM_CORP_SNMBR_TOP35_M'

Truncating 'TM_CORP_SNMBR_TOP35_M' table (it may take a while):

2014-11-17 15:52:01,190 WARN [main] util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
 - Disabling table...
 - Dropping table...
 - Creating table...
0 row(s) in 2.5390 seconds

hbase(main):002:0> scan 'TM_CORP_SNMBR_TOP35_M'

ROW      COLUMN+CELL
0 row(s) in 0.0430 seconds

[16] Check whether a table exists: exists 'TM_CORP_SNMBR_TOP35_M'

hbase(main):001:0> exists 'TM_CORP_SNMBR_TOP35_M'

2014-11-17 15:24:09,838 WARN [main] util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Table TM_CORP_SNMBR_TOP35_M does exist
0 row(s) in 1.0640 seconds
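Note: related to exists, the list command shows user tables and accepts an optional regular expression, which is handy when table names share a prefix; a sketch:

hbase> list 'TM_.*'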

==========================================
【Help Documentation】

[scan]

hbase> scan 'hbase:meta'

hbase> scan 'hbase:meta', {COLUMNS => 'info:regioninfo'}

hbase> scan 'ns1:t1', {COLUMNS => ['c1', 'c2'], LIMIT => 10, STARTROW => 'xyz'}
hbase> scan 't1', {COLUMNS => ['c1', 'c2'], LIMIT => 10, STARTROW => 'xyz'}
hbase> scan 't1', {COLUMNS => 'c1', TIMERANGE => [1303668804, 1303668904]}
hbase> scan 't1', {REVERSED => true}
hbase> scan 't1', {FILTER => "(PrefixFilter ('row2') AND (QualifierFilter (>=, 'binary:xyz'))) AND (TimestampsFilter ( 123, 456))"}
hbase> scan 't1', {FILTER => org.apache.hadoop.hbase.filter.ColumnPaginationFilter.new(1, 0)}

For setting the Operation Attributes:
hbase> scan 't1', { COLUMNS => ['c1', 'c2'], ATTRIBUTES => {'mykey' => 'myvalue'}}
hbase> scan 't1', { COLUMNS => ['c1', 'c2'], AUTHORIZATIONS => ['PRIVATE','SECRET']}

For experts, there is an additional option -- CACHE_BLOCKS -- which switches block caching for the scanner on (true) or off (false). By default it is enabled. Examples:
hbase> scan 't1', {COLUMNS => ['c1', 'c2'], CACHE_BLOCKS => false}

[get]

hbase> get 'ns1:t1', 'r1'
hbase> get 't1', 'r1'
hbase> get 't1', 'r1', {TIMERANGE => [ts1, ts2]}
hbase> get 't1', 'r1', {COLUMN => 'c1'}
hbase> get 't1', 'r1', {COLUMN => ['c1', 'c2', 'c3']}
hbase> get 't1', 'r1', {COLUMN => 'c1', TIMESTAMP => ts1}
hbase> get 't1', 'r1', {COLUMN => 'c1', TIMERANGE => [ts1, ts2], VERSIONS => 4}
hbase> get 't1', 'r1', {COLUMN => 'c1', TIMESTAMP => ts1, VERSIONS => 4}
hbase> get 't1', 'r1', {FILTER => "ValueFilter(=, 'binary:abc')"}
hbase> get 't1', 'r1', 'c1'
hbase> get 't1', 'r1', 'c1', 'c2'
hbase> get 't1', 'r1', ['c1', 'c2']
hbase> get 't1', 'r1', {COLUMN => 'c1', ATTRIBUTES => {'mykey'=>'myvalue'}}
hbase> get 't1', 'r1', {COLUMN => 'c1', AUTHORIZATIONS => ['PRIVATE','SECRET']}

The same commands can also be run on a table reference t (obtained via get_table):
hbase> t.get 'r1'
hbase> t.get 'r1', {TIMERANGE => [ts1, ts2]}
hbase> t.get 'r1', {COLUMN => 'c1'}
hbase> t.get 'r1', {COLUMN => ['c1', 'c2', 'c3']}
hbase> t.get 'r1', {COLUMN => 'c1', TIMESTAMP => ts1}
hbase> t.get 'r1', {COLUMN => 'c1', TIMERANGE => [ts1, ts2], VERSIONS => 4}
hbase> t.get 'r1', {COLUMN => 'c1', TIMESTAMP => ts1, VERSIONS => 4}
hbase> t.get 'r1', {FILTER => "ValueFilter(=, 'binary:abc')"}
hbase> t.get 'r1', 'c1'
hbase> t.get 'r1', 'c1', 'c2'
hbase> t.get 'r1', ['c1', 'c2']

Access Control

May 16, 2015 10:50

Modifying the configuration files

May 16, 2015 11:17

Modify HBase's configuration files, then restart the HBase service.

【HBase RegionServer】 node 24 -- note: this must be a node that hosts Regions
/opt/mapr/hbase/hbase-0.98.9/conf/hbase-site.xml (corresponding GPFS directory: /home/cloud_service/hbase/conf)

<property>
  <name>hbase.rpc.engine</name>
  <value>org.apache.hadoop.hbase.ipc.SecureRpcEngine</value>
</property>
<property>
  <name>hbase.coprocessor.region.classes</name>
  <value>org.apache.hadoop.hbase.security.token.TokenProvider,org.apache.hadoop.hbase.security.access.AccessController</value>
</property>
<property>
  <name>hbase.superuser</name>
  <value>mapr</value> <!-- replace mapr with your Hadoop admin user -->
</property>

【HBase Master】 node 25 -- note: this must be the node that hosts the Master
/opt/mapr/hbase/hbase-0.98.9/conf/hbase-site.xml

<property>
  <name>hbase.rpc.engine</name>
  <value>org.apache.hadoop.hbase.ipc.SecureRpcEngine</value>
</property>
<property>
  <name>hbase.coprocessor.master.classes</name>
  <value>org.apache.hadoop.hbase.security.access.AccessController</value>
</property>
<property>
  <name>hbase.superuser</name>
  <value>mapr</value> <!-- replace mapr with your Hadoop admin user -->
</property>
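Note: after restarting, it is worth confirming which user the shell is connected as before testing any grants. Assuming the security setup above is active, the shell's whoami command prints the current user:

hbase> whoami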

Access control explained

May 16, 2015 11:19

1. HBase defines five permission flags, RWXCA: READ('R'), WRITE('W'), EXEC('X'), CREATE('C'), ADMIN('A').

HBase enforces security at the following levels:
 - Superuser: a super administrator with all permissions, configured via the hbase.superuser parameter.
 - Global: global permissions apply to every table in the cluster.
 - Namespace: namespace level.
 - Table: table level.
 - ColumnFamily: column-family level.
 - Cell: cell level.

2. As in a relational database, permissions are granted and revoked with grant and revoke, but the syntax is different. The grant syntax is:

grant <user> <permissions> [<table> [<column family> [<column qualifier>]]]

For example, grant the hive user read/write access to the table member. Once hbase.security.authorization is enabled, by default a user can only access the tables it owns; the member table created earlier is owned by the hbase user, so other users cannot access it. Querying it as hive:

# sudo -u hive hbase shell
> scan 'member'
ERROR: org.apache.hadoop.hbase.security.AccessDeniedException: Insufficient permissions (table=member, action=READ)

Grant the permission in HBase:

# Syntax: grant, with the arguments separated by commas.
# Permissions are expressed with the five letters:
#   READ('R'), WRITE('W'), EXEC('X'), CREATE('C'), ADMIN('A')
# For example, grant the 'test' user read/write access to table t1:
hbase(main)> grant 'test', 'RW', 't1'

Create a test table (used again in step 3 below):

> create 'hadoop_test12','CF'

> grant 'hive', 'RW', 'member'
0 row(s) in 0.4660 seconds

Then check the permissions with user_permission:

> user_permission 'member'
User    Table,Family,Qualifier:Permission
 hive    member,,: [Permission: actions=READ,WRITE]

Query from hive again; the hive user can now access the table:

> scan 'member'
ROW      COLUMN+CELL
 Elvis    column=address:city, timestamp=1425891057211, value=Beijing

3. Grant a user permission on a single column:

grant 'test', 'R', 'hadoop_test12', 'CF', 'LOC_LVL1_CD'   -- CF is the column family, LOC_LVL1_CD the column qualifier

4. Revoking permissions uses revoke, with the syntax:

revoke <user> [<table> [<column family> [<column qualifier>]]]

For example, revoke the 'test' user's permission on the LOC_LVL1_CD column of table TM_CORP_SNMBR_TOP35_M:

hbase(main):040:0> revoke 'test', 'TM_CORP_SNMBR_TOP35_M', 'CF', 'LOC_LVL1_CD'
0 row(s) in 0.1330 seconds
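To confirm that the revoke took effect, user_permission can be run again on the table; only the remaining grants (if any) should be listed:

hbase> user_permission 'TM_CORP_SNMBR_TOP35_M'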

Hbase Java API

May 16, 2015 10:54

Directory: /BIDATA/hadoop/jyy/hbase/package

Compile:
javac -classpath /BIDATA/hadoop/hbase/lib/hbase-client-0.98.7-hadoop2.jar:/BIDATA/hadoop/hbase/lib/hbase-common-0.98.7-hadoop2.jar:/BIDATA/hadoop/hbase/lib/hadoop-common-2.2.0.jar -Xlint:deprecation HbaseTest.java

Source code: HbaseTest.java

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Scan;
import java.io.IOException;
import org.apache.hadoop.hbase.MasterNotRunningException;
import org.apache.hadoop.hbase.ZooKeeperConnectionException;
import org.apache.hadoop.hbase.client.HBaseAdmin;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.CellUtil;
import org.apache.hadoop.hbase.Cell;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Delete;

public class HbaseTest {

    private static Configuration conf = null;

    static {
        Configuration HBASE_CONFIG = new Configuration();
        conf = HBaseConfiguration.create(HBASE_CONFIG);
    }

    // Disable a table
    public static void hbaseDisableTable(String tbName) {
        try {
            HBaseAdmin hBaseAdmin = new HBaseAdmin(conf);
            hBaseAdmin.disableTable(tbName);
        } catch (MasterNotRunningException e) {
            e.printStackTrace();
        } catch (ZooKeeperConnectionException e) {
            e.printStackTrace();
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

    // Drop a table (it must already be disabled)
    public static void hbaseDleteTable(String tbName) {
        try {
            HBaseAdmin hBaseAdmin = new HBaseAdmin(conf);
            hBaseAdmin.deleteTable(tbName);
        } catch (MasterNotRunningException e) {
            e.printStackTrace();
        } catch (ZooKeeperConnectionException e) {
            e.printStackTrace();
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

    // Create a table, dropping it first if it already exists
    public static void hbaseCreateTable(String tbName, String CF) {
        try {
            HBaseAdmin hBaseAdmin = new HBaseAdmin(conf);
            if (hBaseAdmin.tableExists(tbName)) {
                hBaseAdmin.disableTable(tbName);
                hBaseAdmin.deleteTable(tbName);
            }
            HTableDescriptor tableDescriptor = new HTableDescriptor(TableName.valueOf(tbName)); // table name
            tableDescriptor.addFamily(new HColumnDescriptor(CF)); // column family name
            hBaseAdmin.createTable(tableDescriptor);
        } catch (MasterNotRunningException e) {
            e.printStackTrace();
        } catch (ZooKeeperConnectionException e) {
            e.printStackTrace();
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

    // Insert a single cell: column family, column name, value, rowkey
    public static void hbaseInsertOneColumn(String tbName, String CF, String columnName, String columnVal, String rowVal) {
        Put put = new Put(rowVal.getBytes());
        put.add(CF.getBytes(), columnName.getBytes(), columnVal.getBytes());
        try {
            HTable table = new HTable(conf, tbName);
            table.put(put);
            table.close();
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

    // Insert a whole row. columnVal is a comma-separated list whose first field
    // is the rowkey (STAT_MO); empty fields are skipped. The original string
    // literals were garbled in extraction, so the column order below is an
    // assumption reconstructed from the Oracle model and the unit tests.
    public static void hbaseInsertOneRow(String tbName, String CF, String columnVal) {
        String[] columnNames = {"LOC_LVL1_CD", "DATA_TYP_CD", "SNAP_USR_CNT", "RETN_USR_CNT",
                "SNAP_ARPU", "RETN_ARPU", "G4_PNTRN_RT", "BIND_PNTRN_RT", "SPAY_PNTRN_RT"};
        String[] columnList = columnVal.split(",", -1);
        String rowVal = columnList[0];
        if (rowVal.equals("")) {
            System.out.println("【Error】Primary Key is Null!");
        } else {
            Put put = new Put(rowVal.getBytes());
            for (int i = 1; i < columnList.length && i <= columnNames.length; i++) {
                if (!columnList[i].equals("")) {
                    put.add(CF.getBytes(), columnNames[i - 1].getBytes(), columnList[i].getBytes());
                }
            }
            try {
                HTable table = new HTable(conf, tbName);
                table.put(put);
                table.close();
            } catch (IOException e) {
                e.printStackTrace();
            }
        }
    }

    // Full table scan
    public static void hbaseScan(String tbName) {
        try {
            HTable table = new HTable(conf, tbName);
            ResultScanner rs = table.getScanner(new Scan());
            for (Result r : rs) {
                System.out.println("==============================");
                String row = new String(r.getRow());
                System.out.println("ROW: " + row);
                for (Cell cell : r.rawCells()) {
                    String CF = new String(CellUtil.cloneFamily(cell));
                    String columnName = new String(CellUtil.cloneQualifier(cell));
                    String columnVal = new String(CellUtil.cloneValue(cell));
                    System.out.println(CF + ":" + columnName + " = " + columnVal);
                }
            }
            table.close();
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

    // Get all cells of the row with the given rowkey
    public static void hbaseGetByRowID(String tbName, String RowID) {
        try {
            HTable table = new HTable(conf, tbName);
            Get get = new Get(RowID.getBytes());
            Result r = table.get(get);
            for (Cell cell : r.rawCells()) {
                String CF = new String(CellUtil.cloneFamily(cell));
                String columnName = new String(CellUtil.cloneQualifier(cell));
                String columnVal = new String(CellUtil.cloneValue(cell));
                System.out.println(CF + ":" + columnName + " = " + columnVal);
            }
            table.close();
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

    // Delete all cells of the row with the given rowkey
    public static void hbaseDeleteByRowID(String tbName, String RowID) {
        try {
            HTable table = new HTable(conf, tbName);
            Delete del = new Delete(RowID.getBytes());
            table.delete(del);
            table.close();
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

    public static void main(String[] args) {
        if (args.length > 1) {
            String tbname = args[1];
            if (args.length == 2) {
                if (args[0].equals("hbaseDisableTable")) { hbaseDisableTable(tbname); }
                else if (args[0].equals("hbaseDleteTable")) { hbaseDleteTable(tbname); }
                else if (args[0].equals("hbaseScan")) { hbaseScan(tbname); }
            } else if (args.length == 3) {
                if (args[0].equals("hbaseCreateTable")) { hbaseCreateTable(tbname, args[2]); }
                else if (args[0].equals("hbaseGetByRowID")) { hbaseGetByRowID(tbname, args[2]); }
                else if (args[0].equals("hbaseDeleteByRowID")) { hbaseDeleteByRowID(tbname, args[2]); }
            } else if (args.length == 4) {
                if (args[0].equals("hbaseInsertOneRow")) { hbaseInsertOneRow(tbname, args[2], args[3]); }
            } else if (args.length == 6) {
                if (args[0].equals("hbaseInsertOneColumn")) { hbaseInsertOneColumn(tbname, args[2], args[3], args[4], args[5]); }
            }
        }
    }
}

Unit test commands:

hbase HbaseTest hbaseDisableTable TM_CORP_SNMBR_TOP35_M
hbase HbaseTest hbaseDleteTable TM_CORP_SNMBR_TOP35_M
hbase HbaseTest hbaseCreateTable TM_CORP_SNMBR_TOP35_M CF
hbase HbaseTest hbaseInsertOneColumn TM_CORP_SNMBR_TOP35_M CF LOC_LVL1_CD GZ01 201409
hbase HbaseTest hbaseInsertOneRow TM_CORP_SNMBR_TOP35_M CF 201411,GZ02,1,100,200,1.1,1.1,1.1,1.1,1.1
hbase HbaseTest hbaseInsertOneRow TM_CORP_SNMBR_TOP35_M CF ,GZ02,1,100,200,1.1,1.1,1.1,1.1,1.1
hbase HbaseTest hbaseInsertOneRow TM_CORP_SNMBR_TOP35_M CF 201411,GZ02,1,100,200,,1.1,1.1,1.1,1.1
hbase HbaseTest hbaseScan TM_CORP_SNMBR_TOP35_M
hbase HbaseTest hbaseGetByRowID TM_CORP_SNMBR_TOP35_M 201411
hbase HbaseTest hbaseDeleteByRowID TM_CORP_SNMBR_TOP35_M 201411
