pig2's personal space  https://www.aboutyun.com/?61

Message Board

pig2 2015-12-8 17:55
Recommended:

Aboutyun OpenStack beginner video course, from zero: development, deployment, plus theory [Juno and Kilo editions]
https://item.taobao.com/item.htm?spm=a1z10.1-c.w4004-4627152322.5.DCO1Ui&id=522548227466
小凌... 2015-12-8 17:23
Hi boss! Please spare a moment from your busy schedule to solve a problem for me, so I can escape this misery soon. While configuring OpenStack, running swift stat prints:
/usr/lib/python2.7/site-packages/keystoneclient/service_catalog.py:196: UserWarning: Providing attr without filter_value to get_urls() is deprecated as of the 1.7.0 release and may be removed in the 2.0.0 release. Either both should be provided or neither should be provided.
  'Providing attr without filter_value to get_urls() is '
Account HEAD failed: http://controller:8080/v1/AUTH_0c3ce3f0b54c49368c4ffa287dbc6de2 401 Unauthorized
I'd be deeply grateful!
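A hedged place to start, not a confirmed diagnosis: a 401 on the Account HEAD usually means the proxy rejected the token, so the authtoken and keystoneauth sections of /etc/swift/proxy-server.conf are worth rechecking. The host controller is taken from the log above; SWIFT_PASS and the role names are placeholders.

[filter:authtoken]
paste.filter_factory = keystonemiddleware.auth_token:filter_factory
auth_uri = http://controller:5000
identity_uri = http://controller:35357
admin_tenant_name = service
admin_user = swift
admin_password = SWIFT_PASS

[filter:keystoneauth]
use = egg:swift#keystoneauth
operator_roles = admin,user

Also confirm that the user you run swift stat as actually holds one of the operator_roles in its tenant.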
cuitinglei 2015-12-7 16:24
Could you tell me how to configure log4j with Flume?
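One hedged setup (hostname and port are placeholders): Flume ships a Log4jAppender in the flume-ng-sdk jar that forwards application logs to a Flume Avro source. In the application's log4j.properties:

log4j.rootLogger = INFO, flume
log4j.appender.flume = org.apache.flume.clients.log4jappender.Log4jAppender
log4j.appender.flume.Hostname = flume-host
log4j.appender.flume.Port = 41414

and a matching agent configuration on the Flume side (logger sink here just for testing):

a1.sources = r1
a1.channels = c1
a1.sinks = k1
a1.sources.r1.type = avro
a1.sources.r1.bind = 0.0.0.0
a1.sources.r1.port = 41414
a1.sources.r1.channels = c1
a1.channels.c1.type = memory
a1.sinks.k1.type = logger
a1.sinks.k1.channel = c1

The flume-ng-sdk jar must be on the application's classpath for the appender class to load.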
winner005 2015-11-25 11:19
15/11/25 11:08:49 ERROR util.SparkUncaughtExceptionHandler: Uncaught exception in thread Thread[sparkWorker-akka.actor.default-dispatcher-5,5,main]
java.util.concurrent.RejectedExecutionException: Task java.util.concurrent.FutureTask@1b7074e4 rejected from java.util.concurrent.ThreadPoolExecutor@394d48d3[Running, pool size = 1, active threads = 0, queued tasks = 0, completed tasks = 1]
        at java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2047)
        at java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:823)
        at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1369)
        at java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:112)
        at org.apache.spark.deploy.worker.Worker$$anonfun$org$apache$spark$deploy$worker$Worker$$tryRegisterAllMasters$1.apply(Worker.scala:211)
        at org.apache.spark.deploy.worker.Worker$$anonfun$org$apache$spark$deploy$worker$Worker$$tryRegisterAllMasters$1.apply(Worker.scala:210)
        at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
        at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
        at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
        at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:108)
        at scala.collection.TraversableLike$class.map(TraversableLike.scala:244)
        at scala.collection.mutable.ArrayOps$ofRef.map(ArrayOps.scala:108)
        at org.apache.spark.deploy.worker.Worker.org$apache$spark$deploy$worker$Worker$$tryRegisterAllMasters(Worker.scala:210)
        at org.apache.spark.deploy.worker.Worker$$anonfun$org$apache$spark$deploy$worker$Worker$$reregisterWithMaster$1.apply$mcV$sp(Worker.scala:288)
        at org.apache.spark.util.Utils$.tryOrExit(Utils.scala:1119)
        at org.apache.spark.deploy.worker.Worker.org$apache$spark$deploy$worker$Worker$$reregisterWithMaster(Worker.scala:234)
        at org.apache.spark.deploy.worker.Worker$$anonfun$receive$1.applyOrElse(Worker.scala:521)
        at org.apache.spark.rpc.akka.AkkaRpcEnv.org$apache$spark$rpc$akka$AkkaRpcEnv$$processMessage(AkkaRpcEnv.scala:177)
        at org.apache.spark.rpc.akka.AkkaRpcEnv$$anonfun$actorRef$lzycompute$1$1$$anon$1$$anonfun$receiveWithLogging$1$$anonfun$applyOrElse$4.apply$mcV$sp(AkkaRpcEnv.scala:126)
        at org.apache.spark.rpc.akka.AkkaRpcEnv.org$apache$spark$rpc$akka$AkkaRpcEnv$$safelyCall(AkkaRpcEnv.scala:197)
        at org.apache.spark.rpc.akka.AkkaRpcEnv$$anonfun$actorRef$lzycompute$1$1$$anon$1$$anonfun$receiveWithLogging$1.applyOrElse(AkkaRpcEnv.scala:125)
        at scala.runtime.AbstractPartialFunction$mcVL$sp.apply$mcVL$sp(AbstractPartialFunction.scala:33)
        at scala.runtime.AbstractPartialFunction$mcVL$sp.apply(AbstractPartialFunction.scala:33)
        at scala.runtime.AbstractPartialFunction$mcVL$sp.apply(AbstractPartialFunction.scala:25)
        at org.apache.spark.util.ActorLogReceive$$anon$1.apply(ActorLogReceive.scala:59)
        at org.apache.spark.util.ActorLogReceive$$anon$1.apply(ActorLogReceive.scala:42)
        at scala.PartialFunction$class.applyOrElse(PartialFunction.scala:118)
        at org.apache.spark.util.ActorLogReceive$$anon$1.applyOrElse(ActorLogReceive.scala:42)
        at akka.actor.ActorCell.receiveMessage(ActorCell.scala:498)
        at akka.actor.ActorCell.invoke(ActorCell.scala:456)
        at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:237)
        at akka.dispatch.Mailbox.run(Mailbox.scala:219)
        at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:386)
        at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
        at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
        at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
        at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
15/11/25 11:08:49 INFO util.ShutdownHookManager: Shutdown hook called
Hi, I'm running Spark from CDH5. After configuring it, starting a worker produces the thread-pool exception above. Could you help me analyze what's going on? Thanks!
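A hedged reading, not a confirmed diagnosis: the RejectedExecutionException fires inside tryRegisterAllMasters, which typically means the worker kept failing to (re)register and its registration executor had already been shut down, i.e. the master was unreachable or had died. Assuming standalone mode and default ports (master-host is a placeholder), a first check from the worker machine:

jps                              # on the master host: a "Master" process should be listed
curl http://master-host:8080     # the master web UI should respond
telnet master-host 7077          # the RPC port the worker registers against

Then make sure the spark://... master URL in conf/spark-env.sh and conf/slaves uses exactly the hostname the master binds to; an IP-versus-hostname mismatch is a common cause.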
noob22 2015-11-23 22:48
Hi, as a beginner I'd like to do custom development on the OpenStack dashboard, but I don't know how to set up the development environment. I'd appreciate your guidance. Thanks!
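A minimal sketch of the upstream Horizon workflow of that era (Juno/Kilo), assuming Python 2.7, git, and a reachable Keystone endpoint:

git clone https://github.com/openstack/horizon.git
cd horizon
cp openstack_dashboard/local/local_settings.py.example openstack_dashboard/local/local_settings.py
# edit OPENSTACK_HOST in local_settings.py to point at your controller
./run_tests.sh --runserver 0.0.0.0:8000

run_tests.sh builds a virtualenv with Horizon's dependencies on first run, then serves the dashboard with Django's development server, which is enough for iterating on custom panels.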
pianai 2015-10-30 17:10
Here to learn. Thanks for sharing!
麻瓜 2015-10-30 14:51
Running glance image-list gives:
Authorization Failed: An unexpected error prevented the server from fulfilling your request. (HTTP 500)
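A hedged sanity check, since an HTTP 500 behind "Authorization Failed" often traces back to the client's auth environment or to Keystone itself being down (all values below are placeholders):

export OS_USERNAME=admin
export OS_PASSWORD=ADMIN_PASS
export OS_TENANT_NAME=admin
export OS_AUTH_URL=http://controller:35357/v2.0
keystone token-get      # should return a token if Keystone is healthy
glance image-list

If token-get already fails, the problem is in Keystone or the endpoint URL rather than in Glance.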
397376972 2015-10-5 15:56
Learned something, thanks.
whz112131 2015-9-20 15:21
Leaving my mark.
jancan 2015-9-14 23:23
Hi, may I ask you about a problem? It has been bothering me for a very long time.

I've recently been using MapReduce with Hadoop and hitting all kinds of problems.
Environment:
Hadoop: 2.7.0
HBase: 1.0.1.1
At first it reported that HBaseConfiguration could not be found. Searching online suggested copying the jars under HBase's lib into Hadoop's lib.
I did that, with no effect, then worked through all kinds of references tweaking Hadoop parameters, and it still threw the exception.
In the end I resorted to editing hadoop-env.sh and adding HBase's lib to the classpath.
That finally made the exception go away.
But then something even harder to explain happened:
it started reporting that my own classes could not be found.
I've searched every post I could find online without an answer.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.mapreduce.MultiTableOutputFormat;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.KeyValueTextInputFormat;
import org.apache.hadoop.mapreduce.lib.input.MultipleInputs;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;

// MapDeal, ReduceDeal and CellTimeKeyPare are the poster's own classes.
public class Main
{
    public static void main(String[] args) throws Exception
    {
        Configuration conf = new Configuration();
        conf = HBaseConfiguration.create(conf);  // removing this line makes the user classes resolvable again

        // Note: the usage string lists four arguments but six are required here.
        if (args.length != 6)
        {
            System.err.println("Usage: MroFormat <in-mro> <in-xdr> <sample tbname> <event tbname>");
            System.exit(2);
        }
        //makeConfig(conf, args);

        String inpath1 = args[0];
        String inpath2 = args[1];

        Job job = Job.getInstance(conf, "MyTest");
        job.setNumReduceTasks(40);

        job.setJarByClass(Main.class);
        job.setReducerClass(ReduceDeal.MroFormatReducer.class);
        //job.setReducerClass(ReduceDeal.TestReducer.class);
        job.setSortComparatorClass(MapDeal.SortKeyComparator.class);
        job.setPartitionerClass(MapDeal.CellIDPartitioner.class);
        job.setGroupingComparatorClass(MapDeal.SortKeyGroupComparator.class);
        job.setMapOutputKeyClass(CellTimeKeyPare.class);
        job.setMapOutputValueClass(Text.class);

        // Two inputs, each with its own mapper
        MultipleInputs.addInputPath(job, new Path(inpath1), KeyValueTextInputFormat.class, MapDeal.MroMapper.class);
        MultipleInputs.addInputPath(job, new Path(inpath2), TextInputFormat.class, MapDeal.XdrMapper.class);
        job.setOutputFormatClass(MultiTableOutputFormat.class);
        //job.setOutputFormatClass(NullOutputFormat.class);

        //LOG.info(job.getPartitionerClass().getName());

        //TableMapReduceUtil.addDependencyJars(job);
        //TableMapReduceUtil.addDependencyJars(job.getConfiguration());

        //TableMapReduceUtil.initTableReducerJob("tab1", ReduceDeal.MroFormatReducer.class, job);

        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
On my local Win7 machine the code runs, but after packaging it and moving it to the server it fails with class-not-found errors. If I remove the line conf = HBaseConfiguration.create(conf); the handler classes below can be found again. Could the experts please take a look? Many thanks.
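Not a verified fix, but the commented-out TableMapReduceUtil lines in the code above point at the usual remedy: ship the HBase jars with the job submission instead of editing hadoop-env.sh. A hedged sketch:

import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;

// after the job is fully configured, before waitForCompletion():
TableMapReduceUtil.addDependencyJars(job);   // puts HBase's jars on the job's distributed cache

// on the submitting host, put HBase on the client classpath first:
//   export HADOOP_CLASSPATH=$(hbase classpath)
//   hadoop jar MroFormat.jar Main <args...>

If the user classes live in the same jar passed to hadoop jar, setJarByClass(Main.class) should ship them; class-not-found on the server side often means the job was launched in a way that bypassed that jar.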
jancan 2015-9-14 21:10
Master, you're really impressive.
songyl525 2015-9-8 16:14
Please help me take a look at this problem: http://www.aboutyun.com/thread-15133-1-1.html
linhan0123 2015-9-6 14:03
Hi, a question: with object storage on OpenStack, how can my instances make use of the containers or pseudo-folders in the object store?
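A hedged pointer rather than a full answer: Swift object storage is consumed over its HTTP API, not mounted as a filesystem, so an instance uses it the same way any external client does, with credentials and the proxy endpoint (names and the endpoint below are placeholders):

pip install python-swiftclient
swift --os-auth-url http://controller:5000/v2.0 --os-tenant-name demo \
      --os-username demo --os-password DEMO_PASS list                    # list containers
swift --os-auth-url http://controller:5000/v2.0 --os-tenant-name demo \
      --os-username demo --os-password DEMO_PASS upload my_container some_file

The only networking requirement is that the instance can reach the proxy endpoint.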
华信志恒 2015-8-28 15:15
Our OpenStack project uses Ceph for object storage, and one requirement involves displaying the used space of Cinder volumes. The Ceph rbd command (rbd -p volumes info vol-XXXXXX) returns the allocated size, but the output contains no used-space figure. How can I check how much space a Cinder volume has actually used? Thanks.

************************
[root@vm-manager-0-0 ceph]# rbd -p volumes ls
volume-0aaec2b6-1fc5-4d3a-b36b-bc62e34a80b6
volume-893b987e-d33a-43a9-9795-bf74076997d0
volume-d8094020-9962-422e-86db-09e263dbce57
volume-fd28c169-395c-4feb-bf5a-70a87006e032

rbd -p volumes info volume-0aaec2b6-1fc5-4d3a-b36b-bc62e34a80b6
rbd image 'volume-0aaec2b6-1fc5-4d3a-b36b-bc62e34a80b6':
        size 102400 MB in 25600 objects
        order 22 (4096 kB objects)
        block_name_prefix: rbd_data.2e2d067cf8ad1
        format: 2
        features: layering
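One hedged approach: rbd info only reports the provisioned size, but rbd diff lists the extents that have actually been written, so summing their lengths approximates real usage (using the first volume from the listing above):

rbd diff volumes/volume-0aaec2b6-1fc5-4d3a-b36b-bc62e34a80b6 | awk '{ sum += $2 } END { print sum/1024/1024 " MB" }'

This can be slow on large images, since it has to walk all of the image's objects.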
wuxiaoyong 2015-8-21 10:53
mapred-site.xml:
<property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
</property>

yarn-site.xml:
<property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
</property>
<property>
    <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>
wuxiaoyong 2015-8-21 10:49
Running in Hive: insert into table wxy_hive12 select id as key,name as value from wxy12;
On the :8088 web UI it reports this error:
Application application_1440067483360_0014 failed 2 times due to AM Container for appattempt_1440067483360_0014_000002 exited with exitCode: 126 due to: Exception from container-launch:
org.apache.hadoop.util.Shell$ExitCodeException:
at org.apache.hadoop.util.Shell.runCommand(Shell.java:464)
at org.apache.hadoop.util.Shell.run(Shell.java:379)
at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:589)
at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:283)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:79)

/bin/bash: /usr/java/jdk1.7.0_80: Is a directory
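A hedged reading of that last line: "Is a directory" means the container launch script ended up executing $JAVA_HOME itself instead of $JAVA_HOME/bin/java. Worth checking on every node (the path comes from the log above):

ls -l /usr/java/jdk1.7.0_80/bin/java       # the binary must exist at this exact path
grep -n JAVA_HOME $HADOOP_HOME/etc/hadoop/hadoop-env.sh $HADOOP_HOME/etc/hadoop/yarn-env.sh

Each export should read JAVA_HOME=/usr/java/jdk1.7.0_80 (the JDK root, no trailing /bin or /bin/java), and no script should invoke $JAVA_HOME without the /bin/java suffix.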
wuxiaoyong 2015-8-21 10:47
hadoop2.2+hbase0.98+hive0.13
wuxiaoyong 2015-8-21 10:47
Boss: after submitting a MapReduce job to the cluster, it errors out. Please help me analyze it, I'm begging you. requestedMemory=-1
2015-08-21 09:10:45,565 ERROR [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: ERROR IN CONTACTING RM.
java.io.IOException: org.apache.hadoop.yarn.exceptions.InvalidResourceRequestException: Invalid resource request, requested memory < 0, or requested memory > max configured, requestedMemory=-1, maxMemory=8192
        at org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerUtils.validateResourceRequest(SchedulerUtils.java:198)
        at org.apache.hadoop.yarn.server.resourcemanager.RMServerUtils.validateResourceRequests(RMServerUtils.java:78)
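A hedged sketch for mapred-site.xml: requestedMemory=-1 usually means the job's memory request resolved to an invalid value, and spelling the sizes out explicitly is one common fix. The values here are placeholders and must stay at or below the 8192 MB maximum shown in the log:

<property>
    <name>yarn.app.mapreduce.am.resource.mb</name>
    <value>1024</value>
</property>
<property>
    <name>mapreduce.map.memory.mb</name>
    <value>1024</value>
</property>
<property>
    <name>mapreduce.reduce.memory.mb</name>
    <value>1024</value>
</property>

Restart YARN and resubmit the job after changing these.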
chtt01 2015-8-20 16:26
Blog owner, help! Running MapReduce on CDH 5.3.0 fails with the error below. If you can sort it out, we'd be glad to keep you on as a long-term consultant. Many thanks!
The map phase completes successfully, but reduce fails; details are in the attached log.
Error: org.apache.hadoop.mapreduce.task.reduce.Shuffle$ShuffleError: error in shuffle in fetcher#4
        at org.apache.hadoop.mapreduce.task.reduce.Shuffle.run(Shuffle.java:134)
        at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:376)
        at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:168)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:415)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1642)
        at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:163)
Caused by: java.lang.NoClassDefFoundError: Ljava/lang/InternalError
        at org.apache.hadoop.io.compress.snappy.SnappyDecompressor.decompressBytesDirect(Native Method)
        at org.apache.hadoop.io.compress.snappy.SnappyDecompressor.decompress(SnappyDecompressor.java:239)
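A hedged first check rather than a definitive fix: the failure happens inside the native Snappy decompressor during the shuffle, so verify that every node actually has working native libraries:

hadoop checknative -a     # the snappy line should read "true" and point at a real .so

If snappy shows false on any node, redeploy the native libraries there; alternatively, switching the intermediate compression codec away from Snappy (mapreduce.map.output.compress.codec) isolates whether the native library is the culprit.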