Developing Hadoop applications with Eclipse on Windows 7

Most guides on the web install Eclipse on Linux for Hadoop application development, but many Java programmers are not that familiar with Linux and would rather develop Hadoop programs on Windows. The steps below, worked out by experiment, summarize how to develop Hadoop code with Eclipse on Windows.

1. Download the dedicated Hadoop plugin jar
The Hadoop version here is 2.3.0 and the cluster runs on CentOS 6.x. The plugin download address is http://download…/detail/mchdba/8267181; the jar is named hadoop-eclipse-plugin-2.3.0 and it works with the hadoop 2.x series.

2. Put the plugin jar into the eclipse/plugins directory
To save trouble later, I dropped in as many of the related jars as possible, as shown in the original screenshot.

3. Restart Eclipse and configure the Hadoop installation directory
If the plugin installed correctly, opening Window > Preferences shows a Hadoop Map/Reduce entry on the left side of the window; click it and set the Hadoop installation path on the right.

4. Configure Map/Reduce Locations
Open Window > Open Perspective > Other, select Map/Reduce, and click OK; a Map/Reduce Locations tab appears at the bottom right. Click that tab, then click the small elephant icon on the right to open the Hadoop Location configuration window. Enter any Location Name you like, then configure Map/Reduce Master and DFS Master so that Host and Port match the settings in core-site.xml, which here reads:

    hdfs://name01:9000

Fill in the dialog accordingly, click the Finish button, and close the window. On the left, expand DFS Locations > myhadoop (the location name configured in the previous step). If you can see user, the installation succeeded — but opening it showed an error:

    Error: Permission denied: user=root, access=READ_EXECUTE, inode="/tmp":hadoop:supergroup:drwx

This is a permission problem. Make all the Hadoop-related folders under /tmp/ owned by the hadoop user, then grant them 777 permissions:

    cd /tmp/
    chmod 777 /tmp/
    chown -R hadoop.hadoop /tmp/hsperfdata_root

After reconnecting, DFS Locations displays normally. (Map/Reduce Master here is the Map/Reduce address of the Hadoop cluster and should match the mapred.job.tracker setting in mapred-site.xml.)

(1) Clicking the location can also report:

    An internal error occurred during: "Connecting to DFS". UnknownHostException: name01

Enter the IP address, 28, directly in the hostname field instead of the hostname, and the location then opens normally, as shown in the original screenshot.

5. Create the WordCount project
Choose File > New > Project, select Map/Reduce Project, and enter the project name WordCount. Then create a new class named WordCount inside the project. One error that can appear:

    Invalid Hadoop Runtime specified; please click 'Configure Hadoop install directory' or fill in library location input field

The cause is a wrong directory choice: the project must not be placed under the root directory E:\hadoop; pick a different directory and the error disappears. Click through the wizard and press the Finish button to complete the project; the Eclipse console then prints:

    14-12-9 4:03:10 PM: Eclipse is running in a JRE, but a JDK is required
      Some Maven plugins may not work when importing projects or updating source folders.
    14-12-9 4:03:13 PM: Refreshing /WordCount/pom.xml
    14-12-9 4:03:14 PM: Refreshing /WordCount/pom.xml
    14-12-9 4:03:14 PM: Refreshing /WordCount/pom.xml
    14-12-9 4:03:14 PM: Updating index central|/maven2
    14-12-9 4:04:10 PM: Updated index for central|http://maven2

6. Import the lib jars
The Hadoop jars that need to be added are:
- all jars under /hadoop-2.3.0/share/hadoop/common, plus all jars in its lib subdirectory;
- all jars under /hadoop-2.3.0/share/hadoop/hdfs, excluding those in its lib subdirectory;
- all jars under /hadoop-2.3.0/share/hadoop/mapreduce, excluding those in its lib subdirectory;
- all jars under /hadoop-2.3.0/share/hadoop/yarn, excluding those in its lib subdirectory;
roughly 18 jars in total.

7. Code needed to submit the MapReduce job directly from Eclipse:

    package wc;

    import java.io.IOException;
    import java.util.StringTokenizer;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.Reducer;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
    import org.apache.hadoop.util.GenericOptionsParser;

    public class W2 {

        public static class TokenizerMapper
                extends Mapper<Object, Text, Text, IntWritable> {

            private final static IntWritable one = new IntWritable(1);
            private Text word = new Text();

            public void map(Object key, Text value, Context context)
                    throws IOException, InterruptedException {
                StringTokenizer itr = new StringTokenizer(value.toString());
                while (itr.hasMoreTokens()) {
                    word.set(itr.nextToken());
                    context.write(word, one);
                }
            }
        }

        public static class IntSumReducer
                extends Reducer<Text, IntWritable, Text, IntWritable> {

            private IntWritable result = new IntWritable();

            public void reduce(Text key, Iterable<IntWritable> values, Context context)
                    throws IOException, InterruptedException {
                int sum = 0;
                for (IntWritable val : values) {
                    sum += val.get();
                }
                result.set(sum);
                context.write(key, result);
            }
        }

        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            // The original text breaks off after "System.setProperty(";
            // the rest of main() below is restored from the standard
            // WordCount driver shipped with Hadoop.
            String[] otherArgs = new GenericOptionsParser(conf, args).getRemainingArgs();
            if (otherArgs.length != 2) {
                System.err.println("Usage: wordcount <in> <out>");
                System.exit(2);
            }
            Job job = Job.getInstance(conf, "word count");
            job.setJarByClass(W2.class);
            job.setMapperClass(TokenizerMapper.class);
            job.setCombinerClass(IntSumReducer.class);
            job.setReducerClass(IntSumReducer.class);
            job.setOutputKeyClass(Text.class);
            job.setOutputValueClass(IntWritable.class);
            FileInputFormat.addInputPath(job, new Path(otherArgs[0]));
            FileOutputFormat.setOutputPath(job, new Path(otherArgs[1]));
            System.exit(job.waitForCompletion(true) ? 0 : 1);
        }
    }

8. Running the job
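Before running the job, it helps to see what the mapper and reducer above actually compute once the Hadoop plumbing is stripped away: the mapper tokenizes each line and emits (word, 1), and the reducer sums the counts per word. A Hadoop-free sketch of the same logic (class and method names here are my own, not part of the tutorial's code):

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.StringTokenizer;

public class WordCountSketch {
    // Simulates map + shuffle + reduce for WordCount on in-memory lines:
    // tokenize each line on whitespace (the map step) and sum the
    // occurrences per token (the combine/reduce step).
    static Map<String, Integer> wordCount(String[] lines) {
        Map<String, Integer> counts = new LinkedHashMap<>();
        for (String line : lines) {
            StringTokenizer itr = new StringTokenizer(line);
            while (itr.hasMoreTokens()) {
                counts.merge(itr.nextToken(), 1, Integer::sum);
            }
        }
        return counts;
    }

    public static void main(String[] args) {
        String[] lines = { "hello hadoop", "hello eclipse" };
        System.out.println(wordCount(lines)); // {hello=2, hadoop=1, eclipse=1}
    }
}
```

The real job does exactly this per input split, with the framework handling the grouping of identical keys between the map and reduce phases.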
8.1 Create the input directory on HDFS

    [hadoop@name01 hadoop-2.3.0]$ hadoop fs -ls /
    [hadoop@name01 hadoop-2.3.0]$ hadoop fs -mkdir input
    mkdir: input: No such file or directory
    [hadoop@name01 hadoop-2.3.0]$

PS: fs needs the full path to create a folder. If the Apache Hadoop version is 0.x or 1.x:

    bin/hadoop fs -mkdir -p /in
    bin/hadoop fs -put /home/du/input /in

If the Apache Hadoop version is 2.x:

    bin/hdfs dfs -mkdir -p /in
    bin/hdfs dfs -put /home/du/input /in

For a vendor distribution such as Cloudera CDH, IBM BigInsights, or Hortonworks HDP, the first form works. Take care to create directories with their full path; note that the HDFS root directory is /.

8.2 Copy the local README.txt into the HDFS input directory

    [hadoop@name01 hadoop-2.3.0]$ find . -name README.txt
    ./share/doc/hadoop/common/README.txt
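The "full path" rule above can be illustrated without a cluster: HDFS resolves a relative path against the current user's home directory /user/<name>, which does not exist on a fresh install, hence the mkdir failure. A small sketch of that resolution rule (resolvePath is a hypothetical helper written for illustration, not a Hadoop API):

```java
public class HdfsPathSketch {
    // Illustrates how HDFS resolves a path argument: an absolute path is
    // used as-is, while a relative path (like the failing
    // "hadoop fs -mkdir input" above) is taken relative to /user/<name>.
    static String resolvePath(String path, String user) {
        if (path.startsWith("/")) {
            return path;                      // already absolute
        }
        return "/user/" + user + "/" + path;  // relative to the user's home
    }

    public static void main(String[] args) {
        System.out.println(resolvePath("/in", "hadoop"));   // /in
        System.out.println(resolvePath("input", "hadoop")); // /user/hadoop/input
    }
}
```

So "hadoop fs -mkdir input" tried to create /user/hadoop/input inside a parent that did not exist, while "hadoop fs -mkdir -p /in" succeeds because the path is absolute and -p creates missing parents.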
    [hadoop@name01 ~]$ hadoop fs -copyFromLocal ./src/hadoop-2.3.0/share/doc/hadoop/common/README.txt /data/input
    [hadoop@name01 ~]$ hadoop fs -ls /
    Found 2 items
    drwxr-xr-x   - hadoop supergroup          0 2014-12-15 23:34 /data
    -rw-r--r--   3 hadoop supergroup         88 2014-08-26 02:21 /input
    You have new mail in /var/spool/mail/root
    [hadoop@name01 ~]$

8.3 Run the job and check the output
(1) Looking directly on the Hadoop server:

    [hadoop@name01 ~]$ hadoop fs -ls /data/
    Found 2 items
    drwxr-xr-x   - hadoop supergroup          0 2014-12-15 23:29 /data/input
    drwxr-xr-x   - hadoop supergroup          0 2014-12-15 23:34 /data/output

(2) In the Eclipse console. The first screenful of console output is garbled beyond recovery in the original; it contained the usual local job-submission lines (JvmMetrics initialization, the session.id deprecation warning, "Total input paths to process : 1", "number of splits:1", and "Submitting tokens for job: job_local1764589720_0001"). The readable part of the log continues:

    2014-12-16 15:34:02,368 WARN main conf.Configuration (Configuration.java:loadProperty(2345) - file:/tmp/hadoop-hadoop/mapred/staging/hadoop1764589720/.staging/job_local1764589720_0001/job.xml: an attempt to override final parameter: mapreduce.job.end-notification.max.attempts; Ignoring.
    2014-12-16 15:34:02,682 WARN main conf.Configuration (Configuration.java:loadProperty(2345) - file:/tmp/hadoop-hadoop/mapred/local/localRunner/hadoop/job_local1764589720_0001/job_local1764589720_0001.xml: an attempt to override final parameter: mapreduce.job.end-notification.max.retry.interval; Ignoring.
    2014-12-16 15:34:02,682 WARN main conf.Configuration (Configuration.java:loadProperty(2345) - file:/tmp/hadoop-hadoop/mapred/local/localRunner/hadoop/job_local1764589720_0001/job_local1764589720_0001.xml: an attempt to override final parameter: mapreduce.job.end-notification.max.attempts; Ignoring.
    2014-12-16 15:34:02,703 INFO main mapreduce.Job (Job.java:submit(1289) - The url to track the job: http://localhost:8080/
    2014-12-16 15:34:02,704 INFO main mapreduce.Job (Job.java:monitorAndPrintJob(1334) - Running job: job_local1764589720_0001
    2014-12-16 15:34:02,707 INFO Thread-4 mapred.LocalJobRunner (LocalJobRunner.java:createOutputCommitter(471) - OutputCommitter set in config null
    2014-12-16 15:34:02,719 INFO Thread-4 mapred.LocalJobRunner (LocalJobRunner.java:createOutputCommitter(489) - OutputCommitter is org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter
    2014-12-16 15:34:02,853 INFO Thread-4 mapred.LocalJobRunner (LocalJobRunner.java:runTasks(448) - Waiting for map tasks
    2014-12-16 15:34:02,857 INFO LocalJobRunner Map Task Executor #0 mapred.LocalJobRunner (LocalJobRunner.java:run(224) - Starting task: attempt_local1764589720_0001_m_000000_0
    2014-12-16 15:34:02,919 INFO LocalJobRunner Map Task Executor #0 util.ProcfsBasedProcessTree (ProcfsBasedProcessTree.java:isAvailable(129) - ProcfsBasedProcessTree currently is supported only on Linux.
    2014-12-16 15:34:03,281 INFO LocalJobRunner Map Task Executor #0 mapred.Task (Task.java:initialize(581) - Using ResourceCalculatorProcessTree : org.apache.hadoop.yarn.util.WindowsBasedProcessTree@2e1022ec
    2014-12-16 15:34:03,287 INFO LocalJobRunner Map Task Executor #0 mapred.MapTask (MapTask.java:runNewMapper(733) - Processing split: hdfs://28:9000/data/input/README.txt:0+1366
    2014-12-16 15:34:03,304 INFO LocalJobRunner Map Task Executor #0 mapred.MapTask (MapTask.java:createSortingCollector(388) - Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
    2014-12-16 15:34:03,340 INFO LocalJobRunner Map Task Executor #0 mapred.MapTask (MapTask.java:setEquator(1181) - (EQUATOR) 0 kvi 26214396(104857584)
    2014-12-16 15:34:03,341 INFO LocalJobRunner Map Task Executor #0 mapred.MapTask (MapTask.java:init(975) - mapreduce.task.io.sort.mb: 100
    2014-12-16 15:34:03,341 INFO LocalJobRunner Map Task Executor #0 mapred.MapTask (MapTask.java:init(976) - soft limit at 83886080
    2014-12-16 15:34:03,341 INFO LocalJobRunner Map Task Executor #0 mapred.MapTask (MapTask.java:init(977) - bufstart = 0; bufvoid = 104857600
    2014-12-16 15:34:03,341 INFO LocalJobRunner Map Task Executor #0 mapred.MapTask (MapTask.java:init(978) - kvstart = 26214396; length = 6553600
    2014-12-16 15:34:03,708 INFO main mapreduce.Job (Job.java:monitorAndPrintJob(1355) - Job job_local1764589720_0001 running in uber mode : false
    2014-12-16 15:34:03,710 INFO main mapreduce.Job (Job.java:monitorAndPrintJob(1362) - map 0% reduce 0%
    2014-12-16 15:34:04,121 INFO LocalJobRunner Map Task Executor #0 mapred.LocalJobRunner (LocalJobRunner.java:statusUpdate(591) -
    2014-12-16 15:34:04,128 INFO LocalJobRunner Map Task Executor #0 mapred.MapTask (MapTask.java:flush(1435) - Starting flush of map output
    2014-12-16 15:34:04,128 INFO LocalJobRunner Map Task Executor #0 mapred.MapTask (MapTask.java:flush(1453) - Spilling map output
    2014-12-16 15:34:04,128 INFO LocalJobRunner Map Task Executor #0 mapred.MapTask (MapTask.java:flush(1454) - bufstart = 0; bufend = 2055; bufvoid = 104857600
    2014-12-16 15:34:04,128 INFO LocalJobRunner Map Task Executor #0 mapred.MapTask (MapTask.java:flush(1456) - kvstart = 26214396(104857584); kvend = 26213684(104854736); length = 713/6553600
    2014-12-16 15:34:04,179 INFO LocalJobRunner Map Task Executor #0 mapred.MapTask (MapTask.java:sortAndSpill(1639) - Finished spill 0
    2014-12-16 15:34:04,194 INFO LocalJobRunner Map Task Executor #0 mapred.Task (Task.java:done(995) - Task:attempt_local1764589720_0001_m_000000_0 is done. And is in the process of committing
    2014-12-16 15:34:04,207 INFO LocalJobRunner Map Task Executor #0 mapred.LocalJobRunner (LocalJobRunner.java:statusUpdate(591) - map
    2014-12-16 15:34:04,208 INFO LocalJobRunner Map Task Executor #0 mapred.Task (Task.java:sendDone(1115) - Task attempt_local1764589720_0001_m_000000_0 done.
    2014-12-16 15:34:04,208 INFO LocalJobRunner Map Task Executor #0 mapred.LocalJobRunner (LocalJobRunner.java:run(249) - Finishing task: attempt_local1764589720_0001_m_000000_0
    2014-12-16 15:34:04,208 INFO Thread-4 mapred.LocalJobRunner (LocalJobRunner.java:runTasks(456) - map task executor complete.
    2014-12-16 15:34:04,211 INFO Thread-4 mapred.LocalJobRunner (LocalJobRunner.java:runTasks(448) - Waiting for reduce tasks
    2014-12-16 15:34:04,211 INFO pool-6-thread-1 mapred.LocalJobRunner (LocalJobRunner.java:run(302) - Starting task: attempt_local1764589720_0001_r_000000_0
    2014-12-16 15:34:04,221 INFO pool-6-thread-1 util.ProcfsBasedProcessTree (ProcfsBasedProcessTree.java:isAvailable(129) - ProcfsBasedProcessTree currently is supported only on Linux.
    2014-12-16 15:34:04,478 INFO pool-6-thread-1 mapred.Task (Task.java:initialize(581) - Using ResourceCalculatorProcessTree : org.apache.hadoop.yarn.util.WindowsBasedProcessTree@36154615
    2014-12-16 15:34:04,483 INFO pool-6-thread-1 mapred.ReduceTask (ReduceTask.java:run(362) - Using ShuffleConsumerPlugin: org.apache.hadoop.mapreduce.task.reduce.Shuffle@e2b02a3
    2014-12-16 15:34:04,500 INFO pool-6-thread-1 reduce.MergeManagerImpl (MergeManagerImpl.java:<init>(193) - MergerManager: memoryLimit=949983616, maxSingleShuffleLimit=237495904, mergeThreshold=626989184, ioSortFactor=10, memToMemMergeOutputsThreshold=10
    2014-12-16 15:34:04,503 INFO EventFetcher for fetching Map Completion Events reduce.EventFetcher (EventFetcher.java:run(61) - attempt_local1764589720_0001_r_000000_0 Thread started: EventFetcher for fetching Map Completion Events
    2014-12-16 15:34:04,543 INFO localfetcher#1 reduce.LocalFetcher (LocalFetcher.java:copyMapOutput(140) - localfetcher#1 about to shuffle output of map attempt_local1764589720_0001_m_000000_0 decomp: 1832 len: 1836 to MEMORY
    2014-12-16 15:34:04,548 INFO localfetcher#1 reduce.InMemoryMapOutput (InMemoryMapOutput.java:shuffle(100) - Read 1832 bytes from map-output for attempt_local1764589720_0001_m_000000_0
    2014-12-16 15:34:04,553 INFO localfetcher#1 reduce.MergeManagerImpl (MergeManagerImpl.java:closeInMemoryFile(307) - closeInMemoryFile -> map-output of size: 1832, inMemoryMapOutputs.size() -> 1, commitMemory -> 0, usedMemory -> 1832
    2014-12-16 15:34:04,564 INFO EventFetcher for fetching Map Completion Events reduce.EventFetcher (EventFetcher.java:run(76) - EventFetcher is interrupted. Returning
    2014-12-16 15:34:04,566 INFO pool-6-thread-1 mapred.LocalJobRunner (LocalJobRunner.java:statusUpdate(591) - 1 / 1 copied.
    2014-12-16 15:34:04,566 INFO pool-6-thread-1 reduce.MergeManagerImpl (MergeManagerImpl.java:finalMerge(667) - finalMerge called with 1 in-memory map-outputs and 0 on-disk map-outputs
    2014-12-16 15:34:04,585 INFO pool-6-thread-1 mapred.Merger (Merger.java:merge(589) - Merging 1 sorted segments
    2014-12-16 15:34:04,585 INFO pool-6-thread-1 mapred.Merger (Merger.java:merge(688) - Down to the last merge-pass, with 1 segments left of total size: 1823 bytes
    2014-12-16 15:34:04,605 INFO pool-6-thread-1 reduce.MergeManagerImpl (MergeManagerImpl.java:finalMerge(742) - Merged 1 segments, 1832 bytes to disk to satisfy reduce memory limit
    2014-12-16 15:34:04,605 INFO pool-6-thread-1 reduce.MergeManagerImpl (MergeManagerImpl.java:finalMerge(772) - Merging 1 files, 1836 bytes from disk
    2014-12-16 15:34:04,606 INFO pool-6-thread-1 reduce.MergeManagerImpl (MergeManagerImpl.java:finalMerge(787) - Merging 0 segments, 0 bytes from memory into reduce
    2014-12-16 15:34:04,607 INFO pool-6-thread-1 mapred.Merger (Merger.java:merge(589) - Merging 1 sorted segments
    2014-12-16 15:34:04,608 INFO pool-6-thread-1 mapred.Merger (Merger.java:merge(688) - Down to the last merge-pass, with 1 segments left of total size: 1823 bytes
    2014-12-16 15:34:04,608 INFO pool-6-thread-1 mapred.LocalJobRunner (LocalJobRunner.java:statusUpdate(591) - 1 / 1 copied.
    2014-12-16 15:34:04,643 INFO pool-6-thread-1 Configuration.deprecation (Configuration.java:warnOnceIfDeprecated(996) - mapred.skip.on is deprecated. Instead, use mapreduce.job.skiprecords
    2014-12-16 15:34:04,714 INFO main mapreduce.Job (Job.java:monitorAndPrintJob(1362) - map 100% reduce 0%
    2014-12-16 15:34:04,842 INFO pool-6-thread-1 mapred.Task (Task.java:done(995) - Task:attempt_local1764589720_0001_r_000000_0 is done. And is in the process of committing
    2014-12-16 15:34:04,850 INFO pool-6-thread-1 mapred.LocalJobRunner (LocalJobRunner.java:statusUpdate(591) - 1 / 1 copied.
    2014-12-16 15:34:04,850 INFO pool-6-thread-1 mapred.Task (Task.java:commit(1156) - Task attempt_local1764589720_0001_r_000000_0 is allowed to commit now
    2014-12-16 15:34:04,881 INFO pool-6-thread-1 output.FileOutputCommitter (FileOutputCommitter.java:commitTask(439) - Saved output of task attempt_local1764589720_0001_r_000000_0 to hdfs://28:9000/data/output/_temporary/0/task_local1764589720_0001_r_000000
    2014-12-16 15:34:04,884 INFO pool-6-thread-1 mapred.LocalJobRunner (LocalJobRunner.java:statusUpdate(591) - reduce > reduce
    2014-12-16 15:34:04,884 INFO pool-6-thread-1 mapred.Task (Task.java:sendDone(1115) - Task attempt_local1764589720_0001_
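The original text breaks off in the middle of that last log line. Once the job commits, the reducer's output lands in /data/output/part-r-00000 as one tab-separated "word<TAB>count" pair per line — the format produced by context.write(key, result) under the default TextOutputFormat. A plain-Java sketch of reading that format back (the class is mine, and the sample lines are made up rather than the real README.txt counts):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class OutputParserSketch {
    // Parses reducer output lines of the form "word\tcount", as written
    // by TextOutputFormat for a <Text, IntWritable> reducer.
    static Map<String, Integer> parse(String[] lines) {
        Map<String, Integer> counts = new LinkedHashMap<>();
        for (String line : lines) {
            int tab = line.indexOf('\t');
            counts.put(line.substring(0, tab),
                       Integer.parseInt(line.substring(tab + 1)));
        }
        return counts;
    }

    public static void main(String[] args) {
        // Made-up sample lines; the real counts depend on README.txt.
        String[] sample = { "Hadoop\t4", "the\t8" };
        System.out.println(parse(sample)); // {Hadoop=4, the=8}
    }
}
```

On the cluster, the usual way to inspect the same data is "hadoop fs -cat /data/output/part-r-00000".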