Smart Home System Based on a Single-Chip Microcontroller [Automation Graduation Project: Opening Report, Foreign-Language Translation and Design Specification].zip

Contents (preview of the first 15 of 37 pages):
Graduation Project (Thesis) Task Book

1. Objectives of this graduation project (thesis):
Through the graduation project the student is to receive the comprehensive training required of an automation engineer and to improve various design and application abilities, specifically:
1. The ability to carry out investigations and to search and read Chinese and foreign literature.
2. The ability to apply professional theory and knowledge to the analysis and solution of practical problems.
3. The ability to carry out independent research and argumentation combining qualitative and quantitative methods.
4. The ability to draw up experimental plans, select and commission instruments and equipment, and measure, collect and process experimental data.
5. The ability to design, calculate and draw, including the ability to use computers.
6. The ability to express oneself in writing and orally, combining logical and visual thinking.

2. Contents and requirements of the task (including original data, technical requirements, work requirements, etc.):
1. The topic is the design of a smart home system around an AT89C51-series microcontroller, divided into four parts: a power supply module, an information acquisition module, an alarm module and a liquid-crystal display module. The design method is roughly as follows. From the system requirements, first decide whether the system is to be open-loop, closed-loop or a data-processing system. Then select the sensing elements: when the overall scheme is fixed, the elements that measure the monitored parameters must be chosen first, since they are one of the main factors governing the accuracy of the control system. Then select the actuators: the actuator is one of the key components of a microcomputer control system, and its choice must match the control algorithm on one hand and suit the actual conditions of the controlled object on the other. Finally, draw the flow chart and schematic of the whole system.
2. Design the system's hardware circuits and software, covering the detailed hardware configuration, the system wiring, program debugging and the other detailed steps;
3. Complete a system technical document conforming to the thesis standards of Jinling Institute of Technology, including the technical materials, circuit drawings, programs, etc.;
4. The system must be demonstrated in real hardware and must run under power.

3. Requirements for the results of the project (drawings, tables, physical hardware, etc.):
1. Complete on schedule a graduation design specification (thesis) conforming to the standards of Jinling Institute of Technology, explaining the design steps and reasoning in detail;
2. Present a structurally complete, reasonable and reliable technical scheme;
3. Provide design notes for the electrical hardware circuits;
4. Provide the corresponding drawings and technical parameters;
5. Debugging must succeed, and the working system must be demonstrated at the defence.

4. Main references:
1. 王俊峰, 孟令启. 现代传感器应用技术[M]. 北京: 机械工业出版社, 2007.
2. 张萍. 基于STC52单片机的智能家居系统设计[J]. 电子制作, 2015(4).
3. 陈荣坤. 基于STC12C5A60S2单片机的智能家居环境监控系统的设计与实现[J]. 智能计算机与应用, 2015(3).
4. 禹谢华. 基于单片机的新型智能家居无线双道报警系统WDAS的设计[J]. 山东农业大学学报, 2014(5).
5. 何立民. 单片机高级教程(第1版)[M]. 北京: 北京航空航天大学出版社, 2001.
6. 赵晓安. MCS-51单片机原理及应用[M]. 天津: 天津大学出版社, 2001.
7. 李广第. 单片机基础[M]. 北京: 北京航空航天大学出版社, 1999.
8. 徐惠民, 安德宁. 单片微型计算机原理、接口与应用[M]. 北京: 北京邮电大学出版社, 1996.
9. 金发庆. 传感器技术与应用[M]. 北京: 机械工业出版社, 2006.
10. 夏继强. 单片机实验与实践教程[M]. 北京: 北京航空航天大学出版社, 2001.
11. 孙洪程. 过程控制工程设计(第2版)[M]. 北京: 化学工业出版社, 2009.
12. 魏克新. 自动控制综合应用技术[M]. 北京: 机械工业出版社, 2007.
13. 白志刚. 自动调节系统解析与PID整定[M]. 北京: 化学工业出版社, 2012.
14. 沈聿农. 传感器及应用技术[M]. 北京: 化学工业出版社, 2001.
15. 范晶彦. 传感器与检测技术应用[M]. 北京: 机械工业出版社, 2005.

5. Work schedule for this project:
2015.11.10-2015.12.13  Investigation and collection of material; preliminary guidance of the students; drafting and selecting topics; filling in the task book.
2015.12.15-2015.12.31  Students study the task book and make the preliminary preparations for the project. The task book is formally issued by 31 December; the two departments submit their topic-selection analysis summaries by 21 December (for the writing requirements, see Attachment 2 of the internal notice).
2016.01.09-2016.04.05  Students carry out the design under the advisor's guidance; draft the outline of the thesis or design specification (below, "the document"); write and submit the opening report, the foreign-language reference material with its translation, and the thesis outline. By 5 April 2016 each student must hand in a substantially complete design and the writing outline of the document as the basis for the mid-term check. The advisor guides and reviews the work and comments on the final draft; students whose main work does not pass receive a rectification notice.
2016.04.06-2016.04.10  Students submit the mid-term progress report for the advisor's review; each programme organises the mid-term check (including acceptance of the design results).
2016.04.11-2016.05.10  Students write the design document; 8 May 2016 is the deadline for the final draft. From 9 to 13 May the advisors and reviewing teachers grade and comment on the designs and documents through the graduation project (thesis) management system. The defence schedule is arranged and students are notified to return to campus by 1 May.
2016.05.14-2016.05.15  Students check the defence schedule; group defences of the graduation projects (theses) are held.
2016.05.16-2016.05.29  Second defence for students who did not pass, and entry of the graduation-project grades.
2016.05.30-2016.06.07  Students revise the graduation project (thesis) materials according to the defence, upload the final version to the management system and hand in the paper copy; 7 June 2016 is the deadline for the final version.
2016.06.07-2016.06.30  Each department submits its written summary of this year's graduation projects (theses) and the related materials.

Review opinion of the programme: Approved.  Responsible person: (signed)  21 December 2015

Graduation Project (Thesis) Opening Report

1. Literature review (no fewer than about 1000 characters, written from the surveyed literature in the light of the project topic):

As the realisation of home informatisation, the smart home has become an important part of the informatisation of society; with its huge application prospects, the Internet of Things will be a fairly realistic breakthrough point in the development of the smart home industry and is of great significance to it.

1. The smart home concept
A smart home (Smart Home) takes the home as its platform and combines building, automation and intelligent technology into an efficient, comfortable, safe and convenient living environment. Home automation technology originated in the United States, where the most representative technology is X-10: through the X-10 communication protocol, the devices in a network can share resources. Because its wiring is simple, its functions flexible and its expansion easy, it has been widely accepted and applied: more than 200 million X-10 products have been sold, and in the United States alone over six million households use them. An automated smart home is no longer a passive building; instead it becomes a tool that helps its owner make the best use of time and makes the home more comfortable, safe, efficient and energy-saving. The smart home is one of the hottest topics in modern society. Its goal is to use networks and other information and communication technology to control household appliances intelligently, so that they run as people have set them regardless of distance; intelligence and remote control are its two defining characteristics. More and more institutions and individuals have begun to research the smart home. With the development of network technology, and of wireless networking in particular, a networked smart home system can provide remote control, appliance control (air conditioners, water heaters and so on), lighting control, indoor and outdoor remote control, automatic curtains, burglar alarms, remote telephone control, programmable timed control, computer control and other functions, making life more comfortable, convenient and safe.

2. Functions of a smart home control system
The main functions of a smart home control system fall into two areas: automatic control of household equipment and home security. Automatic monitoring of household equipment covers centralised and remote control of appliances, monitoring and control from a distance (by telephone or over the Internet), and data acquisition. The pre-set time programs that run through items (1) to (5) below are illustrated by the sketch that follows this list.
(1) Monitoring and control of household appliances: water heaters, microwave ovens, audio-visual equipment and the like are monitored and controlled according to pre-set programs.
(2) Data acquisition, metering and transmission for heat, gas, water and electricity meters: an acquisition program, set up according to the requirements of the estate's property management, reads the meters automatically through sensors and transmits the results to the estate management system.
(3) Monitoring, regulation and control of air conditioners, according to pre-set programs and to parameters such as time, temperature and humidity.
(4) Monitoring, regulation and control of lighting: the lights of each room are switched on and off according to a pre-set time program, and the illuminance of each room can be adjusted automatically.
(5) Curtain control: curtains are opened and closed according to a pre-set time program.
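The "pre-set time program" behind items (1) to (5) can be pictured as a table of (minute-of-day, device, action) entries scanned once per minute. The C sketch below is only an illustration of that idea: the device names, the schedule entries and the once-per-minute tick are assumptions made for the example, not details taken from the thesis design.

/* Minimal time-program scheduler sketch.
   Hypothetical illustration: device list, schedule entries and the
   per-minute tick are assumptions, not part of the original design. */
#include <stdio.h>

enum device { LIGHT_LIVING_ROOM, CURTAIN_BEDROOM };

struct entry {
    int minute;          /* minute of day, 0..1439; 18:30 -> 18*60+30 */
    enum device dev;
    int on;              /* 1 = on/open, 0 = off/closed */
};

static const struct entry schedule[] = {
    {  7 * 60 +  0, CURTAIN_BEDROOM,   1 },  /* 07:00 open curtains  */
    { 18 * 60 + 30, LIGHT_LIVING_ROOM, 1 },  /* 18:30 lights on      */
    { 22 * 60 +  0, CURTAIN_BEDROOM,   0 },  /* 22:00 close curtains */
    { 23 * 60 +  0, LIGHT_LIVING_ROOM, 0 },  /* 23:00 lights off     */
};

static void drive(enum device dev, int on)
{
    /* A real system would switch a relay or a motor driver here;
       the sketch only logs the action. */
    printf("device %d -> %s\n", (int)dev, on ? "on" : "off");
}

/* Called once per minute, e.g. from a real-time clock tick. */
static void run_schedule(int minute)
{
    size_t i;
    for (i = 0; i < sizeof schedule / sizeof schedule[0]; i++)
        if (schedule[i].minute == minute)
            drive(schedule[i].dev, schedule[i].on);
}

int main(void)
{
    int m;
    for (m = 0; m < 24 * 60; m++)    /* simulate one full day */
        run_schedule(m);
    return 0;
}

A real controller would call run_schedule() from a clock interrupt and drive relays or curtain motors instead of printing.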
3. References:
1. 王东峰, 等. 单片机C语言应用100例[M]. 北京: 电子工业出版社, 2009.
2. 陈海宴. 51单片机原理及应用[M]. 北京: 北京航空航天大学出版社, 2010.
3. 胡汉才. 单片机原理及接口技术[M]. 北京: 清华大学出版社, 1996.
4. 高稚允, 高岳. 光电检测技术[M]. 北京: 国防工业出版社, 1983.
5. 钟富昭, 等. 8051单片机典型模块设计与应用[M]. 北京: 人民邮电出版社, 2007.
6. 李平, 等. 单片机入门与开发[M]. 北京: 机械工业出版社, 2008.
7. 梁森, 王侃夫, 黄杭美. 自动检测与转换技术[M]. 北京: 机械工业出版社, 2007.
8. Meehan Joanne, Muir Lindsey. SCM in Merseyside SMEs: Benefits and barriers[J]. TQM Journal, 2008.
9. Yeager Brent. How to troubleshoot your electronic scale[J]. Powder and Bulk Engineering, 1995.
10. 杨清梅, 孙建民. 传感器与测试技术[M]. 哈尔滨: 哈尔滨工程大学出版社, 2005.
11. 康华光. 模拟电子技术基础[M]. 北京: 高等教育出版社, 2006.
12. 高吉祥. 全国大学生电子设计竞赛培训系列教程[M]. 北京: 电子工业出版社, 2007.
13. 李增国. 传感器与检测技术[M]. 北京: 北京航空航天大学出版社, 2009.
14. 秦龙. MSP430单片机常用模块与综合系统实例精讲[M]. 北京: 电子工业出版社, 2007.
15. 宋文绪, 杨帆. 自动检测技术[M]. 北京: 高等教育出版社, 2000.
16. 王俊峰, 孟令启. 现代传感器应用技术[M]. 北京: 机械工业出版社, 2007.
17. 魏小龙. MSP430系列单片机接口技术及系统设计实例[M]. 北京: 北京航空航天大学出版社.

Graduation Project (Thesis) Opening Report

2. Problems to be studied or solved by this project, and the research means (approaches) to be adopted:
1. Objectives of this graduation project (thesis): through the graduation project the student is to receive the comprehensive training required of an electrical engineer, improving the same six design and application abilities listed in the task book: investigation and Chinese and foreign literature search; applying professional theory and knowledge to practical problems; independent qualitative and quantitative research and argumentation; experimental planning, instrument selection and commissioning, and data measurement, collection and processing; design, calculation and drawing, including the use of computers; and written and oral expression combining logical and visual thinking.
2. Contents and requirements of the task (including original data, technical requirements, work requirements, etc.): items 1 to 4 repeat the task book, namely the AT89C51-based design in four modules (power supply, information acquisition, alarm, liquid-crystal display) together with its design method; the hardware circuits and software with detailed configuration, wiring and debugging; a technical document conforming to the Jinling Institute of Technology standards; and a powered demonstration of the real hardware. Two requirements are added:
5. The subsystem must be able to run in concert with the overall system;
6. The student must complete all the tasks and take part in the final defence.
A hedged main-loop sketch of the four-module arrangement follows this section.
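As a concrete picture of how the four modules named in item 2 might cooperate in a polled loop, here is a minimal Keil C51-style sketch. Everything specific in it is an assumption made for illustration: the sensor byte on port P1, the alarm output on P3.0, the value echoed to P2 for the display, and the 0x80 threshold are not taken from the actual design.

/* Minimal AT89C51 main-loop sketch (Keil C51 toolchain assumed).
   Pin assignments, threshold and LCD helpers are hypothetical. */
#include <reg51.h>

sbit ALARM = P3^0;    /* assumed alarm output pin (buzzer or relay) */

static unsigned char read_sensor(void)
{
    /* Assumed: the information acquisition module presents its
       reading as a parallel byte on port P1. */
    return P1;
}

static void lcd_init(void)
{
    /* A real driver would configure the LCD controller here. */
}

static void lcd_show(unsigned char v)
{
    /* Assumed: the value is handed to the display module via P2. */
    P2 = v;
}

void main(void)
{
    unsigned char value;

    lcd_init();
    ALARM = 0;
    for (;;) {                        /* acquire -> compare -> display */
        value = read_sensor();
        ALARM = (value > 0x80);       /* assumed alarm threshold */
        lcd_show(value);
    }
}

The structure mirrors the task description: acquire a measurement, compare it against a limit for the alarm module, and hand the value to the display module, repeating forever.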
Advisor's opinion:
1. On the literature review: the review is fairly rich in content and the references are reasonable; it summarises the background, fundamentals and history of microcontroller-based smart home systems and also sets out the task to be studied, so it gives useful guidance for the project.
2. On the depth, breadth and workload of the topic, and a prediction of the results: the task is to design a microcontroller-based smart home system, a worked case in the field of electrical control. The technology is relatively mature and the depth moderate, but the knowledge involved is broad, covering sensor technology, electronic modules, touch screens and other techniques. By investigating real cases, consulting the professional literature and debugging the system, the student can achieve the final design task and results, which will greatly improve his or her ability to apply the specialty.
3. Approval of the opening report: Approved.  Advisor: (signed)  30 March 2016
Review opinion of the programme: Approved.  Responsible person: (signed)  30 March 2016

Progress in Computers
Maurice Wilkes
Prestige Lecture delivered to the IEE, Cambridge, on 5 February 2009

The first stored program computers began to work around 1950. The one we built in Cambridge, the EDSAC, was first used in the summer of 1949.

These early experimental computers were built by people like myself with varying backgrounds. We all had extensive experience in electronic engineering and were confident that that experience would stand us in good stead. This proved true, although we had some new things to learn. The most important of these was that transients must be treated correctly; what would cause a harmless flash on the screen of a television set could lead to a serious error in a computer.

As far as computing circuits were concerned, we found ourselves with an embarras de richesse. For example, we could use vacuum tube diodes for gates, as we did in the EDSAC, or pentodes with control signals on both grids, a system widely used elsewhere. This sort of choice persisted, and the term "families of logic" came into use. Those who have worked in the computer field will remember TTL, ECL and CMOS. Of these, CMOS has now become dominant.

In those early years, the IEE was still dominated by power engineering, and we had to fight a number of major battles in order to get radio engineering, along with the rapidly developing subject of electronics (dubbed in the IEE "light current electrical engineering"), properly recognised as an activity in its own right. I remember that we had some difficulty in organising a conference because the power engineers' ways of doing things were not our ways. A minor source of irritation was that all IEE published papers were expected to start with a lengthy statement of earlier practice, something difficult to do when there was no earlier practice.

Consolidation in the 1960s

By the late 1950s or early 1960s, the heroic pioneering stage was over and the computer field was starting up in real earnest. The number of computers in the world had increased and they were much more reliable than the very early ones. To those years we can ascribe the first steps in high-level languages and the first operating systems. Experimental time-sharing was beginning, and ultimately computer graphics was to come along.

Above all, transistors began to replace vacuum tubes. This change presented a formidable challenge to the engineers of the day. They had to forget what they knew about circuits and start again. It can only be said that they measured up superbly well to the challenge and that the change could not have gone more smoothly.

Soon it was found possible to put more than one transistor on the same bit of silicon, and this was the beginning of integrated circuits. As time went on, a sufficient level of integration was reached for one chip to accommodate enough transistors for a small number of gates or flip-flops. This led to a range of chips known as the 7400 series. The gates and flip-flops were independent of one another and each had its own pins. They could be connected by off-chip wiring to make a computer or anything else.

These chips made a new kind of computer possible. It was called a minicomputer. It was something less than a mainframe, but still very powerful, and much more affordable. Instead of having one expensive mainframe for the whole organisation, a business or a university was able to have a minicomputer for each major department.

Before long minicomputers began to spread and become more powerful. The world was hungry for computing power and it had been very frustrating for industry not to be able to supply it on the scale required and at a reasonable cost. Minicomputers transformed the situation.

The fall in the cost of computing did not start with the minicomputer; it had always been that way. This was what I meant when I referred in my abstract to inflation in the computer industry "going the other way". As time goes on people get more for their money, not less.

Research in Computer Hardware

The time that I am describing was a wonderful one for research in computer hardware. The user of the 7400 series could work at the gate and flip-flop level, and yet the overall level of integration was sufficient to give a degree of reliability far above that of discrete transistors. The researcher, in a university or elsewhere, could build any digital device that a fertile imagination could conjure up. In the Computer Laboratory we built the Cambridge CAP, a full-scale minicomputer with fancy capability logic.

The 7400 series was still going strong in the mid 1970s and was used for the Cambridge Ring, a pioneering wide-band local area network. Publication of the design study for the Ring came just before the announcement of the Ethernet. Until these two systems appeared, users had mostly been content with teletype-based local area networks. Rings need high reliability because, as the pulses go repeatedly round the ring, they must be continually amplified and regenerated. It was the high reliability provided by the 7400 series of chips that gave us the courage needed to embark on the project for the Cambridge Ring.
The RISC Movement and Its Aftermath

Early computers had simple instruction sets. As time went on, designers of commercially available machines added features which they thought would improve performance. Few comparative measurements were done, and on the whole the choice of features depended upon the designer's intuition.

In 1980, the RISC movement that was to change all this broke on the world. The movement opened with a paper by Patterson and Ditzel entitled "The Case for the Reduced Instruction Set Computer". Apart from leading to a striking acronym, this title conveys little of the insights into instruction set design which went with the RISC movement, in particular the way it facilitated pipelining, a system whereby several instructions may be in different stages of execution within the processor at the same time. Pipelining was not new, but it was new for small computers.

The RISC movement benefited greatly from methods which had recently become available for estimating the performance to be expected from a computer design without actually implementing it. I refer to the use of a powerful existing computer to simulate the new design. By the use of simulation, RISC advocates were able to predict with some confidence that a good RISC design would be able to out-perform the best conventional computers using the same circuit technology. This prediction was ultimately borne out in practice.

Simulation made rapid progress and soon came into universal use by computer designers. In consequence, computer design has become more of a science and less of an art. Today, designers expect to have a roomful of computers available to do their simulations, not just one. They refer to such a roomful by the attractive name of "computer farm".

The x86 Instruction Set

Little is now heard of pre-RISC instruction sets, with one major exception, namely that of the Intel 8086 and its progeny, collectively referred to as x86. This has become the dominant instruction set, and the RISC instruction sets that originally had a considerable measure of success are having to put up a hard fight for survival.

This dominance of x86 disappoints people like myself who come from the research wings, both academic and industrial, of the computer field. No doubt business considerations have a lot to do with the survival of x86, but there are other reasons as well. However much we research-oriented people would like to think otherwise, high-level languages have not yet eliminated the use of machine code altogether. We need to keep reminding ourselves that there is much to be said for strict binary compatibility with previous usage when that can be attained. Nevertheless, things might have been different if Intel's major attempt to produce a good RISC chip had been more successful. I am referring to the i860 (not the i960, which was something different). In many ways the i860 was an excellent chip, but its software interface did not fit it to be used in a workstation.

There is an interesting sting in the tail of this apparently easy triumph of the x86 instruction set. It proved impossible to match the steadily increasing speed of RISC processors by direct implementation of the x86 instruction set as had been done in the past. Instead, designers took a leaf out of the RISC book; although it is not obvious on the surface, a modern x86 processor chip contains hidden within it a RISC-style processor with its own internal RISC coding.
The incoming x86 code is, after suitable massaging, converted into this internal code and handed over to the RISC processor, where the critical execution is performed. In this summing up of the RISC movement, I rely heavily on the latest edition of Hennessy and Patterson's book on computer design as my supporting authority; see in particular Computer Architecture, third edition, 2003, pp 146, 151-4, 157-8.

The IA-64 Instruction Set

Some time ago, Intel and Hewlett-Packard introduced the IA-64 instruction set. This was primarily intended to meet a generally recognised need for a 64-bit address space. In this, it followed the lead of the designers of the MIPS R4000 and the Alpha. However, one would have thought that Intel would have stressed compatibility with the x86; the puzzle is that they did the exact opposite.

Moreover, built into the design of IA-64 is a feature known as predication which makes it incompatible in a major way with all other instruction sets. In particular, it needs 6 extra bits with each instruction. This upsets the traditional balance between instruction word length and information content, and it changes significantly the brief of the compiler writer.

In spite of having an entirely new instruction set, Intel made the puzzling claim that chips based on IA-64 would be compatible with earlier x86 chips. It was hard to see exactly what was meant. Chips for the latest IA-64 processor, namely the Itanium, appear to have special hardware for compatibility. Even so, x86 code runs very slowly.

Because of the above complications, implementation of IA-64 requires a larger chip than is required for more conventional instruction sets. This in turn implies a higher cost. Such, at any rate, is the received wisdom, and, as a general principle, it was repeated as such by Gordon Moore when he visited Cambridge recently to open the Betty and Gordon Moore Library. I have, however, heard it said that the matter appears differently from within Intel. This I do not understand. But I am very ready to admit that I am completely out of my depth as regards the economics of the semiconductor industry.

AMD have defined a 64-bit instruction set that is more compatible with x86, and they appear to be making headway with it. The chip is not a particularly large one. Some people think that this is what Intel should have done. Since the lecture was delivered, Intel have announced that they will market a range of chips essentially compatible with those offered by AMD.

The Relentless Drive towards Smaller Transistors

The scale of integration continued to increase. This was achieved by shrinking the original transistors so that more could be put on a chip. Moreover, the laws of physics were on the side of the manufacturers. The transistors also got faster, simply by getting smaller. It was therefore possible to have, at the same time, both high density and high speed.

There was a further advantage. Chips are made on discs of silicon, known as wafers. Each wafer has on it a large number of individual chips, which are processed together and later separated. Since shrinkage makes it possible to get more chips on a wafer, the cost per chip goes down. Falling unit cost was important to the industry because, if the latest chips are cheaper to make as well as faster, there is no reason to go on offering the old ones, at least not indefinitely. There can thus be one product for the entire market.
However, detailed cost calculations showed that, in order to maintain this advantage as shrinkage proceeded beyond a certain point, it would be necessary to move to larger wafers. The increase in the size of wafers was no small matter. Originally, wafers were one or two inches in diameter, and by 2000 they were as much as twelve inches. At first, it puzzled me that, when shrinkage presented so many other problems, the industry should make things harder for itself by going to larger wafers. I now see that reducing unit cost was just as important to the industry as increasing the number of transistors on a chip, and that this justified the additional investment in foundries and the increased risk.

The degree of integration is measured by the feature size, which, for a given technology, is best defined as half the distance between wires in the densest chips made in that technology. At the present time, production of 90 nm chips is still building up.

Suspension of Murphy's Law

In March 1997, Gordon Moore was a guest speaker at the celebrations of the centenary of the discovery of the electron, held at the Cavendish Laboratory. It was during the course of his lecture that I first heard the fact that you can have silicon chips that are both fast and low in cost described as a violation of Murphy's law, or Sod's law as it is usually called in the UK. Moore said that experience in other fields would lead you to expect to have to choose between speed and cost, or to compromise between them. In fact, in the case of silicon chips, it is possible to have both.

In a reference book available on the web, Murphy is identified as an engineer working on human acceleration tests for the US Air Force in 1949. However, we were perfectly familiar with the law in my student days, when we called it by a much more prosaic name than either of those mentioned above, namely, the Law of General Cussedness. We even had a mock examination question in which the law featured. It was the type of question in which the first part asks for a definition of some law or principle and the second part contains a problem to be solved with the aid of it. In our case the first part was to define the Law of General Cussedness and the second was the problem: "A cyclist sets out on a circular cycling tour. Derive an equation giving the direction of the wind at any time."

The Single-Chip Computer

At each shrinkage the number of chips needed was reduced, and there were fewer wires going from one chip to another. This led to an additional increment in overall speed, since the transmission of signals from one chip to another takes a long time.

Eventually, shrinkage proceeded to the point at which the whole processor, except for the caches, could be put on one chip. This enabled a workstation to be built that out-performed the fastest minicomputer of the day, and the result was to kill the minicomputer stone dead. As we all know, this had severe consequences for the computer industry and for the people working in it.

From the above time, the high-density CMOS silicon chip was cock of the roost. Shrinkage went on until millions of transistors could be put on a single chip, and the speed went up in proportion.

Processor designers began to experiment with new architectural features designed to give extra speed. One very successful experiment concerned methods for predicting the way program branches would go. It was a surprise to me how successful this was.
It led to a significant speeding up of program execution, and other forms of prediction followed.

Equally surprising is what it has been found possible to put on a single-chip computer by way of advanced features. For example, features that had been developed for the IBM Model 91, the giant computer at the top of the System 360 range, are now to be found on microcomputers.

Murphy's Law remained in a state of suspension. No longer did it make sense to build experimental computers out of chips with a small scale of integration, such as that provided by the 7400 series. People who wanted to do hardware research at the circuit level had no option but to design chips and seek ways to get them made. For a time this was possible, if not easy. Unfortunately, there has since been a dramatic increase in the cost of making chips, mainly because of the increased cost of making masks for lithography, a photographic process used in the manufacture of chips. It has, in consequence, again become very difficult to finance the making of research chips, and this is currently a cause for some concern.

The Semiconductor Road Map

The extensive research and development work underlying the above advances has been made possible by a remarkable cooperative effort on the part of the international semiconductor industry. At one time, US monopoly laws would probably have made it illegal for US companies to participate in such an effort. However, around 1980 significant and far-reaching changes took place in the laws. The concept of pre-competitive research was introduced. Companies can now collaborate at the pre-competitive stage and later go on to develop products of their own in the regular competitive manner.

The agent by which the pre-competitive research in the semiconductor industry is managed is known as the Semiconductor Industry Association (SIA). This has been active as a US organisation since 1992 and it became international in 1998. Membership is open to any organisation that can contribute to the research effort. Every two years the SIA produces a new version of a document known as the International Technology Roadmap for Semiconductors (ITRS), with an update in the intermediate years. The first volume bearing the title Roadmap was issued in 1994, but two reports, written in 1992 and distributed in 1993, are regarded as the true beginning of the series.

Successive roadmaps aim at providing the best available industrial consensus on the way that the industry should move forward. They set out in great detail, over a 15-year horizon, the targets that must be achieved if the number of components on a chip is to be doubled every eighteen months, that is, if Moore's law is to be maintained, and if the cost per chip is to fall.

In the case of some items, the way ahead is clear. In others, manufacturing problems are foreseen and solutions to them are known, although not yet fully worked out; these areas are coloured yellow in the tables. Areas for which problems are foreseen, but for which no manufacturable solutions are known, are coloured red. Red areas are referred to as Red Brick Walls. The targets set out in the Roadmaps have proved realistic as well as challenging, and the progress of the industry as a whole has followed the Roadmaps closely.
This is a remarkable achievement, and it may be said that the merits of cooperation and competition have been combined in an admirable manner. It is to be noted that the major strategic decisions affecting the progress of the industry have been taken at the pre-competitive level in relative openness, rather than behind closed doors. These include the progression to larger wafers.

By 1995, I had begun to wonder exactly what would happen when the inevitable point was reached at which it became impossible to make transistors any smaller. My enquiries led me to visit ARPA headquarters in Washington DC, where I was given a copy of the recently produced Roadmap for 1994. This made it plain that serious problems would arise when a feature size of 100 nm was reached, an event projected to happen in 2007, with 70 nm following in 2010. The year projected for the coming of 100 nm (or rather 90 nm) was moved forward to 2004 in later Roadmaps, and in the event the industry got there a little sooner.

I presented the above information from the 1994 Roadmap, along with such other information as I could obtain, in a lecture to the IEE in London, entitled "The CMOS end-point and related topics in Computing", delivered on 8 February 1996. The idea that I then had was that the end would be a direct consequence of the number of electrons available to represent a one being reduced from thousands to a few hundred. At this point statistical fluctuations would become troublesome, and thereafter the circuits would either fail to work or, if they did work, would not be any faster. In fact, the physical limitations that are now beginning to make themselves felt do not arise through shortage of electrons, but because the insulating layers on the chip have become so thin that leakage due to quantum mechanical tunnelling has become troublesome.

There are many problems facing the chip manufacturer other than those that arise from fundamental physics, especially problems with lithography. In an update to the 2001 Roadmap published in 2002, it was stated that "the continuation of progress at present rate will be at risk as we approach 2005 when the roadmap projects that progress will stall without research break-throughs in most technical areas". This was the most specific statement about the Red Brick Wall that had so far come from the SIA, and it was a strong one. The 2003 Roadmap reinforces this statement by showing many areas marked red, indicating the existence of problems for which no manufacturable solutions are known.

It is satisfactory to report that, so far, timely solutions have been found to all the problems encountered. The Roadmap is a remarkable document and, for all its frankness about the problems looming ahead, it radiates immense confidence. Prevailing opinion reflects that confidence, and there is a general expectation that, by one means or another, shrinkage will continue, perhaps down to 45 nm or even less.

However, costs will rise steeply and at an increasing rate. It is cost that will ultimately be seen as the reason for calling a halt. The exact point at which an industrial consensus is reached that the escalating costs can no longer be met will depend on the general economic climate as well as on the financial strength of the semiconductor industry itself. Insulating layers in the most advanced chips are now approaching a thickness equal to that of five atoms. Beyond finding better insulating materials, and that cannot take us very far, there is nothing we can do about this.
We may also expect to face problems with on-chip wiring as wire cross-sections get smaller. These will concern heat dissipation and atom migration. The above problems are very fundamental. If we cannot make wires and insulators, we cannot make a computer, whatever improvements there may be in the CMOS process or in semiconductor materials. It is no good hoping that some new process or material might restart the merry-go-round of the density of transistors doubling every eighteen months.

I said above that there is a general expectation that shrinkage will continue, by one means or another, to 45 nm or even less. What I had in mind was that at some point further scaling of CMOS as we know it will become impracticable, and the industry will need to look beyond it. Since 2001 the Roadmap has had a section, entitled "Emerging research devices", on non-conventional forms of CMOS and the like. Vigorous and opportunist exploitation of these possibilities will undoubtedly take us a useful way further along the road, but the Roadmap rightly distinguishes such progress from the traditional scaling of conventional CMOS that we have been used to.

Advances in Memory Technology

Unconventional CMOS could revolutionise memory technology. Up to now, we have relied on DRAMs for main memory. Unfortunately, these are increasing in speed only marginally as shrinkage continues, whereas processor chips and their associated cache memory continue to double in speed every two years. The result is a growing gap in speed between the processor and the main memory. This is the "memory gap" and is a current source of anxiety. A breakthrough in memory technology, possibly using some form of unconventional CMOS, could lead to a major advance in overall performance on problems with large memory requirements, that is, problems which fail to fit into the cache. Perhaps this, rather than attaining marginally higher basic processor speed, will be the ultimate role for non-conventional CMOS.

Shortage of Electrons

Although shortage of electrons has not so far appeared as an obvious limitation, in the long term it may become so. Perhaps this is where the exploitation of non-conventional CMOS will lead us. However, some interesting work has been done, notably by Haroon Ahmed and his team working in the Cavendish Laboratory, on the direct development of structures in which a single electron more or less makes the difference between a zero and a one. However, very little progress has been made towards practical devices that could lead to the construction of a computer. Even with exceptionally good luck, many tens of years must inevitably elapse before a working computer based on single-electron effects can be contemplated.
译文: 微型计算机发展简史 (the package's Chinese rendering of the Maurice Wilkes lecture above; the preview breaks off partway through the section on smaller transistors).