
Graduation Project (Thesis): Translation of Foreign Literature

Title: The Science of Info Communications
Department: Department of Electrical and Information Engineering
Major: Communication Engineering
Name: Han Yunwei
Student No.: 20100606012

[Advisor's comments]    Grade:        Advisor's seal:        March 21, 2012

The Science of Info Communications

1. Introduction

The term "info communication system" appeared in the documents of the International Telecommunication Union as a generalization of the notions "information system" and "telecommunication system". While a telecommunication system is intended for the transfer of electric signals of diverse types, an info communication system is intended for the transmission of information. The concept incorporates the telecommunication system, but also includes the software required for its operation and all the information stored and transmitted in it.

We will regard info communication science as a natural scientific discipline. Any natural discipline, such as physics or chemistry, studies the structure (physical, chemical, etc.) of objects and the processes of interaction between these objects. In the science of info communications, the objects of research are data, and the process of interaction between the objects is the data transmission process. Accordingly, the objective of info communication science is the construction of formalized models of data structure and of data transmission from one object to another.

We will now briefly outline the main results of the research devoted to these problems, beginning with a discussion of data structures. In this area, most studies have focused on the construction of formal models of data, motivated by the need to create computerized databases. Traditionally, databases are subdivided into relational and hierarchical ones, the latter now being referred to as object-oriented. A relational database is in fact a set of tables, in which tables and rows correspond to specific objects and columns to object features. An object-oriented database, on the other hand, is a tree in which every node corresponds to an object. For example, the root of the tree might correspond to an object named "factory", while the nodes below the root correspond to a manpower assignment table and the range of products manufactured. The tree nodes below the manpower assignment table correspond to factory structures; the range of products is further subdivided into types of product, and so on.

[Biographical note: Prof. Nikolaj A. Kuznetsov, academician of the Russian Academy of Sciences, is a renowned specialist in the theoretical foundations of information and its applications, especially information process management. He is the director of the Institute for Information Transmission Problems and holds the chair of telecommunications at the Moscow Physics and Technology Institute. His principal research themes are optimal information management systems, stochastic differential equations, and desynchronized systems. Prof. Kuznetsov is the author of more than 200 technical papers and three monographs.]

It should be noted that the table form is easy to handle. Indeed, a table can be ordered by the values of some column, and rows or columns can be selected conveniently. However, if the object being described is complex and non-uniform, we may have to create hundreds or thousands of tables, which makes the links between them unmanageable. In such cases the object-oriented description should be preferred.

In recent years, research devoted to databases has developed very rapidly. Two directions are particularly noteworthy. First, there is research on the integration of well-structured and poorly structured data, targeted towards the unification of database and web technologies and largely based on the XML language; the development of effective XML databases is oriented to the handling of complex-structured XML documents and the data they contain. Second, significant progress is being made in the development of XML-DB user interfaces and in the integration of databases on the basis of grid technologies.

After this cursory review of methods for constructing formal models of structure, we must mention the quantitative characteristics of information, which are in fact analogous to the quantitative parameters of other natural sciences (such as grams, meters, or watts in physics). What are the quantitative characteristics of information? Our first observation is that, when considering computer databases and the quantity of information they contain, we often cite the number of binary cells of computer memory occupied by this information, i.e. the number of bytes, megabytes, or terabytes. However, this parameter can hardly be regarded as an objective characteristic of the information, since it depends largely on the skills of the programmer who created the database. In a more abstract sense, there are several approaches to defining the quantity of information.

2. The Concept of Information

The broad interest in information processes and the versatility of these processes have given rise to many definitions and interpretations of the notion of information. I will list the following four, which are in our view the most accurate and effective.

(i) Information is the entity retained under computable isomorphism.
(ii) Information on a subject domain (its objects or phenomena) is the result of a homomorphic mapping (i.e. one retaining the principal relations) of the elements of this subject domain to certain entities identifiable from these elements, such as signals, characteristics, or descriptions.
(iii) Information is a meaningful description of an object or phenomenon.
(iv) The information contained in a message is an entity that determines the change of knowledge upon receipt of the message.

Obviously, the first definition is suitable for a sufficiently formal (mathematical) description of models of real objects. The second definition reflects the process of formalizing the information characteristics of an object with the help of formal signals. The third definition can be linked with the process of transmitting data about some object, phenomenon, or event. Finally, the fourth definition emphasizes the novelty of the received information for its recipient.

This diversity is typical not only of the qualitative definition of the notion of information but also of the definition of information quantity. With some exceptions, all existing approaches to defining the quantity of information may be divided into five kinds: (i) entropic, (ii) algorithmic, (iii) combinatorial, (iv) semantic, and (v) pragmatic. The first three provide a quantitative definition of the complexity of the object or phenomenon to be described. The fourth describes the informative character and novelty of a message transferred to its recipient. The fifth points to the usefulness of the message for the recipient. We will consider each of them in turn.

2.1 The Entropic Approach

Historically, the entropic approach was the first to appear. As far back as the 19th century, physicists introduced the notion of entropy to define a quantity characterizing the processes of conversion of thermal energy into mechanical energy. To a certain extent, this quantity could be viewed as a measure of the chaoticity (uncertainty) of molecular motion. Probably for this reason, Claude Shannon [3] called the quantity of information emitted by a source its entropy. To be more precise, the entropy, or uncertainty, is a real-valued function that depends on the probabilities of events and satisfies the following conditions:

(i) An event that occurs with probability 1 has zero uncertainty.
(ii) If one event is less probable than another, then the uncertainty of the first event is greater than that of the second.
(iii) The uncertainty of the simultaneous occurrence of two independent events is equal to the sum of their uncertainties.

It is now generally accepted that the entropic approach to characterizing the notion of information, and to introducing its quantitative measure, was created by Shannon. The theory developed by Shannon allowed researchers to interpret, from a unified point of view, the various piecemeal but important results obtained by his predecessors. In this connection, the work of Robert Hartley should be mentioned first of all. Hartley introduced a notion closely related to Shannon's entropy, concerning the equiprobable outcomes of randomly occurring events. After Shannon's paper was published in 1948, a great number of works followed that developed the entropic approach to information theory. The fundamental work of Andrei Kolmogorov [4] deserves special mention, since it introduced the concept of the entropy of a dynamical system as a metric invariant of measure-preserving transformations. The studies of Kolmogorov and his followers showed that all entropies, including Shannon's, are close to the entropies of particular classes of dynamical systems. Entropy is used to express the quantity of information in a given random object, and the information about a second random object carried by the first.

2.2 The Algorithmic Approach

The entropic approach of information theory enables us to answer the question of how much information about an object x is contained in another object y. The algorithmic approach, on the other hand, considers the question of how much information is needed to reproduce (describe) an object x. Kolmogorov showed that this task can be formulated precisely, not only for stochastic objects but also for objects presented as sequences of zeros and ones [6]. In this case, the theory of recursive functions permits a rigorous introduction of the notion of the complexity of an object. This was used by Kolmogorov as the basis for developing the algorithmic approach to the definition of information quantity. This approach is based on the theory of algorithms and presumes the availability of an a priori probability measure on a set of signals. The shortest length of a description of a word by a given method is the complexity of the word for that method. It turns out that among the algorithmic description methods there is an optimal one, which produces descriptions shorter than those of any other method, to within an additive constant. The complexity with respect to this optimal method is called the Kolmogorov complexity.

2.3 The Combinatorial Approach

In the algorithmic approach, the quantity of information contained in a sequence of zeros and ones is measured by the minimal length of the program needed to reproduce this word (sequence). However, another way of measuring the quantity of information contained in a word over zeros and ones is possible. The combinatorial approach, developed by V. Goppa [7], leads to an algebraic theory of information. The quantity of information in a sequence is determined by the extent of its asymmetry. Let X be an alphabet, and consider words of length n over this alphabet. A group of permutations is applied to these words. The logarithm of the number of permutations that translate a word into itself is called its o-information. The fewer symmetries a word has, the more o-information it contains.

2.4 Info Communications

Let us now consider the models of object interaction in the science of info communications. The term "information" originates from the Latin word informatio (explanation), which in fact presumes a sort of dialog between the sender and the receiver of information. We will illustrate the process of information transmission with the example of oral speech. It is a multi-component (vector) process. The first component is physical: for the transmission to succeed, there must be a source of sound signals (the human vocal cords), a medium for the propagation of sound-wave vibrations, and a receiver of the oscillations (the human ear). The second is the signal component: the amplitude and frequency modulation of the sound-wave oscillations. The third component is syntactic: the participants in the conversation must have at least one language in common. The fourth component is semantic: the transmitted message must contain a description of an object or phenomenon unknown to the receiver. The fifth component is pragmatic: the participants must want to transmit and to receive the information, i.e. they must have a credible motive.

3. Conclusion

A detailed analysis leads to the conclusion that the system of formalized models of info communications is convenient for entities of the most diverse natures, and that, as a branch of natural science, it deserves close attention and further elaboration. It should be noted that fundamental research in this area can develop only through the joint efforts of specialists in information processing (mathematicians, physicists, chemists, biologists, sociologists, linguists, and psychologists: representatives of both the natural sciences and the humanities). In turn, the models and methods of info communications will serve as a basic instrument for studying the development of nature and socio-cultural life, from the level of matter to the level of social groups and their basic structures.

Appendix: Original Text

The Science of Info Communications

1. Introduction

The term "info communication system" has appeared in the documents of the International Telecommunication Union as a generalization of the notions "information system" and "telecommunication system".
While a telecommunication system is intended for the transfer of electric signals of diverse types, an info communication system is intended for information transmission. The concept incorporates the telecommunication system, but also includes the software required for its operation, and all information stored therein and transmitted.

We will regard info communication science as a natural scientific discipline. Any natural discipline, like physics or chemistry, studies the structure (physical, chemical etc.) of objects and the process of interaction between these objects. In the science of info communications, the objects of research are data and the process of interaction between the objects is the data transmission process. Accordingly, the objective of info communication science is the construction of formalized models of data structure and data transmission from one object to another.

We will now briefly outline the main results of the research devoted to these problems, beginning with a discussion of data structures. In this area, most studies have been focused on the construction of formal models of data. This research has been motivated by the need to create computerized databases. Traditionally, databases are subdivided into relational and hierarchical ones, the latter being now referred to as object-oriented. As a matter of fact, a relational database is a set of tables in which tables and rows correspond to specific objects and columns to object features. An object-oriented database is a tree in which every node corresponds to an object. For instance, the root of the tree might correspond to the object named "factory", while the nodes below the root correspond to "manpower assignment table" and "range of products manufactured". The tree nodes below the manpower assignment table correspond to factory structures; the range of products will be subdivided into types of product, etc.

[Biographical note: Prof. Nikolaj A. Kuznetsov, academician of the Russian Academy of Sciences, is a renowned specialist in the theoretical foundations of information and its applications, especially to information process management. He is the director of the Institute for Information Transmission Problems, and holds the chair of telecommunications at the Moscow Physics and Technology Institute. His principal research themes are in optimal information management systems, stochastic differential equations, and desynchronized systems. Prof. Kuznetsov is the author of more than 200 technical papers and three monographs.]

It should be noted that the table form is easy to handle. For instance, it is straightforward to order a table by a value in some column and easily select parts of rows or columns. However, if the object to be described is complex and non-uniform, we may have to create hundreds or thousands of tables, which makes the links between them unmanageable. In this situation the object-oriented description should be preferred. The advantages and disadvantages of object-oriented and relational databases have been discussed in a number of publications (see for example [1, 2]).

In recent years, research devoted to databases has been developing very rapidly. Two directions are particularly noteworthy. First, there are many studies on the integration of well-structured and poorly structured data, which are targeted towards the unification of database and web technologies and are largely based on the XML language. The development of effective XML databases is oriented to handling complex-structured XML documents and the data contained therein.
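The complex-structured documents such XML databases handle can be sketched with the factory example used earlier in this section. The snippet below (a minimal illustration using Python's standard library; the element names and figures are hypothetical, not taken from the paper) shows the same data as a nested XML object tree and as a single flat relational table:

```python
import xml.etree.ElementTree as ET

# The factory example as a complex-structured XML document: nesting expresses
# the object tree that a relational schema would spread over linked tables.
# All element names and numbers are illustrative.
doc = """
<factory name="Example Works">
  <manpower_assignment>
    <department name="assembly" staff="40"/>
    <department name="testing" staff="12"/>
  </manpower_assignment>
  <range_of_products>
    <product type="motor" units_per_year="5000"/>
    <product type="pump" units_per_year="1200"/>
  </range_of_products>
</factory>
"""

root = ET.fromstring(doc)

# Navigation follows the nesting, as in an object-oriented database:
types = [p.get("type") for p in root.find("range_of_products")]

# Flattening the same data into a relational table of uniform rows:
product_table = [
    {"type": p.get("type"), "units_per_year": int(p.get("units_per_year"))}
    for p in root.iter("product")
]
```

The flat table is convenient to sort and filter, while the XML tree keeps the non-uniform parts (manpower versus products) in one document, which is the trade-off the text describes.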
Second, significant progress is being made on the development of XML-DB user interfaces and the integration of databases based on grid technologies.

After this cursory review of methods for building formal models of structure (the objects of study of the science of info communications), we must mention a little about the quantitative characteristics of information, which in fact are similar to quantitative parameters in other natural sciences (like grams or meters or watts in physics). What are the quantitative characteristics of information? Our first observation is that when considering computer databases and the quantity of information they contain, we often mention the number of binary cells in computer memory occupied by this information, i.e. quote the number of bytes, megabytes, or terabytes. However, this parameter can hardly be viewed as an objective characteristic of information, since it largely depends on the skills of the programmer who created the database. In a more abstract sense there are several approaches to the definition of information quantity.

2. The Concept of Information

The broad interest in information processes and the versatility of these processes have given rise to a number of definitions and interpretations of the notion of information. I will list four of them, which are to my mind the most accurate and effective.

(i) Information is the entity retained in computable isomorphism.
(ii) Information on a subject domain (its objects or phenomena) is a result of a homomorphic mapping (i.e. one retaining the principal relations) of elements of this subject domain to certain entities identifiable from these elements, such as signals, characteristics, or descriptions.
(iii) Information is a meaningful description of an object or a phenomenon.
(iv) Information contained in a message is an entity that determines the change of knowledge upon the receipt of the message.

Obviously, the first definition is suitable for a sufficiently formal (mathematical) description of models of real objects obtained by a developed mathematical apparatus. The second definition reflects the process of formalization of the information characteristics of an object with the help of formal signals. The third definition can be linked with the process of transmitting data on some object, phenomenon, or event. Finally, the fourth definition emphasizes the novelty of the data for the recipient of the message.

This great diversity is not only typical of the qualitative definition of the notion of information, but also exists in the definition of information quantity. With some exceptions, all prior approaches to the definition of information quantity may be divided into five kinds: (i) entropic, (ii) algorithmic, (iii) combinatorial, (iv) semantic, and (v) pragmatic. The first three kinds provide a quantitative definition of the complexity of an object or phenomenon to be described. The fourth kind describes the informative character and novelty of a message to be transferred to the recipient of the message. Finally, the fifth kind points to the usefulness of the message to be transferred to the recipient. We will consider each in turn.

2.1 The Entropic Approach

Historically, the entropic approach was the first to come to life. As far back as the 19th century, physicists introduced the notion of entropy to define the value that characterizes the processes of conversion of thermal energy into mechanical energy. To a certain extent, this value could be viewed as characterizing the measure of chaoticity (uncertainty) of molecular motion. Claude Shannon [3], probably for this reason, called the quantity of information emitted by a source its entropy.
To be more precise, the entropy, or uncertainty, is a real-valued function that depends on the probability of events and satisfies the following conditions:

(i) An event that occurs with probability 1 has zero uncertainty.
(ii) If one event has a lesser probability than another event, then the uncertainty of the first event is greater than the uncertainty of the other event.
(iii) The uncertainty of the simultaneous occurrence of two independent events is equal to the sum of their uncertainties.

It is generally assumed now that the entropic approach to the characterization of the notion of information as such, and to the introduction of its quantitative characteristics, was created by Shannon. The theory developed by Shannon allowed researchers to interpret, from an integral point of view, the various piecemeal but nonetheless important data found by his predecessors. In this connection the work of Robert Hartley should first of all be mentioned. Hartley introduced a notion that was a particular but important case of Shannon's entropy, related to an equiprobable outcome of randomly occurring events. After Shannon's paper was published in 1948, a great number of works followed that developed the entropic approach to information theory. The fundamental work of Andrei Kolmogorov [4] deserves special mention, as it introduced the concept of the entropy of a dynamical system as a metric invariant of transformations that preserve the measure. Studies by Kolmogorov and his followers showed that all entropies, including Shannon's entropy, are close to entropies of particular classes of dynamical systems. Entropy is used to express the quantity of information in a given random object and the information on a second random object carried by the first.
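The three conditions above force the uncertainty function to be logarithmic in the probability. The paper does not write the formula out; the sketch below states the standard form (base 2, measured in bits) and checks each condition numerically:

```python
import math

def uncertainty(p):
    """Uncertainty (self-information) of an event of probability p, in bits."""
    return -math.log2(p)

# (i) An event that occurs with probability 1 has zero uncertainty.
assert uncertainty(1.0) == 0.0

# (ii) A less probable event has greater uncertainty.
assert uncertainty(0.1) > uncertainty(0.5)

# (iii) For independent events, the uncertainty of their simultaneous
# occurrence (probability p * q) is the sum of their uncertainties.
p, q = 0.25, 0.5
assert math.isclose(uncertainty(p * q), uncertainty(p) + uncertainty(q))

def entropy(probabilities):
    """Shannon entropy of a source: the average uncertainty of its output."""
    return sum(p * uncertainty(p) for p in probabilities if p > 0)
```

For example, a source emitting two equiprobable symbols has entropy `entropy([0.5, 0.5]) == 1.0`, i.e. one bit per symbol, which is Hartley's equiprobable special case of Shannon's measure.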
2.2 The Algorithmic Approach

The entropic approach pursued in information theory enables us to answer the question of how much information on an object x is contained in the object y. On the other hand, the algorithmic approach considers the question of how much information is needed to reproduce (describe) the object x. Kolmogorov showed that this task could be formulated precisely, not only for stochastic objects but also for objects that are presented as sequences of zeros and ones [6]. In this case, the theory of recursive functions permits a rigorous introduction of the notion of the complexity of an object. This was used by Kolmogorov as the basis for the development of the algorithmic approach to the definition of information quantity. This approach is based on the theory of algorithms and presumes the availability of an a priori probability measure on a set of signals. We will further state that the shortest length of a description of a word is the complexity of this word for the given method f. It appears that there is an optimal method of description among the algorithmic methods (one which produces shorter descriptions than any other method, to within a constant value). The complexity with respect to this optimal method is called the Kolmogorov complexity R(w).

2.3 The Combinatorial Approach

In the algorithmic approach, the quantity of information contained in a sequence of zeros and ones is in fact measured by the minimal length of the program needed to reproduce this word (sequence). However, another measurement of the information quantity contained in a word as a sequence of zeros and ones is possible. The combinatorial approach, developed by V. Goppa [7], leads to an algebraic theory of information. The quantity of information in a sequence is determined by the extent of its asymmetry.
Let X be an alphabet and consider words of length n in this alphabet. A group of permutations is applied to the words. Then the logarithm of the number of permutations that translate a word into itself is called its o-information. The fewer symmetries a word has, the more o-information it contains.

2.4 Info Communications

Let us now consider the models of object interaction in the science of info communications. The origin of the term "information" is the Latin word informatio, or explanation, which as a matter of fact presumes a sort of dialog between the senders and the receivers of information. We will illustrate the process of information transmission with the example of oral speech.
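Returning to the combinatorial approach of section 2.3, the symmetry count in the definition can be computed directly for short binary words. This is a sketch under an assumption the text leaves open: the acting group is taken to be the full symmetric group on letter positions. Note that the quantity that grows as symmetry decreases (matching the remark "the fewer symmetries, the more o-information") is the orbit count n!/|stabilizer| rather than the stabilizer size itself, so both are shown:

```python
import math
from itertools import permutations

def stabilizer_size(word):
    """Number of position permutations that map the word to itself."""
    n = len(word)
    return sum(
        1
        for perm in permutations(range(n))
        if all(word[perm[i]] == word[i] for i in range(n))
    )

# For letter multiplicities m1, m2, ..., the stabilizer under the full
# symmetric group has size m1! * m2! * ...
# "0011" is fixed by 2! * 2! = 4 permutations; "0000" by all 4! = 24.

def o_information_literal(word):
    """The text's literal definition: log of the number of self-maps."""
    return math.log2(stabilizer_size(word))

def o_information_orbit(word):
    """Orbit-size variant, which increases as the word becomes less symmetric."""
    n = len(word)
    return math.log2(math.factorial(n) / stabilizer_size(word))
```

Under the orbit-size variant, the asymmetric word "0011" carries log2(24/4) = log2(6) bits, while the fully symmetric "0000" carries 0, illustrating how asymmetry is rewarded.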
