Slide 1: Estimation of Distribution Algorithms. Computational Intelligence Lab, University of Jinan, Yuehui Chen (yhchen), 2020-04-12.
Slide 2: Idea. The crossover and mutation operators of a genetic algorithm can destroy individuals that have already been well optimised. To avoid this, a new class of evolutionary algorithms, estimation of distribution algorithms (EDAs), was developed. An EDA has no crossover or mutation; it mainly uses a probability model of the good individuals and samples from that model to produce the next generation.
Slide 3: From GA to EDA.
Slide 4: Population-Based Incremental Learning (PBIL), Baluja 1994. Population-based search such as the GA: create a probability vector by counting the number of 1s and 0s in each gene position; generate a new population using the probability vector; no information is carried from generation to generation. Supervised competitive learning, e.g. LVQ (winner-take-all), reinforcement learning in ANNs: the winner is a kind of prototype of the sample presented. PBIL = GA + CL: capture the trend from the best performer.
Slide 5: Basic PBIL.
    P := initialise probability vector (each position = 0.5)
    while generations < limit:
        for each vector i:
            for each position j: generate v_i(j) according to P(j)
            evaluate f(v_i)
        v_max := the vector with the highest f(v_i)
        update P according to v_max
        if random(0, 1) < p_mutate: mutate P
Slide 6: Details. The population is replaced by a probability vector P = (p_1, p_2, ..., p_l), where p_i is the probability of a 1 in the i-th bit. Generate N individuals, then update P using the best individual x: p_i(t+1) = (1 - λ) p_i(t) + λ x_i, i = 1, 2, ..., l, where λ is the learning rate. Mutate P: p_i(t+1) = (1 - m) p_i(t+1) + m U(0, 1), where m is the mutation shift and U(0, 1) a uniform random number.
Slide 7: PBIL example. t = 0, P = (0.5, 0.5, 0.5, 0.5). Generate 5 individuals: 1010, 1100, 0100, 0111, 0001, with fitness (number of 1s) 2, 2, 1, 3, 1. Best individual: 0111; λ = 0.1. Update P: p_1 = 0.5 (1 - 0.1) + 0 × 0.1 = 0.45; p_2 = p_3 = p_4 = 0.5 (1 - 0.1) + 1 × 0.1 = 0.55.
Slide 8: Some applications: function optimisation, job-shop scheduling, TSP, bin packing, the knapsack problem, neural-network weight training.
Slide 9: EDA, the general framework. Estimation of distribution algorithms do just that. Typically they operate as follows. Step 0: randomly generate a set of individuals (t = 0). Step 1: evaluate the individuals. While not done: Step 2: select a subset of the individuals to be parents and develop a probability distribution (density function) p_t based on the parents. Step 3: create offspring using p_t. Step 4: evaluate the offspring. Step 5: the offspring replace the parents (t = t + 1). Step 6: go to "while".
Slide 10: Flowchart (figure).
Slide 11: What models to use. Start with a probability vector for binary strings, or a Gaussian distribution; later: dependency-tree models (COMIT) and Bayesian networks.
Slide 12: Probability-vector PMBGAs (figure).
Slide 13: EDA with a probability vector. This EDA is known as the Univariate Marginal Distribution Algorithm (UMDA). Let's try to solve the following problem: f(x) = x², where -2.0 ≤ x ≤ 2.0. Let l = 7; therefore our mapping function will be d(2, -2, 7, c) = 4 decode(c) / 127 - 2.
Slide 14: EDA with a probability vector. Randomly generate an initial population (genotype, phenotype): person1: 1001010, 0.331; person2: 0100101, -0.835; person3: 1101010, 1.339; person4: 0110110, -0.300; person5: 1001111, 0.488; person6: 0001101, -1.591.
Slide 15: EDA with a probability vector. Evaluate the population at t = 0 (genotype, phenotype, fitness): person1: 1001010, 0.331, 0.109; person2: 0100101, -0.835, 0.697; person3: 1101010, 1.339, 1.790; person4: 0110110, -0.300, 0.090; person5: 1001111, 0.488, 0.238; person6: 0001101, -1.591, 2.531.
Slide 16: EDA with a probability vector. Select the best 3 individuals (truncation selection) to be parents: the table of slide 15 with persons 2, 3 and 6 (fitness 0.697, 1.790, 2.531) chosen.
Slide 17: EDA with a probability vector. Construct a joint probability distribution function p0 given the three parents (person2: 0100101, person3: 1101010, person6: 0001101). Since our genotype only has two values, we only need to be concerned with the probability that the value of a gene is 1; the probability that the value of a gene is 0 is 1 minus the probability that it is a 1.
Slide 18: EDA with a probability vector. Thus p0: p(g0 = 1 | parent0) = 0.333, p(g1 = 1 | parent0) = 0.667, p(g2 = 1 | parent0) = 0.000, p(g3 = 1 | parent0) = 0.667, p(g4 = 1 | parent0) = 0.667, p(g5 = 1 | parent0) = 0.333, p(g6 = 1 | parent0) = 0.667.
Slide 19: Summary (table of probability-vector values and the bit strings sampled from them).
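The by-hand run of slides 13-19 fits in a few lines of code. The sketch below is a minimal, illustrative version only: it assumes a population of 6, truncation selection of the best 3, the 7-bit decoding d(c) = 4c/127 - 2 from slide 13, and re-estimates each bit's probability as the relative frequency of 1s among the parents (slide 18); the function names are mine, not the deck's.

import random

L, POP, PARENTS, GENERATIONS = 7, 6, 3, 20

def decode(bits):                      # 7-bit genotype -> x in [-2, 2]
    c = int("".join(map(str, bits)), 2)
    return 4.0 * c / 127.0 - 2.0

def fitness(bits):                     # f(x) = x^2, to be maximised
    return decode(bits) ** 2

def sample(p):                         # draw one genotype from the probability vector
    return [1 if random.random() < pi else 0 for pi in p]

p = [0.5] * L                          # slide 14: start from an uninformative vector
for _ in range(GENERATIONS):
    pop = [sample(p) for _ in range(POP)]
    pop.sort(key=fitness, reverse=True)
    parents = pop[:PARENTS]            # truncation selection (slide 16)
    # slide 18: p_i = relative frequency of 1s in bit i among the parents
    p = [sum(ind[i] for ind in parents) / PARENTS for i in range(L)]

print(decode(pop[0]), fitness(pop[0]))

With the PBIL update of slide 6 one would instead blend the old vector with the best individual, p_i := (1 - λ) p_i + λ x_i, rather than replace it with the parent frequencies.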
Slide 20: EDA, Gaussian distribution. Evaluate the offspring (genotype, phenotype, fitness): child1: 0001010, -1.685, 2.839; child2: 1101101, 1.433, 2.054; child3: 0101101, -0.583, 0.340; child4: 0001011, -1.654, 2.736; child5: 1100110, 1.213, 1.470; child6: 0100101, -0.835, 0.697.
Slide 21: EDA, Gaussian distribution. Replace the parents with the offspring. Parents: person2: 0100101, -0.835, 0.697; person3: 1101010, 1.339, 1.790; person6: 0001101, -1.591, 2.531. Offspring: child1 to child6 as on slide 20.
Slide 22: EDA, Gaussian distribution. Select the best 3 for parent set 1: person1: 0001010, -1.685, 2.839; person2: 1101101, 1.433, 2.054; person3: 0101101, -0.583, 0.340; person4: 0001011, -1.654, 2.736; person5: 1100110, 1.213, 1.470; person6: 0100101, -0.835, 0.697.
Slide 23: Gaussian distribution. Estimation of distribution algorithms, an example run by hand. Randomly generate an initial population (phenotype): person1: 0.331; person2: -0.835; person3: 1.339; person4: -0.300; person5: 0.488; person6: -1.591.
Slide 24: Gaussian distribution. Evaluate the initial population (phenotype, fitness): person1: 0.331, 0.110; person2: -0.835, 0.697; person3: 1.339, 1.793; person4: -0.300, 0.900; person5: 0.488, 0.238; person6: -1.591, 2.531.
Slide 25: Gaussian distribution. Select the best 3 of 6: persons 3, 4 and 6 (fitness 1.793, 0.900, 2.531).
Slide 26: Gaussian distribution. Calculate the joint density function (Gaussian) from person3: 1.339 (1.793), person4: -0.300 (0.900), person6: -1.591 (2.531): x_avg = -0.184, standard deviation 1.199.
Slide 27: Gaussian distribution. Create a new population of 6 (phenotype, fitness): child1: 1.015, 1.030; child2: -1.383, 1.913; child3: 0.784, 0.615; child4: 0.416, 0.173; child5: 0.116, 0.013; child6: -1.284, 1.649.
Slide 28: Gaussian distribution. Replace the parents with the children and select the best 3 of 6: person1: 1.015, 1.030; person2: -1.383, 1.913; person3: 0.784, 0.615; person4: 0.416, 0.173; person5: 0.116, 0.013; person6: -1.284, 1.649.
Slide 29: Gaussian distribution. Calculate the new joint density function from person1: 1.015 (1.030), person2: -1.383 (1.913), person6: -1.284 (1.649): x_avg = -0.551, standard deviation 1.11. (A runnable sketch of this by-hand run follows slide 42 below.)
Slide 30: What's next. Use a tree model (COMIT); cluster bits into groups (Extended Compact GA); use a Bayesian network (BOA).
Slide 31: Beyond single bits: COMIT (figure).
Slide 32: How to learn a tree model (figure).
Slide 33: Prim's algorithm. Start with a graph with no edges; add an arbitrary node to the tree; iterate: hang a new node onto the current tree, preferring edges with large mutual information (greedy approach). Complexity: O(n²).
Slide 34: Variants of PMBGAs with tree models (figure).
Slide 35: Beyond pairwise dependencies: ECGA (figure).
Slide 36: Learning the model in ECGA (figure).
Slide 37: How to compute model quality (figure).
Slide 38: Minimum description length (MDL), basic concept. The MDL principle: the best theory to explain a set of data is the one that minimises the sum of (1) the number of bits needed to describe the theory and (2) the number of bits needed to encode the data with the help of the theory. The minimum description length is also called the stochastic complexity of the given data. The MDL criterion in ECGA: we look for a reasonable and relatively small structure such that most of the training data conforms to it, encoding the non-conforming data as exceptions, and minimise the sum of (1) the bits needed to encode the group structure (it represents the hypothesis) and (2) the bits needed to encode the exceptional instances.
Slide 39: Sampling the model in ECGA. Sample groups of bits at a time, based on the observed probabilities/proportions; one can also apply population-based crossover, similar to uniform crossover but with respect to the model.
Slide 40: What's next. We saw the probability vector (no edges), tree models (some edges) and marginal product models (groups of variables); next come Bayesian networks, which can represent all of the above and more.
Slide 41: Joint probability distribution (JPD). A solution is a set of random variables with a joint probability distribution (JPD); the JPD is exponential in the number of variables and therefore not feasible to calculate in most cases; it needs simplification.
Slide 42: Factorisation of the JPD. Univariate model: no interaction, the simplest model. Bivariate model: pair-wise interaction. Multivariate model: interaction of more than two variables.
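For the continuous case, the by-hand Gaussian run of slides 23-29 follows the same pattern: keep the best 3 of 6, fit a normal density to them (mean and standard deviation as on slide 26), and sample the next 6 solutions from it. The sketch below assumes maximisation of f(x) = x² with samples clipped to [-2, 2]; the small floor on sigma is only there to keep sampling possible once the parents coincide.

import random
import statistics

POP, PARENTS, GENERATIONS = 6, 3, 20

def fitness(x):
    return x * x

pop = [random.uniform(-2.0, 2.0) for _ in range(POP)]
for _ in range(GENERATIONS):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:PARENTS]                         # truncation selection (slide 25)
    mu = statistics.mean(parents)                   # x_avg as on slide 26
    sigma = statistics.pstdev(parents) or 1e-6      # population standard deviation, as on slide 26
    pop = [max(-2.0, min(2.0, random.gauss(mu, sigma))) for _ in range(POP)]  # slide 27

pop.sort(key=fitness, reverse=True)
print(pop[0], fitness(pop[0]))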
Slide 43: Typical estimation and sampling of the JPD in EDAs. Learn the interaction between variables in the solution; learn the probabilities associated with the interacting variables; this specifies the JPD p(x); sample the JPD (i.e. the learned probabilities).
Slide 44: Probabilistic graphical models. An efficient tool to represent the factorisation of the JPD; a marriage between probability theory and graph theory; they consist of two components, structure and parameters. Two types of PGM: directed PGM (Bayesian networks) and undirected PGM (Markov random fields).
Slide 45: Directed PGM: Bayesian networks. Structure: a directed acyclic graph (DAG). Independence relationship: a variable is conditionally independent of the rest of the variables given its parents. Parameters: conditional probabilities.
Slide 46: Bayesian networks. The factorisation of the JPD encoded in terms of conditional probabilities is the JPD of the BN: P(x1, ..., xn) = Π_i P(xi | Parents(Xi)).
Slide 47: Estimating a Bayesian network. Estimate the structure and estimate the parameters; this completely specifies the JPD; the JPD can then be sampled.
Slide 48: BN-based EDAs. (1) Initialise the parent solutions. (2) Select a set from the parent solutions. (3) Estimate a BN from the selected set: estimate structure, estimate parameters. (4) Sample the BN to generate the new population. (5) Replace the parents with the new set and go to (2) until the termination criteria are satisfied.
Slide 49: How to estimate and sample a BN in EDAs. Estimating the structure: score plus search techniques, or conditional-independence tests. Estimating the parameters: trivial in EDAs since the data set is complete; estimate the probabilities of parents before children. Sampling: probabilistic logic sampling (sample parents before children).
Slide 50: BN-based EDAs are a well-established approach in EDAs: BOA, EBNA, LFDA, MIMIC, COMIT, BMDA. References: Larrañaga and Lozano 2002; Pelikan 2002.
Slide 51: Bayesian networks. Computing the full joint distribution is enormously complex, while naive Bayes is too simple; in practice we need a natural, effective way to capture and reason about uncertain knowledge. Independence and conditional independence between variables can greatly reduce the number of probabilities needed to define the full joint probability distribution.
Slide 52: Definition of a Bayesian network. It is a directed acyclic graph (DAG); the network nodes are random variables (discrete or continuous); a set of directed edges (arrows) connects pairs of nodes; each node Xi has a conditional probability distribution (table) P(Xi | Parents(Xi)) quantifying the influence of its parents on the node.
Slide 53: Other names for Bayesian networks: belief network, probability network, causal network, knowledge map, graphical model (probabilistic graphical model, PGM), decision network, influence diagram.
Slide 54: Independence and conditional independence. Weather is independent of the other three variables; given Cavity, Toothache and Catch are conditionally independent. Variables: Weather, Cavity, Catch, Toothache.
Slide 55: Independency (figure).
Slide 56: Joint probability distribution. The joint probability distribution of (X, Y, Z) (table).
Slide 57: Marginal distributions p(X), p(Y), p(Z) (table).
Slide 58: Second-order joint probability functions (table).
Slides 59-61: Conditional probability functions (tables).
Slide 62: Semantics of a Bayesian network. A Bayesian network has two readings: as a representation of the joint probability distribution (used to construct the network) and as an encoding of a collection of conditional-independence statements (used to design inference procedures). The semantics: P(x1, ..., xn) = P(x1 | Parents(X1)) ... P(xn | Parents(Xn)).
Slide 63: Bayesian network example: Burglary, Earthquake, Alarm, JohnCalls, MaryCalls.
Slide 64: Semantics of a Bayesian network, a worked example. Compute the probability that the alarm has sounded but neither a burglary nor an earthquake has occurred, and both John and Mary call. Solution: P(j ∧ m ∧ a ∧ ¬b ∧ ¬e) = P(j|a) P(m|a) P(a|¬b,¬e) P(¬b) P(¬e) = 0.9 × 0.7 × 0.001 × 0.999 × 0.998 ≈ 0.00062, i.e. about 0.062%. (A short code check follows slide 69 below.)
Slide 65: Learning Bayes nets from data. What is a Bayesian network? A directed acyclic graph: nodes are variables (discrete or continuous); arcs indicate dependence between variables and carry conditional probabilities (local distributions); missing arcs imply conditional independence. Independencies plus local distributions give a modular specification of a joint distribution.
Slide 66: Why Bayesian networks? An expressive language (finite mixture models, factor analysis, HMM, Kalman filter); an intuitive language (one can use causal knowledge when constructing models, and domain experts are comfortable building a network); general-purpose "inference" algorithms, e.g. P(BadBattery | HasGas, Won'tStart): exact inference (the modular specification leads to large computational efficiencies) and approximate inference ("loopy" belief propagation).
Slide 67: Overview. Learning probabilities (local distributions): introduction to Bayesian statistics, learning a probability, learning probabilities in a Bayes net. Learning Bayes-net structure: Bayesian model selection and averaging. Applications.
Slide 68: Maximum likelihood estimation. The maximum-likelihood idea: there are two shooters, one with hit rate 0.9 and one with hit rate 0.1; one of them fires a single shot at the target and hits it; who do we estimate fired the shot? In general, the probability of an event A depends on a parameter θ; different values of θ give different values of P(A|θ), so we write the probability of A as P(A|θ). If A has occurred, we take as our estimate the value of θ that maximises P(A|θ). This is the maximum-likelihood idea.
Slide 69: Maximum likelihood estimation. The likelihood function and the maximum likelihood estimate: L(θ) = L(x1, ..., xn; θ) = Π_i f(xi; θ) is the likelihood function of the population. Definition: if some value of θ maximises L(θ), that value is called the maximum likelihood estimate (MLE) of θ.
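The product on slide 64 is easy to check numerically. The snippet below simply multiplies the five conditional probabilities quoted on the slide; the variable names are mine.

# Worked example of slide 64: the classic burglary/earthquake alarm network.
# A full-joint entry factorises into each node's probability given its parents.
p_j_given_a   = 0.9     # P(JohnCalls = true | Alarm = true)
p_m_given_a   = 0.7     # P(MaryCalls = true | Alarm = true)
p_a_given_nbe = 0.001   # P(Alarm = true | no burglary, no earthquake)
p_not_b       = 0.999   # P(Burglary = false)
p_not_e       = 0.998   # P(Earthquake = false)

p = p_j_given_a * p_m_given_a * p_a_given_nbe * p_not_b * p_not_e
print(p)   # ~0.00062, i.e. about 0.062%, as on the slide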
Slide 70: Steps for finding the maximum likelihood estimate. (1) Form the likelihood function L(θ). (2) Take the log-likelihood ln L(θ). (3) Set up the likelihood equation d ln L(θ) / dθ = 0; if the equation has a solution, that solution is the MLE.
Slide 71: Example 1. Let x1, ..., xn be a sample from a Poisson population with parameter λ; find the MLE of λ (the derivation on the slide gives the sample mean as the estimate).
Slides 72-73: Example 2. Let x1, ..., xn be a sample from the population specified on the slide; find the maximum likelihood estimates of its parameters (derivation on the slides).
Slide 74: Criteria for evaluating estimators: unbiasedness, efficiency and consistency. Unbiasedness: E(θ̂) = θ. Efficiency: the estimator with the smaller variance D(θ̂) is more efficient. Consistency: as the sample size tends to infinity, θ̂ converges to θ in probability.
Slide 75: Bayesian estimation: maximum a posteriori. Use a sample set K = {x1, x2, ..., xN} to estimate the unknown parameter θ. The unknown parameter θ is treated as a random variable with prior distribution p(θ); the posterior given the sample set K is p(θ | K). The maximum a posteriori (MAP) estimate maximises p(θ | K).
Slide 76: Bayesian (minimum-risk) estimation. The conditional risk of a parameter estimate is the expected loss of the estimator given x; the risk of the estimate is the expectation of this conditional risk; the Bayes estimate is the one that minimises the risk.
Slide 77: Bayesian (minimum-risk) estimation. The loss function is defined as the squared error. Theorem: if the loss function is the squared-error function, then the Bayes estimate is the posterior mean, θ* = E[θ | K].
Slide 78: Steps of Bayesian estimation. Determine the prior distribution p(θ); from the sample set K = {x1, ..., xN} obtain the joint sample distribution p(K | θ); compute the posterior distribution of θ; compute the Bayes estimate.
Slide 79: Worked example for a univariate normal distribution. The population density is normal with unknown mean μ; the prior of μ is also normal; use the Bayesian estimation method to find the estimate of μ from the sample set K = {x1, x2, ..., xN}.
Slide 80: Compute the posterior distribution of μ, then compute the Bayes estimate of μ (derivation on the slide).
Slide 81: Learning probabilities: the classical approach. Simple case: flipping a thumbtack. The true probability q is unknown; given i.i.d. data, estimate q using an estimator with good properties: low bias, low variance, consistent (e.g. the ML estimate). i.i.d. = independent and identically distributed.
Slide 82: Learning probabilities: the Bayesian approach. The true probability q is unknown; we place a Bayesian probability density on q. We observe m i.i.d. toss results D = {x[1], x[2], ..., x[m]} and want to estimate q.
Slide 83: Bayesian approach. Use Bayes' rule to compute a new density for q given the data: posterior ∝ prior × likelihood, i.e. p(q | D) = p(q) p(D | q) / p(D).
Slide 84: The likelihood is a binomial distribution: p(D | q) = q^h (1 - q)^t, where h and t are the numbers of heads and tails in D.
Slide 85: Example: application of Bayes' rule to the observation of a single "heads" (figure).
Slide 86: The probability of heads on the next toss. How good is a particular q? The likelihood indicates how likely the observed data is to be generated under it (ML = maximum likelihood). (A small numerical sketch follows slide 101 below.)
Slide 87: Overview. Learning probabilities: introduction to Bayesian statistics, learning a probability, learning probabilities in a Bayes net. Learning Bayes-net structure: Bayesian model selection and averaging. Applications.
Slide 88: From thumbtacks to Bayes nets. The thumbtack problem can be viewed as learning the probability for a very simple BN: a single node X with values heads/tails.
Slides 89-92: The next simplest Bayes net (figures); slide 91 introduces parameter independence.
Slide 93: A bit more difficult: three probabilities to learn: q_{X=heads}, q_{Y=heads|X=heads} and q_{Y=heads|X=tails}.
Slides 94-95: A bit more difficult (figures).
Slide 96: A bit more difficult. q_X with cases x[1], x[2]; q_{Y|X=heads} with cases y[1], y[2] (case 1, case 2); q_{Y|X=tails}; heads/tails: three separate thumbtack-like problems.
Slide 97: In general, learning probabilities in a BN is straightforward if the likelihoods come from the exponential family (multinomial, Poisson, gamma, ...), the parameters are independent, the priors are conjugate and the data are complete.
Slide 98: Incomplete data makes parameters dependent: q_X (x[1]), q_{Y|X=heads} (y[1]), q_{Y|X=tails} (figure).
Slide 99: Incomplete data. Incomplete data makes parameters dependent. Parameter learning for incomplete data: Monte-Carlo integration (investigate properties of the posterior and perform prediction); large-sample approximations (Laplace/Gaussian approximation, using the expectation-maximisation (EM) algorithm and inference to compute the mean and variance); variational methods.
Slide 100: Overview (as on slide 87).
Slide 101: Difficulties. If the network structure is unknown, constructing the Bayesian model is very hard: with n attributes, the number of possible structures is at least exponential in n, and searching such a huge space for a reasonable network structure is very time-consuming, so candidate structures must be compared with scoring criteria. Common criteria are MDL (minimum description length), BIC (Bayesian information criterion) and BE (Bayesian likelihood equivalent). MDL and BIC are both grounded in information theory; their core idea is to construct short code lengths. The BE score applies when the parameters follow particular distributions; its best-known variants are BDe (Bayesian Dirichlet likelihood equivalent) and BGe (Bayesian Gaussian likelihood equivalent). Common search algorithms are greedy search, simulated annealing and best-first search. Heckerman et al. (1995) pointed out that, weighing search quality against computational cost, greedy search is the best strategy; since greedy search usually reaches only a local optimum, one remedy is to restart it from several random initial solutions.
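As a numerical companion to the thumbtack example of slides 81-86 and the estimators discussed on slides 68-77, here is a hedged sketch under the standard conjugate assumptions: a Beta(a, b) prior on q, with h heads and t tails observed. The closed forms below (posterior Beta(a+h, b+t), MAP, posterior mean, predictive probability of heads) are textbook results, not formulas copied from the slides.

def thumbtack_estimates(h, t, a=1.0, b=1.0):
    m = h + t
    ml = h / m if m else None                  # maximum-likelihood estimate (slide 81)
    post_a, post_b = a + h, b + t              # Bayes' rule: Beta prior x binomial likelihood
    map_q = ((post_a - 1) / (post_a + post_b - 2)
             if post_a + post_b > 2 else None)  # MAP estimate (slide 75)
    mean_q = post_a / (post_a + post_b)         # posterior mean = Bayes estimate under squared loss (slide 77)
    next_heads = mean_q                         # P(heads on the next toss), slide 86
    return ml, map_q, mean_q, next_heads

print(thumbtack_estimates(h=3, t=7))            # e.g. 3 heads, 7 tails observed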
Slide 102: Two types of methods for learning BNs. Constraint-based: find a Bayesian-network structure whose implied independence constraints "match" those found in the data. Scoring methods (Bayesian, MDL, MML): find the Bayesian-network structure that can represent distributions that "match" the data (i.e. could have generated the data).
Slide 103: Learning Bayes-net structure. Given data, which model is correct? Model 1 and model 2 are two candidate graphs over X and Y (figure).
Slide 104: Bayesian approach. Given data D, which model is correct (more likely)? (figure: model 1, model 2, data D).
Slide 105: Bayesian approach, model averaging. Given data D, which model is correct (more likely)? Average the predictions over the models, weighted by their posterior probabilities.
Slide 106: Bayesian approach, model selection. Given data D, which model is correct (more likely)? Keep the best model: good for explanation, understanding and tractability.
Slide 107: To score a model, use Bayes' rule. Given data D, the model score is the posterior p(m | D) ∝ p(m) p(D | m), where p(D | m) is the marginal likelihood, the likelihood integrated over the model's parameters. (A small numerical example follows slide 120 below.)
Slides 108-109: Occam's razor (figures).
Slide 110: Computation of the marginal likelihood. There is an efficient closed form if the likelihoods come from the exponential family (binomial, Poisson, gamma, ...), the parameters are independent, the priors are conjugate and there is no missing data (including no hidden variables); otherwise use approximations: Monte-Carlo integration, large-sample approximations, variational methods.
Slide 111: Practical considerations. The number of possible BN structures is super-exponential in the number of variables; how do we find the best graph(s)?
Slide 112: Model search. Finding the BN structure with the highest score among those structures with at most k parents is NP-hard for k > 1 (Chickering, 1995). Heuristic methods: greedy search, greedy search with restarts, MCMC methods.
Slide 113: Learning the correct model. Let G be the true graph and P the generative distribution. Markov assumption: P satisfies the independencies implied by G. Faithfulness assumption: P satisfies only the independencies implied by G. Theorem: under the Markov and faithfulness assumptions, with enough data generated from P one can recover G (up to equivalence), even with the greedy method.
Slide 114: Learning Bayes nets from data: data plus prior/expert information go into a Bayes-net learner, which outputs Bayes net(s) (diagram).
Slide 115: The K2 algorithm. K2 is an algorithm for constructing a Bayes network from a database of records: "A Bayesian method for the induction of probabilistic networks from data", Gregory F. Cooper and Edward Herskovits, Machine Learning 9, 1992. The basic idea of K2: first define a scoring function that measures how good a network structure is; then, starting from an empty network and following a predetermined node ordering, choose for each node the parents that maximise the posterior structure probability, traversing all nodes in turn and incrementally adding the best parents for each variable.
Slide 116: Basic model. The problem: to find the most probable Bayes-network structure given a database. D: a database of cases; Z: the set of variables represented by D; Bsi, Bsj: two Bayes-network structures containing exactly those variables that are in Z.
Slide 117: Basic model. By computing such ratios for pairs of Bayes-network structures, we can rank-order a set of structures by their posterior probabilities. Based on four assumptions, the paper introduces an efficient formula for computing P(Bs, D). Let Bs represent an arbitrary Bayes-network structure containing just the variables in D.
Slide 118: Computing P(Bs, D). Assumption 1: the database variables, which we denote as Z, are discrete. Assumption 2: cases occur independently, given a Bayes-network model. Assumption 3: there are no cases that have variables with missing values. Assumption 4: the density function f(Bp | Bs) is uniform, where Bp is a vector whose values denote the conditional-probability assignments associated with structure Bs.
Slide 119: Computing P(Bs, D), notation. D: the data set; it has m cases (records). Z: a set of n discrete variables x1, ..., xn. ri: a variable xi in Z has ri possible value assignments. Bs: a Bayes-network structure containing just the variables in Z. πi: each variable xi in Bs has a set of parents, which we represent with a list of variables πi. qi: there are qi unique instantiations of πi. wij: the j-th unique instantiation of πi relative to D. Nijk: the number of cases in D in which variable xi has the value vik and πi is instantiated as wij. Nij = Σk Nijk.
Slide 120: Decreasing the computational complexity. Three more assumptions reduce the computational complexity to polynomial time: there is an ordering on the nodes such that if xi precedes xj, then we do not allow structures in which there is an arc from xj to xi; there exists a sufficiently tight limit on the number of parents of any node; P(πi → xi) and P(πj → xj) are independent when i ≠ j.
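To make the model scoring of slides 103-107 concrete, the sketch below compares the two candidate structures over two binary variables (no arc versus X -> Y) by their marginal likelihoods, assuming uniform Beta(1, 1) priors, parameter independence and complete data, so that each factor is a closed-form "thumbtack" integral (the three separate thumbtack-like problems of slide 96). The priors and the toy data are my own assumptions, not taken from the slides.

from math import lgamma

def log_ml(h, t):
    # Log marginal likelihood of h "heads" and t "tails" under a uniform Beta(1, 1) prior:
    # integral of q^h (1-q)^t dq = h! t! / (h+t+1)!
    return lgamma(h + 1) + lgamma(t + 1) - lgamma(h + t + 2)

def score_models(pairs):
    # pairs: list of (x, y) observations with x, y in {0, 1}
    hx = sum(x for x, _ in pairs); tx = len(pairs) - hx
    hy = sum(y for _, y in pairs); ty = len(pairs) - hy
    # Model 1: X and Y unconnected -> two independent thumbtack problems
    log_m1 = log_ml(hx, tx) + log_ml(hy, ty)
    # Model 2: X -> Y -> one thumbtack for X and one per observed value of X for Y
    hy0 = sum(y for x, y in pairs if x == 0); ty0 = sum(1 for x, _ in pairs if x == 0) - hy0
    hy1 = sum(y for x, y in pairs if x == 1); ty1 = sum(1 for x, _ in pairs if x == 1) - hy1
    log_m2 = log_ml(hx, tx) + log_ml(hy0, ty0) + log_ml(hy1, ty1)
    return log_m1, log_m2

# Toy data in which Y tends to copy X: the arc model wins.
data = [(0, 0)] * 8 + [(1, 1)] * 8 + [(0, 1), (1, 0)]
m1, m2 = score_models(data)
print(m2 - m1)   # log Bayes factor; positive values favour model 2 (X -> Y)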
Slide 121: K2 algorithm: a heuristic search method. Use the following function: g(i, πi) = Π_{j=1..qi} [ (ri - 1)! / (Nij + ri - 1)! ] Π_{k=1..ri} Nijk!, where the Nijk are relative to πi being the parents of xi and relative to a database D. Pred(xi) = {x1, ..., x(i-1)}: it returns the set of nodes that precede xi in the node ordering.
Slide 122: K2 algorithm: a heuristic search method. Input: a set of n nodes, an ordering on the nodes, an upper bound u on the number of parents a node may have, and a database D containing m cases. Output: for each node, a printout of the parents of the node.
Slide 123: K2 algorithm: a heuristic search method ...
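The remaining K2 slides are cut off in this preview, but the algorithm they describe is the greedy search of Cooper and Herskovits (1992). The sketch below is a hedged reconstruction from that paper rather than a transcription of the slides: it scores each candidate parent set with log g(i, πi) (using log-gamma so the factorials do not overflow) and, following the node ordering, keeps adding the single best predecessor while the score improves, up to u parents. The data format (a list of dicts mapping variable names to discrete values) is an assumption of mine.

from collections import Counter, defaultdict
from math import lgamma

def log_g(data, child, parents, r):
    # log g(i, pi_i) = sum_j [ lnGamma(r_i) - lnGamma(N_ij + r_i) + sum_k lnGamma(N_ijk + 1) ]
    counts = defaultdict(Counter)                 # parent instantiation w_ij -> Counter over child values
    for case in data:
        w = tuple(case[p] for p in parents)
        counts[w][case[child]] += 1
    score = 0.0
    for cnt in counts.values():
        n_ij = sum(cnt.values())
        score += lgamma(r[child]) - lgamma(n_ij + r[child])
        score += sum(lgamma(n_ijk + 1) for n_ijk in cnt.values())
    return score

def k2(data, order, u):
    # r_i approximated as the number of distinct values seen in the data for each variable
    r = {x: len({case[x] for case in data}) for x in order}
    structure = {}
    for i, x in enumerate(order):
        parents, ok = [], True
        p_old = log_g(data, x, parents, r)
        while ok and len(parents) < u:
            preds = [z for z in order[:i] if z not in parents]   # Pred(x_i) minus current parents
            if not preds:
                break
            best = max(preds, key=lambda z: log_g(data, x, parents + [z], r))
            p_new = log_g(data, x, parents + [best], r)
            if p_new > p_old:
                p_old, parents = p_new, parents + [best]
            else:
                ok = False
        structure[x] = parents
    return structure

# e.g. k2(cases, order=["x1", "x2", "x3"], u=2) returns a dict of parent lists per node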
