外文翻译--使用ASP.NET方法减少网络爬虫故障.doc (Translated paper: "An Approach to Reduce Web Crawler Traffic Using ASP.NET")


英文原文 (English original):

An Approach to Reduce Web Crawler Traffic Using ASP.NET

Nowadays search engines transfer web data from one place to another. They work on a client-server architecture in which a central server manages all the information. A web crawler is a program that extracts information from the web and sends it to the search engine for further processing. It is found that the largest share of traffic, approximately 40.1%, is due to web crawlers. The proposed scheme shows how a web crawler can reduce this traffic using a dynamic web page and an HTTP GET request in ASP.NET.

I. INTRODUCTION

All search engines have powerful crawlers that visit the internet from time to time to extract useful information. The retrieved pages are indexed and stored in a database, as shown in Figure 1. The internet is in effect a directed graph, with web pages as nodes and hyperlinks as edges, so the search operation can be abstracted as a traversal of a directed graph. By following the linked structure of the web, we can reach a number of new pages from a set of starting pages. Web crawlers are designed to retrieve web pages and add their representations to a local repository/database. A crawler typically updates its information once a week; sometimes it updates monthly or even quarterly. Crawlers therefore cannot provide up-to-date versions of frequently updated pages. To catch frequent updates without putting a large burden on the content provider, we believe that retrieving and processing data near the data source is inevitable. Currently more than one search engine is available in the market. The resulting increase in the complexity of web traffic requires that we base our model on the notion of web requests rather than web pages. Web crawlers are software systems that use the text and links on web pages to create search indexes of those pages, following HTML links to "crawl" the connections between pages.

Figure 1: Architecture of a web search engine.

The WWW is a hyperlinked repository of trillions of hypertext documents [9] residing on different websites. World Wide Web traffic continues to increase and is now estimated to be more than 70 percent of the total traffic on the internet.

A. Basic Crawling Terminology

We need some basic web crawler terminology, which plays an important role in the implementation of a crawler.

Seed page: Crawling means traversing the web recursively, starting from a URL picked from a set of URLs. A starting URL is the entry point from which a crawler begins its search procedure; this set of URLs is known as the seed pages.

Frontier: The crawling procedure starts with a given URL, extracts the links from it, and adds them to an unvisited list of URLs. This unvisited list is known as the frontier. The frontier is implemented as a queue.

Parser: Parsing may involve simple hyperlink/URL extraction, or it may involve the more complex process of tidying up the HTML content in order to analyze the HTML tag tree. The job of any parser is to parse the fetched pages, extract the list of new URLs from them, and return the new unvisited URLs to the frontier.

The basic algorithm of a web crawler is given below:

Start
    Read a URL from the seed URLs.
    Check whether the document has already been downloaded.
    If the document is already downloaded:
        Break.
    Else:
        Add it to the frontier.
    Pick a URL from the frontier and extract the new links from it.
    Add all newly found URLs to the frontier.
    Continue.
End

The main function of a crawler is to add new links to the frontier and to select a new URL from it.

II. RELATED WORK

Many researchers have worked on reducing web crawler traffic in the following areas. One approach uses dynamic web pages with an HTTP GET request carrying a last-visit parameter. Another uses active networks to eliminate unnecessary crawler traffic. One author proposed an approach that uses a bandwidth-control system to reduce web crawler traffic over the internet. Another places a mobile crawler at the web server: the crawler checks for updates on the website and sends them to the search engine for indexing. Yet another designs a new web crawler using VB.NET technology.

III. PERFORMANCE METRICS

In the implementation of the web crawler we have made some assumptions to simplify the algorithm, the implementation, and the results:
- Remove a URL from the URL list.
- Determine the protocol of the underlying host (e.g. HTTP or FTP).
- Download the corresponding document.
- Extract any links contained in it.
- Add these links back to the URL list.

IV. SIMULATOR

The simulator has been designed to study the behavior patterns of different crawling algorithms on the same set of URLs. We designed a crawler using VB.NET and ASP.NET (Windows-application project type). Our crawler can work both globally and locally, i.e. it can produce results on an intranet or on the internet. It takes a URL in a form like http://www.yahoo.com and a location or name for saving the crawling results in an MS Access database.

Figure 2: Snapshot of the web crawler.

The snapshot shows the user interface of the web crawler running on either an intranet or the internet. To obtain crawler results we use a test website. At each simulation step, the scheduler chooses the topmost website from the queue of websites and sends the site's information to a module that simulates downloading pages from that website. The simulator applies the crawling policies and saves the collected and downloaded data in an MS Access database table with several data fields. The crawling result is presented as a table of rows and columns; the output of the crawler is shown as a snapshot.

Figure 3: Snapshot of the crawled-result database.

In this work I observed that when we crawled a website, the crawler downloaded all of its pages. When I crawled the same site a second time, the crawler crawled all the pages again, even though the site had updated only its dynamic pages and rarely its static pages. To reduce crawler traffic, we propose using a dynamic web page to inform the web crawler about new pages and updates on the website.

In the experiment we use a website of 7 web pages. The website is deployed on ASP.NET using C#, the dynamic web page is coded in C#, and the web crawler is coded in VB.NET. The LAST_VISIT parameter passed is the system time in milliseconds, as returned by C#; the millisecond times are maintained in an Update data structure. First we crawl the website using the old approach, then using the proposed approach. The results obtained are shown in Table 1. To test the proposed approach we direct the web crawler to the dynamic web page (dynamic.aspx), set the last-visit time in the URL, and perform the crawl.

Test 1: Update the times and URLs of the pages index, branch, and person in the Update data structure; at the web crawler, set the LAST_VISIT time earlier than the times of those pages in the Update. Crawling was performed; the results obtained are shown in Table 2.

Test 2: Update the time and URL of the page about in the Update data structure; at the web crawler, set the LAST_VISIT time earlier than the times of the pages in the Update. Crawling was performed; the results obtained are shown in Table 3.

Test 3: Update the times and URLs of the pages service and query in the Update data structure; at the web crawler, set the LAST_VISIT time earlier than the times of the pages in the Update. Crawling was performed; the results obtained are shown in Table 4.

Normal crawling is a time-consuming process because the crawler visits every web page to learn about all updated information on the website. In normal crawling it visits a total of 7 pages and takes 1385 milliseconds to visit the complete site. In the proposed approach the crawler visits the dynamic update page and the updated web pages only. The crawler takes about 500 milliseconds when there are 3 updates and about 450 milliseconds when there are two. When there are three updates on the experimental website, the proposed scheme is 4.83 times faster than the old approach; with two updates it is 7.03 times faster. Graph 1 shows the time taken by the web crawler to download updates. In normal crawling the crawler visits 7 pages to find the updates, but the number of page visits is very small in the proposed approach: with one update the crawler visits only 2 pages, with 2 updates only 3 pages, and with 3 updates 4 pages.

V. CONCLUSION

With this approach the crawler finds new updates on the web server using a dynamic web page. Using this crawler you can send queries with the requested URLs and greatly reduce crawler traffic over the internet. It is found that approximately 40.1% of traffic is due to web crawlers, so this method can cut web crawler traffic by 50%, i.e. half of the web crawler traffic, or about 20% of all traffic over the internet. Future work will reduce crawler traffic using the PageRank method and parameters such as the Last-Modified parameter, which gives the modification date and time of a fetched page and can be used by the crawler to fetch only fresh pages from websites.

In high-level terms, the MVC pattern means that an MVC application is split into at least three pieces:
- Models, which contain or represent the data that users work with. These can be simple view models, which just represent data being transferred between views and controllers, or they can be domain models, which contain the data in a business domain as well as the operations, transformations, and rules for manipulating that data.
- Views, which are used to render some part of the model as a UI.
- Controllers, which process incoming requests, perform operations on the model, and select views to render to the user.

Models are the definition of the universe your application works in. In a banking application, for example, the model represents everything in the bank that the application supports, such as accounts, the general ledger, and credit limits for customers, as well as the operations that can be used to manipulate the data in the model, such as depositing funds and making withdrawals from accounts. The model is also responsible for preserving the overall state and consistency of the data: for example, making sure that all transactions are added to the ledger, and that a client doesn't withdraw more money than he is entitled to, or more money than the bank has. Models are also defined by what they are not responsible for: models don't deal with rendering UIs or processing requests; those are the responsibilities of views and controllers. Views contain the logic required to display elements of the model to the user, and nothing more. They have no direct awareness of the model and do not communicate with it directly in any way. Controllers are the glue between views and the model: requests come in from the client and are serviced by the controller, which selects an appropriate view to show the user and, if required, an appropriate operation to perform on the model.

Each piece of the MVC architecture is well defined and self-contained, which is referred to as the separation of concerns. The logic that manipulates the data in the model is contained only in the model, the logic that displays data is only in the view, and the code that handles user requests and input is contained only in the controller. With a clear division between the pieces, your application will be easier to maintain and extend over its lifetime, no matter how large it becomes.
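The basic crawling algorithm of Section I.A (seed URLs, a frontier queue, a parser feeding new links back) can be sketched as follows. The paper's crawler is written in VB.NET; this is an illustrative Python sketch in which the `LINKS` dictionary stands in for real page downloading and HTML parsing, and all URLs are made up:

```python
from collections import deque

# Stand-in for fetching a page and parsing out its hyperlinks.
# In a real crawler this would be an HTTP download plus HTML parsing.
LINKS = {
    "http://site/index":  ["http://site/about", "http://site/branch"],
    "http://site/about":  ["http://site/person"],
    "http://site/branch": ["http://site/index"],   # cycle back to the seed
    "http://site/person": [],
}

def crawl(seeds):
    """Traverse the link graph from the seed URLs using a frontier queue."""
    frontier = deque(seeds)     # unvisited URLs (the "frontier")
    downloaded = set()          # documents already downloaded
    order = []                  # visit order, for inspection
    while frontier:
        url = frontier.popleft()
        if url in downloaded:   # "check whether already downloaded"
            continue
        downloaded.add(url)
        order.append(url)
        # The parser returns new, unvisited URLs back to the frontier.
        for link in LINKS.get(url, []):
            if link not in downloaded:
                frontier.append(link)
    return order

print(crawl(["http://site/index"]))
```

The `downloaded` set is what implements the "check whether the document is already downloaded" step and also keeps the traversal from looping on link cycles such as branch → index.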
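The proposed LAST_VISIT scheme of Section IV can be sketched the same way. The paper implements the dynamic page as dynamic.aspx in C# and the crawler in VB.NET; the Python below is only an approximation in which `UPDATE` plays the role of the paper's Update data structure, and the page names and millisecond timestamps are invented for illustration:

```python
# Server side: maps each page to its last-update time in milliseconds
# (a stand-in for the paper's Update data structure; values are invented).
UPDATE = {
    "index.aspx": 1_000, "branch.aspx": 1_200, "person.aspx": 1_500,
    "about.aspx": 2_000, "service.aspx": 2_400, "query.aspx": 2_600,
    "home.aspx": 500,
}

def dynamic_page(last_visit_ms):
    """What the dynamic update page returns: pages updated after LAST_VISIT."""
    return sorted(url for url, t in UPDATE.items() if t > last_visit_ms)

def crawl_updates(last_visit_ms):
    """Crawler side: one GET to the dynamic page, then only changed pages,
    instead of re-downloading the whole site."""
    changed = dynamic_page(last_visit_ms)
    pages_visited = 1 + len(changed)   # dynamic page + each updated page
    return changed, pages_visited

# One update since LAST_VISIT: the crawler visits only 2 pages,
# matching the visit counts reported in Section IV.
changed, visited = crawl_updates(2_500)
print(changed, visited)
```

The saving comes entirely from the server answering "what changed since time t?" in a single request, so the crawler's page visits grow with the number of updates rather than with the size of the site.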
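The MVC division described above can be illustrated with a minimal, framework-free sketch. This is plain Python rather than ASP.NET MVC; the banking names mirror the example in the text, but the API itself is invented:

```python
class AccountModel:
    """Model: owns the data and the business rules (no UI, no requests)."""
    def __init__(self, balance):
        self.balance = balance
        self.ledger = []                 # every transaction is recorded

    def withdraw(self, amount):
        if amount > self.balance:        # consistency rule: no overdraft
            raise ValueError("insufficient funds")
        self.balance -= amount
        self.ledger.append(("withdraw", amount))

def balance_view(balance):
    """View: renders part of the model as UI text, nothing more."""
    return f"Balance: {balance}"

class AccountController:
    """Controller: services requests, operates on the model, picks a view."""
    def __init__(self, model):
        self.model = model

    def handle(self, request):
        if request["action"] == "withdraw":
            self.model.withdraw(request["amount"])
        return balance_view(self.model.balance)

account = AccountModel(balance=100)
controller = AccountController(account)
print(controller.handle({"action": "withdraw", "amount": 30}))  # Balance: 70
```

Note the separation of concerns: the overdraft rule and the ledger live only in the model, the formatting lives only in the view, and only the controller touches both.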
