




English Original

An Approach to Reduce Web Crawler Traffic Using Asp.Net

Nowadays search engines transfer web data from one place to another. They work on a client-server architecture in which a central server manages all the information. A web crawler is a program that extracts information from the web and sends it to the search engine for further processing. It is found that the maximum share of traffic (approximately 40.1%) is due to web crawlers. The proposed scheme shows how a web crawler can reduce this traffic using a dynamic web page and HTTP GET.

I. INTRODUCTION

All search engines have powerful crawlers that visit the internet from time to time to extract useful information. The retrieved pages are indexed and stored in a database, as shown in Figure 1. The Internet is in fact a directed graph, with a web page as a node and a hyperlink as an edge, so the search operation can be abstracted as a traversal of a directed graph. By following the linked structure of the web, we can reach a number of new pages starting from the starting web pages. Web crawlers are designed to retrieve web pages and add their representations to a local repository/database. Crawlers update their information once a week; sometimes they update monthly or even quarterly, so they cannot provide up-to-date versions of frequently updated pages. To catch frequent updates without putting a large burden on content providers, we believe retrieving and processing data near the data source is inevitable. Currently more than one search engine is available in the market. The resulting increase in the complexity of web traffic has required that we base our model on the notion of web requests rather than web pages. Web crawlers are software systems that use the text and links on web pages to create search indexes of the pages, following HTML links to crawl the connections between pages.

Figure 1. Architecture of a web search engine.

The WWW is a hyperlinked repository of trillions of hypertext documents [9] lying on different websites. World Wide Web (Web) traffic continues to increase and is now estimated to be more than 70 percent of the total traffic on the Internet.

A. Basic Crawling Terminology

We need to know some basic terminology of web crawlers, which plays an important role in the implementation of a web crawler.

Seed page: Crawling means traversing the web recursively, starting from URLs picked from a set of starting URLs. A starting URL is the entry point from which a crawler begins its search procedure. This set of URLs is known as the seed pages.

Frontier: The crawling procedure starts with a given URL, extracting the links from it and adding them to a list of unvisited URLs. This unvisited list is known as the frontier. The frontier is implemented as a queue.

Parser: Parsing may imply simple hyperlink/URL extraction, or it may involve the more complex process of tidying up the HTML content in order to analyze the HTML tag tree. The job of any parser is to parse the fetched pages, extract the list of new URLs from them, and return the new unvisited URLs to the frontier.

The basic algorithm of a web crawler is given below (a code sketch follows the steps):

1. Start.
2. Read a URL from the set of seed URLs.
3. Check whether the document has already been downloaded.
4. If the document has already been downloaded, skip it; otherwise add it to the frontier.
5. Pick a URL from the frontier and extract the new links from it.
6. Add all newly found URLs to the frontier.
7. Continue until the frontier is empty.
8. End.

The main function of a crawler is thus to add new links to the frontier and to select the next URL to visit.
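The loop above maps directly onto a queue-based implementation. The following is a minimal C# sketch of that loop under stated assumptions: the seed URL, the regular-expression link extraction, and the in-memory frontier are illustrative choices, not the paper's VB.NET code.

```csharp
// Minimal sketch of the basic crawling loop described above (assumed names,
// not the paper's implementation). A queue serves as the frontier and a
// HashSet records which URLs have already been downloaded.
using System;
using System.Collections.Generic;
using System.Net;
using System.Text.RegularExpressions;

class BasicCrawler
{
    static void Main()
    {
        var frontier = new Queue<string>();          // unvisited URLs
        var downloaded = new HashSet<string>();      // already-fetched URLs
        frontier.Enqueue("http://example.com/");     // seed URL (assumption)

        using (var client = new WebClient())
        {
            while (frontier.Count > 0)
            {
                string url = frontier.Dequeue();
                if (!downloaded.Add(url))            // already downloaded: skip it
                    continue;

                string html;
                try { html = client.DownloadString(url); }
                catch (WebException) { continue; }   // ignore unreachable pages

                // Parser step: extract href links and feed new URLs to the frontier.
                foreach (Match m in Regex.Matches(html, "href=\"(http[^\"]+)\""))
                {
                    string link = m.Groups[1].Value;
                    if (!downloaded.Contains(link))
                        frontier.Enqueue(link);
                }
            }
        }
        Console.WriteLine("Crawled {0} pages.", downloaded.Count);
    }
}
```

A production crawler would add politeness delays and URL normalization, but the queue-plus-visited-set structure is the part the algorithm above relies on.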
II. RELATED WORK

To reduce web crawler traffic, many researchers have carried out work in the following areas. One approach uses dynamic web pages together with an HTTP GET request carrying a last-visit parameter. Another uses an active network to eliminate unnecessary crawler traffic. A further approach uses a bandwidth control system to reduce web crawler traffic over the internet. Another places a mobile crawler at the web server: the crawler checks for updates on the website and sends them to the search engine for indexing. Finally, a new web crawler has been designed using VB.NET technology.

III. PERFORMANCE METRICS

In the implementation of the web crawler we have made some assumptions simply to simplify the algorithm, the implementation, and the results. The crawler repeats the following steps:

1. Remove a URL from the URL list.
2. Determine the protocol of the underlying host, such as HTTP or FTP.
3. Download the corresponding document.
4. Extract any links contained in it.
5. Add these links back to the URL list.

IV. SIMULATOR

The simulator has been designed to study the behaviour of different crawling algorithms on the same set of URLs. We designed a crawler using VB.NET and an ASP.NET Windows-application project type. Our crawler can work both globally and locally, meaning it can produce results on an intranet as well as on the internet. It takes a URL as input and a location or name under which the crawling result data is saved in an MS Access database.

Figure 2. Snapshot of the Web Crawler.

The snapshot shows the user interface of the web crawler running on either an intranet or the internet. To obtain crawler results we use a test website. At each simulation step, the scheduler chooses the topmost website from the queue of websites and sends this site's information to a module that simulates downloading pages from the website. For this simulator we apply the crawling policies and save the collected or downloaded data in an MS Access database table with a few data fields. Crawling result: the crawling result is presented as a table, depicting the result in rows and columns; the output of the crawler is shown as a snapshot.

Figure 3. Snapshot of the Crawled Result Database.

In this proposed work I observed that when we crawled the website, the crawler downloaded all the pages of the website. The second time I crawled the same site, the crawler crawled all the pages again, although the site had updated only its dynamic pages and rarely its static pages. To reduce the crawler traffic, we propose the use of a dynamic web page to inform the web crawler about new pages and updates on the website. In the experiment we use a website of 7 web pages. The website is deployed on ASP.NET using the C# language, the dynamic web page is coded in C#, and the web crawler is coded in VB.NET. The LAST_VISIT parameter passed is the millisecond system time returned by C#; the millisecond times are maintained in an "Update" data structure. First we perform crawling on the website using the old approach, then we perform crawling using the proposed approach. The results obtained when crawling the website are shown in Table 1. To test the proposed approach we direct the web crawler to the dynamic web page dynamic.aspx, set the last-visit time in the URL, and perform crawling; a sketch of such a page is given below.
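The paper does not list the source of dynamic.aspx, so the following is only a sketch, written in C# as an ASP.NET HTTP handler for brevity, of how such a page could answer the crawler's GET request. The dictionary contents, the page names, and the handling of the LAST_VISIT query-string parameter are assumptions made for the example.

```csharp
using System;
using System.Collections.Generic;
using System.Web;

// Sketch of a "dynamic.aspx"-style update page (assumed implementation).
// The "Update" data structure maps each page URL to its last modification
// time in milliseconds; the crawler asks which pages changed since its
// previous visit and then fetches only those pages.
public class DynamicUpdatePage : IHttpHandler
{
    private static readonly Dictionary<string, long> Update =
        new Dictionary<string, long>
        {
            { "index.aspx",  1700000000000 },   // illustrative values,
            { "branch.aspx", 1700000050000 },   // not the paper's data
            { "person.aspx", 1700000100000 }
        };

    public void ProcessRequest(HttpContext context)
    {
        // The crawler issues: GET dynamic.aspx?LAST_VISIT=<milliseconds of its previous visit>
        long lastVisit;
        long.TryParse(context.Request.QueryString["LAST_VISIT"], out lastVisit);

        context.Response.ContentType = "text/plain";
        foreach (KeyValuePair<string, long> page in Update)
        {
            // List only the pages modified after the crawler's last visit.
            if (page.Value > lastVisit)
                context.Response.Write(page.Key + "\n");
        }
    }

    public bool IsReusable { get { return true; } }
}
```

On the crawler side, a single request such as new WebClient().DownloadString("http://host/dynamic.aspx?LAST_VISIT=" + lastVisitMillis) returns the list of changed pages, so only those pages (plus the dynamic page itself) need to be downloaded; this is where the reduction in page visits and crawl time reported below comes from.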
Test 1: The update times and URLs of the pages index, branch, and person are recorded in the "Update" data structure; at the web crawler the LAST_VISIT time is set to a time earlier than the times of those pages in the Update structure. Crawling is performed, and the results obtained are shown in Table 2.

Test 2: The update time and URL of the page about are recorded in the "Update" data structure; at the web crawler the LAST_VISIT time is set to a time earlier than the times of the pages in the Update structure. Crawling is performed, and the results obtained are shown in Table 3.

Test 3: The update times and URLs of the pages service and query are recorded in the "Update" data structure; at the web crawler the LAST_VISIT time is set to a time earlier than the times of the pages in the Update structure. Crawling is performed, and the results obtained are shown in Table 4.

Normal crawling is a time-consuming process because the crawler visits every web page to learn all the updated information on the website. In normal crawling it visits a total of 7 pages and takes 1385 milliseconds to visit the complete site. In the proposed approach the crawler visits the dynamic update page and the updated web pages only; it takes about 500 milliseconds when there are 3 updates and about 450 milliseconds when there are two updates. When there are three updates on the experimental website the proposed scheme is 4.83 times faster than the old approach; with two updates the proposed scheme is 7.03 times faster than the old scheme. Graph 1 shows the time taken by the web crawler to download updates. In normal crawling the crawler visits 7 pages to find updates, but the number of page visits is much smaller in the proposed approach: when there is one update the crawler visits only 2 pages, when there are 2 updates it visits only 3 pages, and when there are 3 updates on the website it visits 4 pages.

V. CONCLUSION

With this approach the crawler finds new updates on the web server using a dynamic web page. Using this crawler, queries can be sent with the requested URLs, and most crawler traffic over the internet can be avoided. It is found that approximately 40.1% of traffic is due to web crawlers, so this method can reduce web crawler traffic by 50% (that is, half of the web crawler traffic, i.e. about 20% of total internet traffic). Future work will be to reduce crawler traffic further using the page-rank method and parameters such as the last-modified parameter, which gives the modification date and time of a fetched page and can be used by the crawler to fetch only fresh pages from websites.

In high-level terms, the MVC pattern means that an MVC application will be split into at least three pieces:

- Models, which contain or represent the data that users work with. These can be simple view models, which just represent data being transferred between views and controllers, or they can be domain models, which contain the data in a business domain as well as the operations, transformations, and rules for manipulating that data.
- Views, which are used to render some part of the model as a UI.
- Controllers, which process incoming requests, perform operations on the model, and select views to render to the user.

Models are the definition of the universe your application works in. In a banking application, for example, the model represents everything in the bank that the application supports, such as accounts, the general ledger, and credit limits for customers, as well as the operations that can be used to manipulate the data in the model, such as depositing funds and making withdrawals from accounts. The model is also responsible for preserving the overall state and consistency of the data: for example, making sure that all transactions are added to the ledger, and that a client does not withdraw more money than he is entitled to.
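To make the banking example concrete, here is a short, hedged C# sketch of a domain model in the sense described above. The class name, its members, and the overdraft rule are invented for illustration and are not taken from the paper.

```csharp
using System;

// Minimal domain-model sketch for the banking example (assumed names).
// The model owns both the data (balance, credit limit) and the rules that
// keep it consistent, such as refusing a withdrawal beyond what the
// client is entitled to.
public class Account
{
    public decimal Balance { get; private set; }
    public decimal CreditLimit { get; private set; }

    public Account(decimal openingBalance, decimal creditLimit)
    {
        Balance = openingBalance;
        CreditLimit = creditLimit;
    }

    public void Deposit(decimal amount)
    {
        if (amount <= 0) throw new ArgumentOutOfRangeException("amount");
        Balance += amount;                    // operation defined by the model
    }

    public void Withdraw(decimal amount)
    {
        if (amount <= 0) throw new ArgumentOutOfRangeException("amount");
        // Consistency rule: withdrawals may not exceed balance plus credit limit.
        if (amount > Balance + CreditLimit)
            throw new InvalidOperationException("Withdrawal exceeds the available credit.");
        Balance -= amount;
    }
}
```

In an MVC application a controller would receive the withdrawal request, call Withdraw on this model, and then select a view that renders the updated balance.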