Foreign-Language Translation: An Approach to Reduce Web Crawler Traffic Using ASP.NET (.doc)
English Original: An Approach to Reduce Web Crawler Traffic Using ASP.NET

Nowadays, search engines transfer web data from one place to another. They work on a client-server architecture in which a central server manages all the information. A web crawler is a program that extracts information from the web and sends it to the search engine for further processing. It has been found that a large share of traffic (approximately 40.1%) is due to web crawlers. The proposed scheme shows how a web crawler can reduce this traffic using a dynamic web page and an HTTP GET request in ASP.NET.

I. INTRODUCTION

All search engines have powerful crawlers that visit the internet from time to time to extract useful information. The retrieved pages are indexed and stored in a database, as shown in Figure 1. The Internet can be viewed as a directed graph, with web pages as nodes and hyperlinks as edges, so the search operation can be abstracted as a traversal of a directed graph. By following the linked structure of the web, a crawler can reach a number of new pages starting from a set of starting web pages. Web crawlers are designed to retrieve web pages and add their representations to a local repository/database. A crawler typically refreshes its information once a week; sometimes it updates monthly or even quarterly. It therefore cannot provide up-to-date versions of frequently updated pages. To catch frequent updates without putting a large burden on the content provider, we believe retrieving and processing data near the data source is inevitable. Currently, more than one search engine is available in the market. The resulting increase in the complexity of web traffic requires that we base our model on the notion of web requests rather than web pages. Web crawlers are software systems that use the text and links on web pages to create search indexes of those pages, following HTML links to crawl the connections between pages.

Figure 1. Architecture of a web search engine.

The WWW is a hyperlinked repository of trillions of hypertext documents [9] residing on different web sites. World Wide Web (Web) traffic continues to increase and is now estimated to be more than 70 percent of the total traffic on the Internet.

A. Basic Crawling Terminology

We need to know some basic web crawler terminology, which plays an important role in the implementation of a crawler.

Seed page: Crawling means traversing the web recursively, starting from URLs picked from a given set. A starting URL is the entry point from which a crawler begins its search procedure; this set of URLs is known as the seed pages.

Frontier: The crawling procedure starts with a given URL, extracts the links from it, and adds them to a list of unvisited URLs. This unvisited list is known as the frontier. The frontier is implemented as a queue.

Parser: Parsing may mean simple hyperlink/URL extraction, or it may involve the more complex process of tidying up the HTML content in order to analyze the HTML tag tree. The job of any parser is to parse the fetched pages, extract the list of new URLs from them, and return the new unvisited URLs to the frontier.

The basic algorithm of a web crawler is given below:
Start
Read a URL from the seed URLs.
Check whether the document has already been downloaded.
If the document is already downloaded, break.
Else, add it to the frontier.
Pick a URL from the frontier and extract the new links from it.
Add all the newly found URLs to the frontier.
Continue.
End

The main function of a crawler is thus to add new links to the frontier and to select the next URL to visit.
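The algorithm above can be expressed as a short program. Below is a minimal sketch in C# (the paper's own crawler was written in VB.NET); the seed URL, the regular-expression link extraction, and the console-application form are illustrative assumptions, not taken from the paper.

using System;
using System.Collections.Generic;
using System.Net.Http;
using System.Text.RegularExpressions;

class SimpleCrawler
{
    static void Main()
    {
        var frontier = new Queue<string>();      // unvisited URLs (the "frontier")
        var visited  = new HashSet<string>();    // URLs already downloaded
        var client   = new HttpClient();

        // Seed page: the entry point of the crawl (illustrative URL).
        frontier.Enqueue("http://www.example.com/");

        while (frontier.Count > 0)
        {
            string url = frontier.Dequeue();
            if (!visited.Add(url))               // document already downloaded: skip it
                continue;

            string html;
            try { html = client.GetStringAsync(url).Result; }
            catch (Exception) { continue; }      // unreachable page: ignore and move on

            // Parser step: extract hyperlinks and return new unvisited URLs to the frontier.
            foreach (Match m in Regex.Matches(html, "href=\"(http[^\"]+)\""))
            {
                string link = m.Groups[1].Value;
                if (!visited.Contains(link))
                    frontier.Enqueue(link);
            }
        }
    }
}

The queue gives the breadth-first behaviour implied by the frontier description above; a priority queue could be substituted for other crawling policies.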
II. RELATED WORK

To reduce web crawler traffic, many researchers have carried out work in the following areas. One author used dynamic web pages with an HTTP GET request carrying a last-visit parameter. Another approach is the use of an active network to reduce unnecessary crawler traffic. A further approach uses a bandwidth control system in order to reduce web crawler traffic over the internet. Another is to place a mobile crawler at the web server; the crawler checks for updates on the web site and sends them to the search engine for indexing. Yet another work designed a new web crawler using VB.NET technology.

III. PERFORMANCE METRICS

In the implementation of the web crawler we have made some assumptions simply to simplify the algorithm, the implementation, and the results. The crawler repeatedly performs the following steps:
Remove a URL from the URL list.
Determine the protocol of the underlying host, such as HTTP or FTP.
Download the corresponding document.
Extract any links contained in it.
Add these links back to the URL list.

IV. SIMULATOR

The simulator has been designed to study the behaviour of different crawling algorithms on the same set of URLs. We designed a crawler using VB.NET and ASP.NET as a Windows application project. Our crawler can work both globally and locally, meaning it can give results on an intranet as well as on the internet. It takes a URL in a format like http://www.yahoo.com and a location or name for saving the crawling results in an MS Access database.

Figure 2. Snapshot of the web crawler.

The snapshot shows the user interface of the web crawler running on either an intranet or the internet. To obtain crawler results we use a test web site. At each simulation step, the scheduler chooses the topmost web site from the queue of web sites and sends this site information to a module that simulates downloading pages from that site. For this simulator we apply the crawling policies and save the collected or downloaded data in an MS Access database table with several data fields.

Crawling result: the crawling result is presented as a table, with rows and columns depicting the output of the crawler, as shown in the snapshot.

Figure 3. Snapshot of the crawled result database.

In this proposed work I observed that when we crawled the web site, the crawler downloaded all the pages of the site. The second time I crawled the same site, I found that the crawler crawled all the pages again, even though the site had updated only its dynamic pages and rarely its static pages. To reduce the crawler traffic, we propose the use of a dynamic web page to inform the web crawler about the new pages and updates on the web site (a minimal sketch of such a page is given after the test results below).

In the experiment we use a web site of 7 web pages. The web site is deployed on ASP.NET using the C# language; the dynamic web page is coded in C#, and the web crawler is coded in VB.NET. The LAST_VISIT parameter passed is the system time in milliseconds, returned by C#; the millisecond time of each page is maintained in an "Update" data structure. First we perform crawling on the web site using the old approach, then we perform crawling using the proposed approach. The results obtained when crawling the web site are shown in Table 1.

To test the proposed approach we direct the web crawler to the dynamic web page dynamic.aspx, set the last-visit time in the URL, and perform crawling.

Test 1: Update the time and URL of the pages index, branch and person in the "Update" data structure. At the web crawler, set the LAST_VISIT time earlier than the times of the pages in the Update structure. Crawling was performed; the results obtained are shown in Table 2.

Test 2: Update the time and URL of the page about in the "Update" data structure. At the web crawler, set the LAST_VISIT time earlier than the times of the pages in the Update structure. Crawling was performed; the results obtained are shown in Table 3.

Test 3: Update the time and URL of the pages service and query in the "Update" data structure. At the web crawler, set the LAST_VISIT time earlier than the times of the pages in the Update structure. Crawling was performed; the results obtained are shown in Table 4.

Normal crawling is a time-consuming process because the crawler visits every web page to learn all the updated information on the web site. In normal crawling it visits a total of 7 pages, and the crawler takes 1385 milliseconds to visit the complete site. In the proposed approach the crawler visits the dynamic update page and the updated web pages only. The crawler takes about 500 milliseconds when there are 3 updates and about 450 milliseconds when there are two updates. When there are three updates in the experimental web site, the proposed scheme is 4.83 times faster than the old approach; with two updates the proposed scheme is 7.03 times faster than the old one. Graph 1 shows the time taken by the web crawler to download updates. In normal crawling the crawler visits 7 pages to find updates, but the number of pages visited is much smaller in the proposed approach: with one update the crawler visits only 2 pages, with 2 updates it visits only 3 pages, and with 3 updates on the web site it visits 4 pages.
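Below is a minimal sketch of what the dynamic update page could look like, assuming the ASP.NET Web Forms code-behind model used by the experimental site. The page name dynamic.aspx and the LAST_VISIT parameter come from the paper; the DynamicPage class name, the in-memory Update dictionary, and the page URLs and times are illustrative assumptions about how the "Update" data structure might be kept.

// dynamic.aspx.cs - code-behind for the dynamic update page (illustrative sketch).
using System;
using System.Collections.Generic;

public partial class DynamicPage : System.Web.UI.Page
{
    // "Update" data structure: page URL -> last modification time in milliseconds.
    // In the experiment this is maintained by the web site; the values here are placeholders.
    private static readonly Dictionary<string, long> Update = new Dictionary<string, long>
    {
        { "/index.aspx",  0 },
        { "/branch.aspx", 0 },
        { "/person.aspx", 0 },
    };

    protected void Page_Load(object sender, EventArgs e)
    {
        // The crawler issues an HTTP GET such as /dynamic.aspx?LAST_VISIT=<millisecond time>.
        long lastVisit;
        if (!long.TryParse(Request.QueryString["LAST_VISIT"], out lastVisit))
            lastVisit = 0;

        Response.ContentType = "text/plain";

        // Return only the URLs whose update time is newer than the crawler's last visit,
        // so the crawler fetches just those pages instead of the whole site.
        foreach (var entry in Update)
        {
            if (entry.Value > lastVisit)
                Response.Write(entry.Key + "\n");
        }
    }
}

On the crawler side, only the URLs returned by this page need to be fetched, which is consistent with the 2 to 4 page visits reported for Tests 1 to 3 instead of the 7 visits of normal crawling.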
V. CONCLUSION

With this approach the crawler finds new updates on the web server using a dynamic web page. Using this crawler you can send queries with the requested URLs and reduce much of the crawler traffic over the internet. It is found that approximately 40.1% of internet traffic is due to web crawlers, so with this method you can reduce the web crawler traffic by about 50% (that is, half of the web crawler traffic, i.e. roughly 20% of the traffic over the internet). As future work, the crawler traffic could be reduced further using a page-rank method and parameters such as the last-modified parameter. This parameter gives the modification date and time of the fetched page and can be used by the crawler to fetch only fresh pages from web sites.

In high-level terms, the MVC pattern means that an MVC application will be split into at least three pieces:

Models, which contain or represent the data that users work with. These can be simple view models, which just represent data being transferred between views and controllers, or they can be domain models, which contain the data in a business domain as well as the operations, transformations, and rules for manipulating that data.

Views, which are used to render some part of the model as a UI.

Controllers, which process incoming requests, perform operations on the model, and select views to render to the user.

Models are the definition of the universe your application works in. In a banking application, for example, the model represents everything in the bank that the application supports, such as accounts, the general ledger, and credit limits for customers, as well as the operations that can be used to manipulate the data in the model, such as depositing funds and making withdrawals from the accounts. The model is also responsible for preserving the overall state and consistency of the data; for example, making sure that all transactions are added to the ledger, and that a client doesn't withdraw more money than he is entitled to or more money than the bank has. Models are also defined by what they are not responsible for: models don't deal with rendering UIs or processing requests; those are the responsibilities of views and controllers. Views contain the logic required to display elements of the model to the user, and nothing more. They have no direct awareness of the model and do not communicate with the model directly in any way. Controllers are the glue between views and the model. Requests come in from the client and are serviced by the controller, which selects an appropriate view to show the user and, if required, an appropriate operation to perform on the model.

Each piece of the MVC architecture is well defined and self-contained, which is referred to as the separation of concerns. The logic that manipulates the data in the model is contained only in the model, the logic that displays data is only in the view, and the code that handles user requests and input is contained only in the controller. With a clear division between each of the pieces, your application will be easier to maintain and extend over its lifetime, no matter how large it becomes.
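As a concrete illustration of this separation of concerns, the following minimal sketch uses classic ASP.NET MVC (System.Web.Mvc); the BankAccount model, the AccountController, and the Withdraw action are illustrative assumptions based on the banking example above, not code from the original text.

using System;
using System.Web.Mvc;

// Model: represents the data and the rules for manipulating it.
public class BankAccount
{
    public decimal Balance { get; private set; }

    public void Deposit(decimal amount)
    {
        Balance += amount;
    }

    public void Withdraw(decimal amount)
    {
        // The model preserves consistency: a client cannot withdraw more than the balance.
        if (amount > Balance)
            throw new InvalidOperationException("Insufficient funds.");
        Balance -= amount;
    }
}

// Controller: services the incoming request, operates on the model, and selects a view.
public class AccountController : Controller
{
    public ActionResult Withdraw(decimal amount)
    {
        var account = new BankAccount();   // in a real application this would be loaded from storage
        account.Deposit(100m);             // illustrative starting balance
        account.Withdraw(amount);

        // The view (e.g. Withdraw.cshtml) only renders the model; it holds no business logic.
        return View(account);
    }
}

The withdrawal rule lives only in the model, the rendering lives only in the view, and the controller merely coordinates the request, matching the division of responsibilities described above.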