Why doesn't :visited work in CSS?

Why doesn't the :visited pseudo-class take effect in CSS? This article walks through the likely causes. It should be a useful reference; hopefully it helps anyone who runs into the problem.


Why doesn't :visited work in CSS?

1. The pseudo-class order may be wrong

CSS defines four states for a hyperlink, and their rules are order-sensitive:

a:link {}    /* unvisited link */
a:visited {} /* visited link */
a:hover {}   /* link with the mouse hovering over it */
a:active {}  /* link being activated (clicked) */

When styling these four pseudo-classes, one thing needs special attention: their order. They must follow the principle link → visited → hover → active. Because all four selectors have the same specificity, a later rule overrides an earlier one whenever both match, so getting the order wrong produces unexpected results, as the sketch below shows. If you are a beginner, it is worth experimenting with this yourself.
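For illustration, here is a minimal sketch (not from the original article) of how a wrong order makes the rules appear broken. Since :hover matches a link whether or not it has been visited, declaring a:visited after a:hover means the visited color always wins on visited links, and the hover color never shows:

/* Wrong order: a:visited overrides a:hover on visited links */
a:hover   { color: red; }
a:visited { color: purple; }  /* visited links stay purple even on hover */

/* Correct LVHA order: hover and active can override visited */
a:link    { color: blue; }
a:visited { color: purple; }
a:hover   { color: red; }
a:active  { color: yellow; }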

2. The link in the a element's href may be invalid

An invalid link makes the a element's :visited styling ineffective: clicking such a link creates no page-visit entry in the browser history, so nothing changes after the click.

There is a related pitfall: if the code looks like the snippet below, where every a element shares the identical href="#", then clicking any one link turns all of them to the visited color, because they all resolve to the same URL in the history.

a:visited { color: #FFFF00; text-decoration: none; }

<a href="#">AA</a>
<a href="#">BB</a>
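A minimal sketch of the fix (the destination URLs here are placeholders invented for this example): give each anchor a distinct, real destination, so the browser records a separate history entry per link and :visited applies to each one independently.

a:visited { color: #FFFF00; text-decoration: none; }

<a href="https://example.com/page-a">AA</a>
<a href="https://example.com/page-b">BB</a>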


Exercise: read the code below carefully and, drawing on the relevant knowledge, fill in the Begin-End regions to implement a depth-first crawler. The site to crawl is www.baidu.com.

from bs4 import BeautifulSoup
import requests
import re

# Custom queue class
class linkQuence:
    def __init__(self):
        # Urls already visited
        self.visted = []
        # Urls waiting to be visited
        self.unVisited = []

    # Get the queue of visited urls
    def getVisitedUrl(self):
        return self.visted

    # Get the queue of unvisited urls
    def getUnvisitedUrl(self):
        return self.unVisited

    # Add a url to the visited queue
    def addVisitedUrl(self, url):
        self.visted.append(url)

    # Remove a url from the visited queue
    def removeVisitedUrl(self, url):
        self.visted.remove(url)

    # Dequeue an unvisited url
    def unVisitedUrlDeQuence(self):
        try:
            return self.unVisited.pop()
        except:
            return None

    # Guarantee every url is visited only once
    def addUnvisitedUrl(self, url):
        if url != "" and url not in self.visted and url not in self.unVisited:
            self.unVisited.insert(0, url)

    # Number of visited urls
    def getVisitedUrlCount(self):
        return len(self.visted)

    # Number of unvisited urls
    def getUnvistedUrlCount(self):
        return len(self.unVisited)

    # Whether the unvisited queue is empty
    def unVisitedUrlsEnmpy(self):
        return len(self.unVisited) == 0


class MyCrawler:
    def __init__(self, seeds):
        # Initialize the current crawl depth
        self.current_deepth = 1
        # Initialize the url queue with the seeds
        self.linkQuence = linkQuence()
        if isinstance(seeds, str):
            self.linkQuence.addUnvisitedUrl(seeds)
        if isinstance(seeds, list):
            for i in seeds:
                self.linkQuence.addUnvisitedUrl(i)

    # Main crawling routine
    def crawling(self, seeds, crawl_deepth):
        # ********** Begin **********#
        # ********** End **********#

    # Extract the hyperlinks from a page's source
    def getHyperLinks(self, url):
        # ********** Begin **********#
        # ********** End **********#

    # Fetch a page's source
    def getPageSource(self, url):
        # ********** Begin **********#
        # ********** End **********#


def main(seeds="http://www.baidu.com", crawl_deepth=3):
    craw = MyCrawler(seeds)
    craw.crawling(seeds, crawl_deepth)
    return craw.linkQuence.getVisitedUrl()

The expected output below will change somewhat as the crawled pages are updated.

Expected output:

Add the seeds url ['http://www.baidu.com'] to the unvisited url list
Pop out one url "http://www.baidu.com" from unvisited url list
Get 10 new links
Visited url count: 1
Visited deepth: 1
10 unvisited links:
Pop out one url "http://news.baidu.com" from unvisited url list
Get 52 new links
Visited url count: 2
Visited deepth: 2
Pop out one url "http://www.hao123.com" from unvisited url list
Get 311 new links
Visited url count: 3
Visited deepth: 2
Pop out one url "http://map.baidu.com" from unvisited url list
Get 0 new links
Visited url count: 4
Visited deepth: 2
Pop out one url "http://v.baidu.com" from unvisited url list
Get 566 new links
Visited url count: 5
Visited deepth: 2
Pop out one url "http://tieba.baidu.com" from unvisited url list
Get 21 new links
Visited url count: 6
Visited deepth: 2
Pop out one url "http://www.baidu.com/bdorz/login.gif?login&tpl=mn&u=http%3A%2F%2Fwww.baidu.com%2f%3fbdorz_come%3d1" from unvisited url list
Get 1 new links
Visited url count: 7
Visited deepth: 2
Pop out one url "http://home.baidu.com" from unvisited url list
Get 27 new links
Visited url count: 8
Visited deepth: 2
Pop out one url "http://ir.baidu.com" from unvisited url list
Get 22 new links
Visited url count: 9
Visited deepth: 2
Pop out one url "http://www.baidu.com/duty/" from unvisited url list
Get 1 new links
Visited url count: 10
Visited deepth: 2
Pop out one url "http://jianyi.baidu.com/" from unvisited url list
Get 4 new links
Visited url count: 11
Visited deepth: 2
2 unvisited links:
Pop out one url "http://baozhang.baidu.com/guarantee/" from unvisited url list
Get 0 new links
Visited url count: 12
Visited deepth: 3
Pop out one url "http://ir.baidu.com/phoenix.zhtml?c=188488&p=irol-irhome" from unvisited url list
Get 22 new links
Visited url count: 13
Visited deepth: 3
22 unvisited links:
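For reference, here is one possible completion of the three Begin-End regions inside class MyCrawler. It is a sketch, not the platform's official answer: the link-filtering regex, the timeout, the utf-8 encoding, and the exact print formats are assumptions inferred from the expected output.

    # Main crawling routine
    def crawling(self, seeds, crawl_deepth):
        print("Add the seeds url %s to the unvisited url list"
              % str(self.linkQuence.getUnvisitedUrl()))
        links = []  # links found on the most recently fetched page
        # Crawl one depth level at a time
        while self.current_deepth <= crawl_deepth:
            # Visit every url queued for the current level
            while not self.linkQuence.unVisitedUrlsEnmpy():
                visitUrl = self.linkQuence.unVisitedUrlDeQuence()
                print('Pop out one url "%s" from unvisited url list' % visitUrl)
                if visitUrl is None or visitUrl == "":
                    continue
                links = self.getHyperLinks(visitUrl)
                print("Get %d new links" % len(links))
                self.linkQuence.addVisitedUrl(visitUrl)
                print("Visited url count: " + str(self.linkQuence.getVisitedUrlCount()))
                print("Visited deepth: " + str(self.current_deepth))
            # Queue the links of the last fetched page for the next level
            for link in links:
                self.linkQuence.addUnvisitedUrl(link)
            print("%d unvisited links:" % len(self.linkQuence.getUnvisitedUrl()))
            self.current_deepth += 1

    # Extract the hyperlinks from a page's source
    def getHyperLinks(self, url):
        links = []
        data = self.getPageSource(url)
        soup = BeautifulSoup(data, "html.parser")
        # Assumption: keep only absolute http(s) links, without duplicates
        for a in soup.findAll("a", href=re.compile("^http")):
            if a["href"] not in links:
                links.append(a["href"])
        return links

    # Fetch a page's source
    def getPageSource(self, url):
        try:
            r = requests.get(url, timeout=10)  # timeout is an assumption
            r.raise_for_status()
            r.encoding = "utf-8"  # assumption: pages are utf-8
            return r.text
        except Exception:
            return ""

With these bodies in place, calling main() should produce a log in the format shown above, though the link counts and urls will differ as the pages change.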
Note: the expected output shown above is only partial output.