Log4j: implementing logging in Java
First, a quick introduction to Log4j.
Log4j is an open-source Apache project. With Log4j we can direct log output to the console, to files, to GUI components, and even to socket servers, the NT Event Logger, or the UNIX syslog daemon; we can control the format of every log entry; and by assigning a level to each log message we can control the logging process in a fine-grained way. Best of all, everything can be configured flexibly through a single configuration file, without modifying the application code.
1. First, add the Log4j dependency to pom.xml:
<dependency>
    <groupId>log4j</groupId>
    <artifactId>log4j</artifactId>
    <version>1.2.17</version>
</dependency>
2. The Log4j configuration file: create a log4j.properties file under the resources directory and put the following content in it.
### Log4j configuration ###
# Define the root logger level and its output destinations (the appender names are arbitrary, but must match the definitions below)
# [ level ] , appenderName1 , appenderName2
log4j.rootLogger=DEBUG,console,file
#-----------------------------------#
# 1. Console appender: send log output to the console
log4j.appender.console = org.apache.log4j.ConsoleAppender
log4j.appender.console.Target = System.out
log4j.appender.console.Threshold=DEBUG
### The output format can be specified flexibly; the pattern below defines it ###
# %c: the category of the log message, usually the fully qualified class name
# %m: the message given in the code, i.e. the actual log content
# %n: a line separator, "\r\n" on Windows and "\n" on Unix
log4j.appender.console.layout = org.apache.log4j.PatternLayout
log4j.appender.console.layout.ConversionPattern=[%c]-%m%n
#-----------------------------------#
# 2. Rolling file appender: roll over to a new file once the current one reaches the size limit
log4j.appender.file = org.apache.log4j.RollingFileAppender
# Log file location
log4j.appender.file.File=log/info.log
# Maximum file size
log4j.appender.file.MaxFileSize=10MB
### Output settings ###
# Minimum level written to the file
log4j.appender.file.Threshold=ERROR
log4j.appender.file.layout=org.apache.log4j.PatternLayout
log4j.appender.file.layout.ConversionPattern=[%p][%d{yy-MM-dd}][%c]%m%n
#-----------------------------------#
# 3. Druid
log4j.logger.druid.sql=INFO
log4j.logger.druid.sql.DataSource=INFO
log4j.logger.druid.sql.Connection=INFO
log4j.logger.druid.sql.Statement=INFO
log4j.logger.druid.sql.ResultSet=INFO
# 4. MyBatis: show the SQL statements
log4j.logger.org.mybatis=DEBUG
#log4j.logger.cn.tibet.cas.dao=DEBUG
#log4j.logger.org.mybatis.common.jdbc.SimpleDataSource=DEBUG
#log4j.logger.org.mybatis.common.jdbc.ScriptRunner=DEBUG
#log4j.logger.org.mybatis.sqlmap.engine.impl.SqlMapClientDelegate=DEBUG
#log4j.logger.java.sql.Connection=DEBUG
log4j.logger.java.sql=DEBUG
log4j.logger.java.sql.Statement=DEBUG
log4j.logger.java.sql.ResultSet=DEBUG
log4j.logger.java.sql.PreparedStatement=DEBUG
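With the dependency and log4j.properties in place, Log4j can also be used directly in our own classes. Below is a minimal sketch (the class name and messages are only illustrative, not part of the project above); with the configuration shown, DEBUG and above go to the console, while only ERROR and above are written to log/info.log:
import org.apache.log4j.Logger;

public class Log4jDemo {
    // Convention: one static Logger per class, named after the class itself
    static Logger logger = Logger.getLogger(Log4jDemo.class);

    public static void main(String[] args) {
        logger.debug("debug: entering main");    // console only (below the file Threshold)
        logger.info("info: entering main");      // console only
        logger.error("error: something failed"); // console and log/info.log
    }
}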
3. Configure the logging implementation in the mybatis.xml core configuration file. The simplest built-in option is standard-output logging:
<settings>
    <setting name="logImpl" value="STDOUT_LOGGING"/>
</settings>
Here we switch to the Log4j implementation instead:
<settings>
    <setting name="logImpl" value="LOG4J"/>
</settings>
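As an aside, the same choice can also be made programmatically through MyBatis's LogFactory before the SqlSessionFactory is built; a minimal sketch, equivalent to the XML setting above (the class name here is only for illustration):
import org.apache.ibatis.logging.LogFactory;

public class MyBatisLoggingSetup {
    public static void main(String[] args) {
        // Forces MyBatis to use its Log4j adapter, the same effect as
        // <setting name="logImpl" value="LOG4J"/> in mybatis.xml
        LogFactory.useLog4JLogging();
    }
}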
After re-running the query, the log output in the console looks like this:
[org.apache.ibatis.logging.LogFactory]-Logging initialized using 'class org.apache.ibatis.logging.log4j.Log4jImpl' adapter.
[org.apache.ibatis.logging.LogFactory]-Logging initialized using 'class org.apache.ibatis.logging.log4j.Log4jImpl' adapter.
[org.apache.ibatis.io.VFS]-Class not found: org.jboss.vfs.VFS
[org.apache.ibatis.io.JBoss6VFS]-JBoss 6 VFS API is not available in this environment.
[org.apache.ibatis.io.VFS]-Class not found: org.jboss.vfs.VirtualFile
[org.apache.ibatis.io.VFS]-VFS implementation org.apache.ibatis.io.JBoss6VFS is not valid in this environment.
[org.apache.ibatis.io.VFS]-Using VFS adapter org.apache.ibatis.io.DefaultVFS
[org.apache.ibatis.io.DefaultVFS]-Find JAR URL: file:/D:/IdeaProjects/ssmmybatisstudy02/target/classes/com/an/pojo
[org.apache.ibatis.io.DefaultVFS]-Not a JAR: file:/D:/IdeaProjects/ssmmybatisstudy02/target/classes/com/an/pojo
[org.apache.ibatis.io.DefaultVFS]-Reader entry: User.class
[org.apache.ibatis.io.DefaultVFS]-Listing file:/D:/IdeaProjects/ssmmybatisstudy02/target/classes/com/an/pojo
[org.apache.ibatis.io.DefaultVFS]-Find JAR URL: file:/D:/IdeaProjects/ssmmybatisstudy02/target/classes/com/an/pojo/User.class
[org.apache.ibatis.io.DefaultVFS]-Not a JAR: file:/D:/IdeaProjects/ssmmybatisstudy02/target/classes/com/an/pojo/User.class
[org.apache.ibatis.io.DefaultVFS]-Reader entry: ���� 1 =
[org.apache.ibatis.io.ResolverUtil]-Checking to see if class com.an.pojo.User matches criteria [is assignable to Object]
[org.apache.ibatis.datasource.pooled.PooledDataSource]-PooledDataSource forcefully closed/removed all connections.
[org.apache.ibatis.datasource.pooled.PooledDataSource]-PooledDataSource forcefully closed/removed all connections.
[org.apache.ibatis.datasource.pooled.PooledDataSource]-PooledDataSource forcefully closed/removed all connections.
[org.apache.ibatis.datasource.pooled.PooledDataSource]-PooledDataSource forcefully closed/removed all connections.
[org.apache.ibatis.transaction.jdbc.JdbcTransaction]-Opening JDBC Connection
[org.apache.ibatis.datasource.pooled.PooledDataSource]-Created connection 183284570.
[org.apache.ibatis.transaction.jdbc.JdbcTransaction]-Setting autocommit to false on JDBC Connection [com.mysql.jdbc.JDBC4Connection@aecb35a]
[com.an.dao.UserDao.selectUser]-==> Preparing: select * from user;
[com.an.dao.UserDao.selectUser]-==> Parameters:
[com.an.dao.UserDao.selectUser]-<== Total: 6
User{id=1, name='anye', password='anyebaobao'}
User{id=2, name='张三', password='abcdef'}
User{id=3, name='李四', password='987654'}
User{id=4, name='qinjiang', password='123456'}
User{id=5, name='安夜', password='123456'}
User{id=6, name='anye', password='123'}
[org.apache.ibatis.transaction.jdbc.JdbcTransaction]-Resetting autocommit to true on JDBC Connection [com.mysql.jdbc.JDBC4Connection@aecb35a]
[org.apache.ibatis.transaction.jdbc.JdbcTransaction]-Closing JDBC Connection [com.mysql.jdbc.JDBC4Connection@aecb35a]
[org.apache.ibatis.datasource.pooled.PooledDataSource]-Returned connection 183284570 to pool.
Process finished with exit code 0
Implementing pagination
Pagination in the SQL statement
select * from table_name limit #{startIndex},#{pageSize};
# startIndex: the starting offset; row counting starts at 0
# pageSize: the number of rows per page
The offset for the current page is startIndex = (currentPage - 1) * pageSize.
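A quick sanity check of the formula (the same values are used in the test method below; the class name is only for illustration):
public class PageIndexDemo {
    public static void main(String[] args) {
        int currentPage = 2;                              // we want page 2
        int pageSize = 3;                                 // 3 rows per page
        int startIndex = (currentPage - 1) * pageSize;    // = 3
        System.out.println("limit " + startIndex + "," + pageSize);  // prints: limit 3,3 -> rows 4 to 6
    }
}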
Using LIMIT to implement pagination
1. First, write the Dao interface method:
// Query all users, with pagination
List<User> selectUserLimit(Map<String, Integer> map);
2. Write the corresponding statement in the mapper XML.
Since a result map was defined earlier:
<resultMap id="UserMap" type="User">
    <id column="id" property="id"/>
    <result column="name" property="name"/>
    <result column="pwd" property="password"/>
</resultMap>
the select statement uses resultMap here:
<select id="selectUserLimit" parameterType="Map" resultMap="UserMap">
    select * from user limit #{startIndex},#{pageSize}
</select>
3. Write the test method:
@Test
public void selectUserLimit(){
    SqlSessionFactory factory = MyBatisUtils.getSqlSessionFactory();
    SqlSession sqlSession = factory.openSession();
    UserDao mapper = sqlSession.getMapper(UserDao.class);
    int currentPage = 2;   // we are on page 2
    int pageSize = 3;      // 3 rows per page
    Map<String, Integer> map = new HashMap<String, Integer>();
    map.put("startIndex", (currentPage - 1) * pageSize);
    map.put("pageSize", pageSize);
    List<User> users = mapper.selectUserLimit(map);
    for (User user : users) {
        System.out.println(user);
    }
    sqlSession.close();
}
Query result:
[com.an.dao.UserDao.selectUserLimit]-==> Preparing: select * from user limit ?,?
[com.an.dao.UserDao.selectUserLimit]-==> Parameters: 3(Integer), 3(Integer)
[com.an.dao.UserDao.selectUserLimit]-<== Total: 3
User{id=4, name='qinjiang', password='123456'}
User{id=5, name='安夜', password='123456'}
User{id=6, name='anye', password='123'}