Go and tail (github.com/hpcloud/tail)

Reference

https://github.com/hpcloud/tail


The Linux tail command prints the last part of a specified file to the standard output device, usually the terminal; in plain terms, it shows the last few lines of a file on screen. When run with the -f option and the file is updated, tail refreshes automatically so you always see the latest contents.

This package implements similar functionality in Go:

// If the file does not exist (MustExist is false by default), TailFile does not return
// an error; the library blocks and logs "Waiting for /var/log/nginx.log to appear..."
// until the file is created.
t, err := tail.TailFile("/var/log/nginx.log", tail.Config{Follow: true})
if err != nil {
	fmt.Println(err)
	return
}
for line := range t.Lines {
	fmt.Println(line.Text)
}

The tail.TailFile() function starts a goroutine that reads the file and delivers its content over the t.Lines channel.
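The channel is closed when tailing stops, so a clean shutdown is usually done by calling Stop() from another goroutine and Cleanup() once the loop has exited. A minimal sketch, assuming a placeholder file name my.log and an arbitrary 10-second timer standing in for a real shutdown signal:

package main

import (
	"fmt"
	"time"

	"github.com/hpcloud/tail"
)

func main() {
	t, err := tail.TailFile("my.log", tail.Config{Follow: true})
	if err != nil {
		fmt.Println(err)
		return
	}

	// Stop tailing after a while; a real program would tie this to a signal or context.
	go func() {
		time.Sleep(10 * time.Second)
		t.Stop() // ends the tailing goroutine and closes t.Lines, so the range loop below exits
	}()

	for line := range t.Lines {
		fmt.Println(line.Text)
	}
	t.Cleanup() // release the inotify watches once the file is no longer needed
}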

Other configuration options

type Config struct {
	// File-specific
	Location    *SeekInfo // position to start reading from
	ReOpen      bool      // true: if the file is deleted, wait for it to be recreated and keep tailing; false: stop when the file is deleted
	MustExist   bool      // true: return an error immediately if the file does not exist; false: block and wait for the file to appear
	Poll        bool      // poll for file changes instead of using inotify
	Pipe        bool      // Is a named pipe (mkfifo)
	RateLimiter *ratelimiter.LeakyBucket

	// Generic IO
	Follow      bool // true: keep the file open and wait for new lines, like `tail -f`; false: read once to EOF and stop
	MaxLineSize int  // If non-zero, split longer lines into multiple lines

	// Logger, when nil, is set to tail.DefaultLogger
	// To disable logging: set field to tail.DiscardingLogger
	Logger logger
}
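For instance, these options can be combined to keep following a log file across rotations. A sketch with a placeholder path; note that the library requires Follow to be set when ReOpen is used:

cfg := tail.Config{
	ReOpen:    true,  // if the file is removed and recreated (log rotation), reopen it and keep tailing
	MustExist: false, // do not error out if the file is missing; wait for it to appear
	Poll:      true,  // poll for changes instead of using inotify, e.g. on filesystems where inotify is unreliable
	Follow:    true,  // keep waiting for new lines, like `tail -f`
}
t, err := tail.TailFile("/var/log/app.log", cfg)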

Location works like os.Seek. For example:

seek := &tail.SeekInfo{Offset: 5, Whence: os.SEEK_CUR}
t, err := tail.TailFile("my.log", tail.Config{Location: seek})

Offset // number of bytes to skip, relative to Whence, before reading begins; if the offset is larger than one line, counting continues across lines (line-ending characters are included in the count even though they are not printed)
SEEK_SET int = 0 // seek relative to the start of the file
SEEK_CUR int = 1 // seek relative to the current position (for a freshly opened file this is the same as the start)
SEEK_END int = 2 // seek relative to the end of the file: existing content is skipped and reading starts with newly appended data (see the sketch below)
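A common use of Whence is to skip everything already in the file and only print lines appended after the program starts, much like `tail -f -n 0`. A sketch with a placeholder file name (io.SeekEnd is the modern spelling of os.SEEK_END):

seek := &tail.SeekInfo{Offset: 0, Whence: os.SEEK_END} // start at the current end of the file
t, err := tail.TailFile("my.log", tail.Config{Location: seek, Follow: true})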

t.Tell() returns the current offset in the file.
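Combining Tell() with Location makes it possible to resume a tail where the previous run left off. A sketch under the assumption that the offset is simply kept in a variable; a real program would persist it (to a file, a database, etc.) between runs:

offset, err := t.Tell() // remember how far we got
if err != nil {
	fmt.Println(err)
}
t.Stop()

// later, resume from the recorded position
t2, err := tail.TailFile("my.log", tail.Config{
	Follow:   true,
	Location: &tail.SeekInfo{Offset: offset, Whence: os.SEEK_SET},
})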
