arthurkiller / rollingwriter
Rolling writer is an IO util for auto rolling write in go.
License: MIT License
▶ go version
go version go1.12.4 darwin/amd64
---
GoLand 2019.1.1
Build #GO-191.6707.68, built on April 17, 2019
JRE: 1.8.0_202-release-1483-b44 x86_64
JVM: OpenJDK 64-Bit Server VM by JetBrains s.r.o
macOS 10.14.4
go run -race demo/writer.go
WARNING: DATA RACE
Read at 0x00c00009e0a8 by goroutine 25:
github.com/arthurkiller/rollingWriter.(*manager).GenLogFileName()
/Users/chaomai/Documents/workspace/github/rollingWriter/manager.go:140 +0x88
github.com/arthurkiller/rollingWriter.NewManager.func1()
/Users/chaomai/Documents/workspace/github/rollingWriter/manager.go:43 +0x53
github.com/robfig/cron.FuncJob.Run()
/Users/chaomai/Documents/workspace/go/pkg/mod/github.com/robfig/cron@v0.0.0-20180505203441-b41be1df6967/cron.go:92 +0x34
github.com/robfig/cron.(*Cron).runWithRecovery()
/Users/chaomai/Documents/workspace/go/pkg/mod/github.com/robfig/cron@v0.0.0-20180505203441-b41be1df6967/cron.go:165 +0x68
Previous write at 0x00c00009e0a8 by goroutine 22:
github.com/arthurkiller/rollingWriter.(*manager).GenLogFileName()
/Users/chaomai/Documents/workspace/github/rollingWriter/manager.go:145 +0x243
github.com/arthurkiller/rollingWriter.NewManager.func1()
/Users/chaomai/Documents/workspace/github/rollingWriter/manager.go:43 +0x53
github.com/robfig/cron.FuncJob.Run()
/Users/chaomai/Documents/workspace/go/pkg/mod/github.com/robfig/cron@v0.0.0-20180505203441-b41be1df6967/cron.go:92 +0x34
github.com/robfig/cron.(*Cron).runWithRecovery()
/Users/chaomai/Documents/workspace/go/pkg/mod/github.com/robfig/cron@v0.0.0-20180505203441-b41be1df6967/cron.go:165 +0x68
Goroutine 25 (running) created at:
github.com/robfig/cron.(*Cron).run()
/Users/chaomai/Documents/workspace/go/pkg/mod/github.com/robfig/cron@v0.0.0-20180505203441-b41be1df6967/cron.go:199 +0xa65
Goroutine 22 (finished) created at:
github.com/robfig/cron.(*Cron).run()
/Users/chaomai/Documents/workspace/go/pkg/mod/github.com/robfig/cron@v0.0.0-20180505203441-b41be1df6967/cron.go:199 +0xa65
At first glance, (*manager).GenLogFileName() is ultimately executed by github.com/robfig/cron/cron.go, and cron clearly has no synchronization of its own. In the extreme case where (*manager).GenLogFileName() takes longer than the scheduling interval, two goroutines end up executing (*manager).GenLogFileName() at the same time, and m.startAt can then be corrupted.
type manager struct {
	// ...
	lock sync.Mutex
}

// ...

// GenLogFileName generates the new log file name; filename should be an absolute path
func (m *manager) GenLogFileName(c *Config) (filename string) {
	m.lock.Lock()
	defer m.lock.Unlock()
	// ...
	// reset the start time to now
	m.startAt = time.Now()
	return
}
After the application restarts, the maximum number of retained files is no longer respected; the count seems to start over from scratch. As the application keeps restarting, the number of retained files can keep growing.
In async mode, when the upper-level logging component calls Write, a 1 MB []byte is obtained from a Go sync.Pool.
Under certain conditions the Go GC pacer misestimates the required marking rate (presumably related to this buffer's size and the pool's hit rate), so GC starts late. At that point, besides the 25% of Ps doing concurrent marking, any goroutine on the remaining Ps that needs to log and has to allocate one of these []byte objects allocates very quickly, since each object is a full 1 MB. Those goroutines are forced into mark assist, or the runtime is pushed straight into the next GC cycle, causing heavy scheduling delays and severe latency spikes in the program.
After shrinking this []byte object, the latency spikes were noticeably reduced.
I have two observations:
1. After archiving, logs are still continuously written into the archived file.
2. Change it to record the file size and switch files accordingly.
I've just started reading the zap source code and haven't yet worked out how to use rollingWriter together with zap.
An example would be great. Many thanks.
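A minimal sketch of the requested wiring, assuming the Config fields shown in the README and a standard zap setup: rollingwriter's writer implements io.Writer, so zapcore.AddSync is the only glue needed. Treat this as a starting point, not the library's official recipe.

```go
package main

import (
	"github.com/arthurkiller/rollingwriter"
	"go.uber.org/zap"
	"go.uber.org/zap/zapcore"
)

func newLogger() (*zap.Logger, error) {
	config := rollingwriter.Config{
		LogPath:            "./log",
		FileName:           "app",
		TimeTagFormat:      "200601021504",
		RollingPolicy:      rollingwriter.TimeRolling,
		RollingTimePattern: "0 0 0 * * *", // roll at midnight
		WriterMode:         "lock",
	}
	w, err := rollingwriter.NewWriterFromConfig(&config)
	if err != nil {
		return nil, err
	}
	// Any io.Writer becomes a zap WriteSyncer via zapcore.AddSync.
	core := zapcore.NewCore(
		zapcore.NewJSONEncoder(zap.NewProductionEncoderConfig()),
		zapcore.AddSync(w),
		zap.InfoLevel,
	)
	return zap.New(core), nil
}
```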
@arthurkiller Hello
zap.logger write error
2023-04-26 19:45:46.770523613 +0800 CST m=+6.113431489 write error: can't rename log file: rename / /-2023-04-26T11-45-46.770: device or resource busy
{"level":"INFO","ts":"2023-04-26T19:45:46.770+0800","caller":"core/task_manager.go:78","msg":"TaskManager:EsSearch:start","ServerName":"monitor-log-metric-calc"}
2023-04-26 19:45:46.770685539 +0800 CST m=+6.113593399 write error: can't rename log file: rename / /-2023-04-26T11-45-46.770: device or resource busy
Environment:
OS: Ubuntu, 5.4.0-72-generic #80~18.04.1-Ubuntu SMP Mon Apr 12 23:26:25 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux
go version: go1.19
Log Config:
var logger *zap.Logger

func Setup(config *config.Config) *zap.Logger {
	var filepath = fmt.Sprintf("%s/%s",
		config.Log.LogSavePath,
		config.Log.LogFileName)
	var coreArr []zapcore.Core

	// Build the encoder: NewJSONEncoder emits JSON, NewConsoleEncoder plain text
	encoderConfig := zap.NewProductionEncoderConfig()
	encoderConfig.EncodeTime = zapcore.ISO8601TimeEncoder   // time format
	encoderConfig.EncodeLevel = zapcore.CapitalLevelEncoder // use zapcore.CapitalColorLevelEncoder for per-level colors
	//encoderConfig.EncodeCaller = zapcore.FullCallerEncoder // show the full file path in the caller field
	encoder := zapcore.NewJSONEncoder(encoderConfig)

	// Log levels
	highPriority := zap.LevelEnablerFunc(func(lev zapcore.Level) bool { // error level and above
		return lev >= zap.ErrorLevel
	})
	lowPriority := zap.LevelEnablerFunc(func(lev zapcore.Level) bool { // info and debug; debug is the lowest
		return lev < zap.ErrorLevel && lev >= zap.DebugLevel
	})

	// WriteSyncer for the info file
	infoFileWriteSyncer := zapcore.AddSync(&lumberjack.Logger{
		Filename:   filepath,                 // log file path; missing directories are created automatically
		MaxSize:    config.Log.LogMaxSize,    // per-file size limit in MB
		MaxBackups: config.Log.LogMaxBackups, // maximum number of retained log files
		MaxAge:     config.Log.LogMaxAge,     // retention in days
		Compress:   false,                    // whether to gzip rotated files
	})
	// The third parameter is the level enabler for this core
	infoFileCore := zapcore.NewCore(encoder, zapcore.NewMultiWriteSyncer(infoFileWriteSyncer, zapcore.AddSync(os.Stdout)), lowPriority)

	// WriteSyncer for the error file
	errorFileWriteSyncer := zapcore.AddSync(&lumberjack.Logger{
		Filename:   filepath,                 // log file path
		MaxSize:    config.Log.LogMaxSize,    // per-file size limit in MB
		MaxBackups: config.Log.LogMaxBackups, // maximum number of retained log files
		MaxAge:     config.Log.LogMaxAge,     // retention in days
		Compress:   false,                    // whether to gzip rotated files
	})
	errorFileCore := zapcore.NewCore(encoder, zapcore.NewMultiWriteSyncer(errorFileWriteSyncer, zapcore.AddSync(os.Stdout)), highPriority)

	coreArr = append(coreArr, infoFileCore)
	coreArr = append(coreArr, errorFileCore)

	// zap.AddCaller() shows file name and line number; zap.Fields adds a fixed service field
	logger = zap.New(zapcore.NewTee(coreArr...), zap.AddCaller(), zap.Fields(
		zap.String("ServerName", ServerName)))
	return logger
}

// Debug : Level 0
func GetSuger() *zap.SugaredLogger {
	return logger.Sugar()
}
Rotation based on TimeRolling or VolumeRolling blocks easily.
switch c.RollingPolicy {
default:
fallthrough
case WithoutRolling:
return m, nil
case TimeRolling:
if err := m.cr.AddFunc(c.RollingTimePattern, func() {
m.fire <- m.GenLogFileName(c) // blocks if nothing has been written for a long time
}); err != nil {
return nil, err
}
m.cr.Start()
case VolumeRolling:
m.ParseVolume(c)
m.wg.Add(1)
go func() {
timer := time.Tick(time.Duration(Precision) * time.Second)
filepath := LogFilePath(c)
var file *os.File
var err error
m.wg.Done()
for {
select {
case <-m.context:
return
case <-timer:
if file, err = os.Open(filepath); err != nil {
continue
}
if info, err := file.Stat(); err == nil && info.Size() > m.thresholdSize {
m.fire <- m.GenLogFileName(c) // blocks if nothing has been written for a long time
}
file.Close()
}
}
}()
m.wg.Wait()
}
return m, nil
For VolumeRolling, once this blocks, logs end up archived into the wrong file. I think it can be fixed like this:
func (w *LockedWriter) Write(b []byte) (n int, err error) {
	w.Lock()
	defer w.Unlock() // deferred so the early error return cannot leak the lock
DoWrite: // drain fire to fix the file misplacement caused by a blocked fire channel
	select {
	case filename := <-w.fire:
		if err := w.Reopen(filename); err != nil {
			return 0, err
		}
		goto DoWrite
	default:
	}
	return w.file.Write(b)
}
Hi, I see there have been some changes recently, but the tags all date from 2019. Could you cut a new tag?
Doing operations like rename or delete on open files fails on Windows; I created a PR to fix it.
Since the signature of the Write([]byte) method means we already have a []byte allocated, why use _asyncBufferPool? It adds an extra copy that seems unnecessary.
I set the policy to VolumeRolling with a maximum size of 100 KB and tried both lock and async modes; both misbehave. The archived logs come out at 0 bytes, 3 MB, 573 KB — very unstable.
Something like MaxAge in lumberjack: a configurable number of days to retain log files.
Console output: Failed to write to log, write logs/server.exe.log: file already closed
New Version v2.0 is coming out
https://github.com/arthurkiller/rollingwriter/blob/master/writer.go#L365
func (w *BufferWriter) Write(b []byte) (int, error) {
// ...
buf := append(*w.buf, b...)
atomic.StorePointer((*unsafe.Pointer)(unsafe.Pointer(&w.buf)), (unsafe.Pointer)(&buf))
Hi, I'm using rollingwriter's buffer writer and found during testing that logs can be lost in high-concurrency scenarios. Preliminary analysis suggests the buffer append here is not concurrency-safe.
For example, if the buffer already holds line1 and two concurrent Writes append line2 and line3 respectively, then under heavy contention the StorePointer calls can end up with [line1, line3] overwriting [line1, line2], losing line2.
Adding a mutex (or trylock) around this line during testing fixed the log loss. @arthurkiller, could you confirm whether this understanding is off anywhere?
From
buf := append(*w.buf, b...)
atomic.StorePointer((*unsafe.Pointer)(unsafe.Pointer(&w.buf)), (unsafe.Pointer)(&buf))
To
w.lockBuf.Lock()
*(w.buf) = append(*w.buf, b...)
w.lockBuf.Unlock()
Also, a simple spinlock implementation:
type Locker struct {
lock uintptr
}
func (l *Locker) Lock() {
for !atomic.CompareAndSwapUintptr(&l.lock, 0, 1) {
runtime.Gosched()
}
}
func (l *Locker) Unlock() {
atomic.StoreUintptr(&l.lock, 0)
}
Hi,
Love this project and have been using it for a while. Now I'm interested in using it for parquet files and auto-rotating them. Two things would be ideal:
1. the ability to change the default .log extension (rollingwriter/rollingwriter.go, line 145 in 965c6fc);
2. a callback function when file rotation happens. That way I can re-create the parquet schema each time the file rolls over and the files stay consistent. The same likely applies to rotating CSV files and other formats that carry some sort of header.
If this is something you're keen to add, I'm happy to draft a PR for it.
Suggestion: drop the FileExtension parameter and let FileName carry the complete file name; if the file name's prefix or suffix is needed, it can be obtained by slicing the string.
Suggestion: simplify the RollingTimePattern parameter into presets for rolling by minute (convenient for testing), by hour, or by day.
Thanks for the great package! We use it with zerolog and it works well; however, we recently hit a bug where a large number of goroutines hang while writing logs, all with the following stack trace:
(dlv) bt
0 0x0000000000439405 in runtime.gopark
at go/src/runtime/proc.go:337
1 0x000000000044a385 in runtime.goparkunlock
at go/src/runtime/proc.go:342
2 0x000000000044a385 in runtime.semacquire1
at go/src/runtime/sema.go:144
3 0x0000000000468967 in sync.runtime_SemacquireMutex
at go/src/runtime/sema.go:71
4 0x00000000004721e5 in sync.(*Mutex).lockSlow
at go/src/sync/mutex.go:138
5 0x0000000000b4ee0b in sync.(*Mutex).Lock
at go/src/sync/mutex.go:81
6 0x0000000000b4ee0b in github.com/arthurkiller/rollingwriter.(*LockedWriter).Write
at go-mod-cache/github.com/arthurkiller/[email protected]/writer.go:254
7 0x0000000000695703 in github.com/rs/zerolog.levelWriterAdapter.WriteLevel
at go-mod-cache/github.com/rs/[email protected]/writer.go:20
8 0x0000000000695703 in github.com/rs/zerolog.(*levelWriterAdapter).WriteLevel
at <autogenerated>:1
9 0x0000000000689625 in github.com/rs/zerolog.(*Event).write
at go-mod-cache/github.com/rs/[email protected]/event.go:76
10 0x0000000000689b1b in github.com/rs/zerolog.(*Event).msg
at go-mod-cache/github.com/rs/[email protected]/event.go:140
11 0x0000000000b39f70 in github.com/rs/zerolog.(*Event).Msg
at go-mod-cache/github.com/rs/[email protected]/event.go:106
...
I'm not familiar with the code, however, after a quick search, I found the following suspicious function:
func (w *LockedWriter) Write(b []byte) (n int, err error) {
w.Lock()
select {
case filename := <-w.fire:
if err := w.Reopen(filename); err != nil {
return 0, err // <-- here
}
default:
}
n, err = w.file.Write(b)
w.Unlock()
return
}
I don't know whether the function can actually return at the commented line, leaving the lock acquired and never released. Is this the root cause of the bug above? Any help is appreciated, thanks.
Any updates for v2.0?
I ran a small example:
config := rollingwriter.Config{
LogPath: "./log",
TimeTagFormat: "060102150405",
FileName: "test",
MaxRemain: 5,
RollingPolicy: rollingwriter.VolumeRolling,
RollingVolumeSize: "2K",
Asynchronous: true,
}
writer, err := rollingwriter.NewWriterFromConfig(&config)
if err != nil {
// the error should be handled properly
panic(err)
}
wg := sync.WaitGroup{}
wg.Add(1)
go func() {
for {
bf := []byte(fmt.Sprintf("rollingwriter test write goroutine time %v\n", time.Now()))
writer.Write(bf)
}
}()
wg.Wait()
../github.com/arthurkiller/rollingWriter/manager.go:41:10: assignment mismatch: 1 variable but m.cr.AddFunc returns 2 values
update clean up function
For FileName, consider supporting template strings such as foo-${date}.log.gz, which makes the file suffix sensible.
It's strange to force a TimeTag spliced onto the end of FileName.
Lastly, thanks for your component, which has helped me a lot~
go version: 1.13.1
go.mod:
require (
github.com/arthurkiller/rollingwriter v1.1.1
github.com/gin-contrib/logger v0.0.1
github.com/gin-gonic/gin v1.4.0
github.com/gobuffalo/envy v1.7.1
github.com/rs/zerolog v1.9.1
)
replace github.com/ugorji/go v1.1.4 => github.com/ugorji/go/codec v0.0.0-20190204201341-e444a5086c43
config := &rollingwriter.Config{
	LogPath:       "./storage/logs", // log path
	TimeTagFormat: "20060102",       // time tag format string
	FileName:      "test",           // log file name
	MaxRemain:     5,                // maximum number of retained log files
	// There are currently two rolling policies: by time and by size
	// - time rolling: configured like crontab; e.g. to split at 0:00 every day, use "0 0 0 * * *"
	// - size rolling: the rolling size threshold of a single (uncompressed) log file, e.g. 1G, 500M
	RollingPolicy:      rollingwriter.TimeRolling, // rolling policy: norolling, timerolling, volumerolling
	RollingTimePattern: "* * * * * *",             // time rolling pattern
	RollingVolumeSize:  "2k",                      // size threshold for truncation
	// The writer supports 4 modes:
	// 1. none 2. lock
	// 3. async 4. buffer
	// - unprotected writer: no concurrency-safety guarantee
	// - lock writer: concurrency safety guaranteed by a mutex
	// - async writer: asynchronous writes, concurrency-safe; the Lock option is ignored when async is on
	WriterMode: "lock",
	// BufferWriterThershould in B
	BufferWriterThershould: 8 * 1024 * 1024,
	// Compress will compress log file with gzip
	Compress: false,
}
writer, _ := rollingwriter.NewWriterFromConfig(config)
logger = zerolog.New(writer).With().Timestamp().Logger()
router.GET("/ping", func(c *gin.Context) {
logger.Info().Msg("test")
c.String(200, "pong "+fmt.Sprint(time.Now().Unix()))
})
Keep refreshing /ping:
{"level":"info","time":"2019-10-30T19:29:14+08:00","message":"test"} // this is the only line
test.log only ever contains a single entry, and its timestamp keeps updating, so it looks like writes to the file overwrite instead of append.
Right now it seems to be one or the other.
Hello,
I use the package on Windows with Go version 1.16. If I execute the following code, I get an error:
config := rollingwriter.NewDefaultConfig()
config.RollingPolicy = rollingwriter.VolumeRolling
config.RollingVolumeSize = "10M"
w, _ := rollingwriter.NewWriterFromConfig(&config)
for i := 0; i < 5000000; i++ {
_, err := w.Write([]byte(fmt.Sprintf("This is a test %d", i)))
if err != nil {
fmt.Println(err)
break
}
}
The printed error is: "rename log/log.log log/log.log.202104121537: The process cannot access the file because it is being used by another process"
If I look deeper into the returned error, the inner error is golang.org/x/sys/windows.ERROR_SHARING_VIOLATION (32).
Is this a known error and is there a workaround?
Thank you and best regards,
Niklas
Hello everyone, I would like to know if it's possible to configure rollingwriter to rotate my log every 3 seconds or more.
I am using the logrus library and want to combine it with this library.
I have been trying to do it in this way:
config := rollingwriter.Config{
LogPath: "./log",
TimeTagFormat: "200601021504",
FileName: "example",
MaxRemain: 0,
RollingPolicy: rollingwriter.TimeRolling,
RollingTimePattern: "*/3 * * * * *",
WriterMode: "lock",
Compress: false,
}
output, err := rollingwriter.NewWriterFromConfig(&config)
logrus.SetOutput(output)
Thanks for using CodeLingo!
We've got a fix for an issue:
unchecked type assertion
We've got a new dashboard where you can review and apply the fix: dash.codelingo.io.
Hope it's helpful and happy coding!
CodeLingo Team
When used together with zap, the following error appears at rotation time:
write error: rename log/logs.log log/logs.log.201905111642: The process cannot access the file because it is being used by another process.
From the message, the log file is still in use when rotation happens, so it cannot be renamed.
My code for building the zap logger is as follows:
// A RollingWriter instance built with essentially the same config as the demo
rw, _ := newRollingWriter(&config)
// Build *zap.Logger on top of the RollingWriter
ws := zapcore.AddSync(rw)
core := zapcore.NewCore(
	zapcore.NewJSONEncoder(zap.NewProductionEncoderConfig()),
	ws,
	zap.InfoLevel,
)
logger = zap.New(core)
P.S. The rollingWriter code was taken from the master branch.
It'd be nice if you can create a tagged release for the latest fix. It will also avoid issues with new people using the library on windows. Thanks!