1 The Problem
My initial understanding of the slice growth mechanism was the following:
If a slice's capacity is less than 1024, growing doubles the capacity; once the capacity exceeds 1024, it grows to 1.25 times the old capacity.
package main

import "fmt"

func main() {
    a := make([]int, 3) // a has len 3, cap 3
    print("a before append:")
    slicePrint(a)
    a = append(a, 1)
    print("a after append:")
    slicePrint(a)
    b := make([]int, 100) // b has len 100, cap 100
    print("b before append:")
    slicePrint(b)
    b = append(b, 1)
    print("b after append:")
    slicePrint(b)
    c := make([]int, 1000) // c has len 1000, cap 1000
    print("c before append:")
    slicePrint(c)
    c = append(c, 1)
    print("c after append:")
    slicePrint(c)
}

func slicePrint(s []int) {
    fmt.Printf("len=%d cap=%d \n", len(s), cap(s))
}
The output is:
a before append:len=3 cap=3
a after append:len=4 cap=6
b before append:len=100 cap=100
b after append:len=101 cap=224
c before append:len=1000 cap=1000
c after append:len=1001 cap=1536
As we can see, slice a's capacity did double, but b and c did not grow to 2x or 1.25x as that rule predicts. So we can conclude that, at least on Go 1.20, the popular claim quoted at the beginning is wrong. Let's go to the source code and see how growth actually works.
2 Source Code Analysis
According to what I found online, when append() needs to grow a slice it calls the growslice() function in …/src/runtime/slice.go, so let's take a look at that function. It is fairly long, so we will go through it in several parts.
2.1 The comments
Here I have excerpted the parameter and return-value portion of the doc comment and added a brief explanation.
// growslice allocates new backing store for a slice.
//
// arguments:
//
// oldPtr = pointer to the slice's backing array
// newLen = new length (= oldLen + num)
// oldCap = original slice's capacity.
// num = number of elements being added
// et = element type
//
// return values:
//
// newPtr = pointer to the new backing store
// newLen = same value as the argument
// newCap = capacity of the new backing store
As the comment shows, the function takes five arguments: oldPtr, a pointer to the original slice's backing array; three ints, newLen, oldCap and num, which are the new length, the original capacity and the number of elements being appended; and finally et, a pointer to a _type describing the slice's element type (int and int32, for example, are handled differently). It returns three values: newPtr, a pointer to the new backing store; newLen, the same length passed in; and newCap, the capacity of the new backing store.
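For reference, these three return values are packed into the runtime's slice header, which is defined in the same file (the field comments are mine):

// runtime/slice.go: the header that growslice builds and returns
type slice struct {
    array unsafe.Pointer // newPtr: pointer to the backing array
    len   int            // newLen
    cap   int            // newCap
}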
The rest of the comment contains other useful information as well, so the comments in the source are worth reading.
2.2 The growslice() function body
Below is the complete growslice() function from Go 1.20:
func growslice(oldPtr unsafe.Pointer, newLen, oldCap, num int, et *_type) slice {
    oldLen := newLen - num
    if raceenabled {
        callerpc := getcallerpc()
        racereadrangepc(oldPtr, uintptr(oldLen*int(et.size)), callerpc, abi.FuncPCABIInternal(growslice))
    }
    if msanenabled {
        msanread(oldPtr, uintptr(oldLen*int(et.size)))
    }
    if asanenabled {
        asanread(oldPtr, uintptr(oldLen*int(et.size)))
    }

    if newLen < 0 {
        panic(errorString("growslice: len out of range"))
    }

    if et.size == 0 {
        // append should not create a slice with nil pointer but non-zero len.
        // We assume that append doesn't need to preserve oldPtr in this case.
        return slice{unsafe.Pointer(&zerobase), newLen, newLen}
    }

    newcap := oldCap
    doublecap := newcap + newcap
    if newLen > doublecap {
        newcap = newLen
    } else {
        const threshold = 256
        if oldCap < threshold {
            newcap = doublecap
        } else {
            // Check 0 < newcap to detect overflow
            // and prevent an infinite loop.
            for 0 < newcap && newcap < newLen {
                // Transition from growing 2x for small slices
                // to growing 1.25x for large slices. This formula
                // gives a smooth-ish transition between the two.
                newcap += (newcap + 3*threshold) / 4
            }
            // Set newcap to the requested cap when
            // the newcap calculation overflowed.
            if newcap <= 0 {
                newcap = newLen
            }
        }
    }

    var overflow bool
    var lenmem, newlenmem, capmem uintptr
    // Specialize for common values of et.size.
    // For 1 we don't need any division/multiplication.
    // For goarch.PtrSize, compiler will optimize division/multiplication into a shift by a constant.
    // For powers of 2, use a variable shift.
    switch {
    case et.size == 1:
        lenmem = uintptr(oldLen)
        newlenmem = uintptr(newLen)
        capmem = roundupsize(uintptr(newcap))
        overflow = uintptr(newcap) > maxAlloc
        newcap = int(capmem)
    case et.size == goarch.PtrSize:
        lenmem = uintptr(oldLen) * goarch.PtrSize
        newlenmem = uintptr(newLen) * goarch.PtrSize
        capmem = roundupsize(uintptr(newcap) * goarch.PtrSize)
        overflow = uintptr(newcap) > maxAlloc/goarch.PtrSize
        newcap = int(capmem / goarch.PtrSize)
    case isPowerOfTwo(et.size):
        var shift uintptr
        if goarch.PtrSize == 8 {
            // Mask shift for better code generation.
            shift = uintptr(sys.TrailingZeros64(uint64(et.size))) & 63
        } else {
            shift = uintptr(sys.TrailingZeros32(uint32(et.size))) & 31
        }
        lenmem = uintptr(oldLen) << shift
        newlenmem = uintptr(newLen) << shift
        capmem = roundupsize(uintptr(newcap) << shift)
        overflow = uintptr(newcap) > (maxAlloc >> shift)
        newcap = int(capmem >> shift)
        capmem = uintptr(newcap) << shift
    default:
        lenmem = uintptr(oldLen) * et.size
        newlenmem = uintptr(newLen) * et.size
        capmem, overflow = math.MulUintptr(et.size, uintptr(newcap))
        capmem = roundupsize(capmem)
        newcap = int(capmem / et.size)
        capmem = uintptr(newcap) * et.size
    }

    // The check of overflow in addition to capmem > maxAlloc is needed
    // to prevent an overflow which can be used to trigger a segfault
    // on 32bit architectures with this example program:
    //
    // type T [1<<27 + 1]int64
    //
    // var d T
    // var s []T
    //
    // func main() {
    //   s = append(s, d, d, d, d)
    //   print(len(s), "\n")
    // }
    if overflow || capmem > maxAlloc {
        panic(errorString("growslice: len out of range"))
    }

    var p unsafe.Pointer
    if et.ptrdata == 0 {
        p = mallocgc(capmem, nil, false)
        // The append() that calls growslice is going to overwrite from oldLen to newLen.
        // Only clear the part that will not be overwritten.
        // The reflect_growslice() that calls growslice will manually clear
        // the region not cleared here.
        memclrNoHeapPointers(add(p, newlenmem), capmem-newlenmem)
    } else {
        // Note: can't use rawmem (which avoids zeroing of memory), because then GC can scan uninitialized memory.
        p = mallocgc(capmem, et, true)
        if lenmem > 0 && writeBarrier.enabled {
            // Only shade the pointers in oldPtr since we know the destination slice p
            // only contains nil pointers because it has been cleared during alloc.
            bulkBarrierPreWriteSrcOnly(uintptr(p), uintptr(oldPtr), lenmem-et.size+et.ptrdata)
        }
    }
    memmove(p, oldPtr, lenmem)

    return slice{p, newLen, newcap}
}
2.3 The first cap-related snippet
Of this long function, the first piece we care about is the cap calculation:
newcap := oldCap
doublecap := newcap + newcap
if newLen > doublecap {
    newcap = newLen
} else {
    const threshold = 256
    if oldCap < threshold {
        newcap = doublecap
    } else {
        // Check 0 < newcap to detect overflow
        // and prevent an infinite loop.
        for 0 < newcap && newcap < newLen {
            // Transition from growing 2x for small slices
            // to growing 1.25x for large slices. This formula
            // gives a smooth-ish transition between the two.
            newcap += (newcap + 3*threshold) / 4
        }
        // Set newcap to the requested cap when
        // the newcap calculation overflowed.
        if newcap <= 0 {
            newcap = newLen
        }
    }
}
We can see that:
First, newLen = oldLen + num; newcap starts out equal to oldCap, and doublecap equal to 2*oldCap.
- If newLen is greater than twice the old capacity, the new capacity is simply set to the new length.
- If the new length is at most twice the old capacity, a threshold of 256 comes into play: if oldCap < threshold, newcap is set to double the old capacity; otherwise newcap += (newcap + 3*threshold) / 4 is applied repeatedly until newcap reaches at least newLen.
In other words, when the new length is at most twice the old capacity: if the old capacity is below 256, the new capacity is simply double the old capacity; if the old capacity is 256 or more, the new capacity becomes the old capacity plus a quarter of the old capacity plus 192 (applied more than once if necessary). And when the new length already exceeds twice the old capacity, the new capacity is just the new length.
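A minimal sketch of this first stage (stage1NewCap is a hypothetical helper name; it mirrors the logic above and omits the runtime's overflow checks):

package main

import "fmt"

// stage1NewCap reproduces the capacity calculation shown above,
// without the overflow handling present in the real runtime code.
func stage1NewCap(oldCap, newLen int) int {
    newcap := oldCap
    doublecap := newcap + newcap
    if newLen > doublecap {
        return newLen
    }
    const threshold = 256
    if oldCap < threshold {
        return doublecap
    }
    for newcap < newLen {
        // Smooth transition from 2x growth (small slices) to ~1.25x growth (large slices).
        newcap += (newcap + 3*threshold) / 4
    }
    return newcap
}

func main() {
    fmt.Println(stage1NewCap(3, 4))       // 6
    fmt.Println(stage1NewCap(100, 101))   // 200
    fmt.Println(stage1NewCap(1000, 1001)) // 1442
}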
Based on this small conclusion, let's try to predict the results of the earlier program, repeated below:
package main

import "fmt"

func main() {
    a := make([]int, 3) // a has len 3, cap 3
    print("a before append:")
    slicePrint(a)
    a = append(a, 1)
    print("a after append:")
    slicePrint(a)
    b := make([]int, 100) // b has len 100, cap 100
    print("b before append:")
    slicePrint(b)
    b = append(b, 1)
    print("b after append:")
    slicePrint(b)
    c := make([]int, 1000) // c has len 1000, cap 1000
    print("c before append:")
    slicePrint(c)
    c = append(c, 1)
    print("c after append:")
    slicePrint(c)
}

func slicePrint(s []int) {
    fmt.Printf("len=%d cap=%d \n", len(s), cap(s))
}
Before the append calls, slices a, b and c have capacities 3, 100 and 1000, and each append adds a single element. By the conclusion above, their capacities after growing should be 6 (3*2), 200 (100*2) and 1442 (1000 + (1000 + 3*256)/4).
The actual results, however, are 6, 224 and 1536, so two of the three predictions are wrong. Growth is therefore not just this arithmetic on the numbers, so let's keep reading.
2.4 The second cap-related snippet
var lenmem, newlenmem, capmem uintptr
// Specialize for common values of et.size.
// For 1 we don't need any division/multiplication.
// For goarch.PtrSize, compiler will optimize division/multiplication into a shift by a constant.
// For powers of 2, use a variable shift.
switch {
case et.size == 1:
    lenmem = uintptr(oldLen)
    newlenmem = uintptr(newLen)
    capmem = roundupsize(uintptr(newcap))
    overflow = uintptr(newcap) > maxAlloc
    newcap = int(capmem)
case et.size == goarch.PtrSize:
    lenmem = uintptr(oldLen) * goarch.PtrSize
    newlenmem = uintptr(newLen) * goarch.PtrSize
    capmem = roundupsize(uintptr(newcap) * goarch.PtrSize)
    overflow = uintptr(newcap) > maxAlloc/goarch.PtrSize
    newcap = int(capmem / goarch.PtrSize)
case isPowerOfTwo(et.size):
    var shift uintptr
    if goarch.PtrSize == 8 {
        // Mask shift for better code generation.
        shift = uintptr(sys.TrailingZeros64(uint64(et.size))) & 63
    } else {
        shift = uintptr(sys.TrailingZeros32(uint32(et.size))) & 31
    }
    lenmem = uintptr(oldLen) << shift
    newlenmem = uintptr(newLen) << shift
    capmem = roundupsize(uintptr(newcap) << shift)
    overflow = uintptr(newcap) > (maxAlloc >> shift)
    newcap = int(capmem >> shift)
    capmem = uintptr(newcap) << shift
default:
    lenmem = uintptr(oldLen) * et.size
    newlenmem = uintptr(newLen) * et.size
    capmem, overflow = math.MulUintptr(et.size, uintptr(newcap))
    capmem = roundupsize(capmem)
    newcap = int(capmem / et.size)
    capmem = uintptr(newcap) * et.size
}
In this snippet, newcap changes once more. How? The comments tell us that the growth logic specializes on the element type: for element types that occupy a single byte (such as byte), no multiplication or division is needed at all; for types whose size equals goarch.PtrSize (8 bytes on 64-bit platforms, e.g. int and int64), the compiler turns the multiplication/division into a shift by a constant; for other power-of-two sizes a variable shift is used; and everything else falls into the general default case.
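To see that the element type really does affect the result, here is a small experiment (a sketch; the exact capacities depend on the Go version and platform):

package main

import "fmt"

func main() {
    // Both slices request the same logical growth (newcap = 200 elements after stage one),
    // but the allocation is rounded up in bytes, so the element size matters.
    bs := make([]byte, 100)
    is := make([]int, 100)
    bs = append(bs, 0)
    is = append(is, 0)
    fmt.Println("[]byte cap:", cap(bs)) // rounded up from 200 bytes
    fmt.Println("[]int  cap:", cap(is)) // rounded up from 200*8 = 1600 bytes; 224 on Go 1.20 amd64
}

On my reading of the code, the two capacities differ because 200 bytes and 1600 bytes round up to different malloc size classes.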
Let's walk through one of the cases, et.size == goarch.PtrSize, using slice b from earlier: oldLen = 100, newLen = 101, oldCap = 100, and the newcap computed in the previous stage is 200. Then:
- lenmem = uintptr(oldLen) * goarch.PtrSize = 100*8 = 800
- newlenmem = uintptr(newLen) * goarch.PtrSize = 101*8 = 808
- capmem = roundupsize(uintptr(newcap) * goarch.PtrSize) = roundupsize(200*8) = roundupsize(1600)
- newcap = int(capmem / goarch.PtrSize) = int(roundupsize(1600) / 8)
So now we need to find the roundupsize function.
// Returns size of the memory block that mallocgc will allocate if you ask for the size.
func roundupsize(size uintptr) uintptr {
    if size < _MaxSmallSize {
        if size <= smallSizeMax-8 {
            return uintptr(class_to_size[size_to_class8[divRoundUp(size, smallSizeDiv)]])
        } else {
            return uintptr(class_to_size[size_to_class128[divRoundUp(size-smallSizeMax, largeSizeDiv)]])
        }
    }
    if size+_PageSize < size {
        return size
    }
    return alignUp(size, _PageSize)
}
This function works as follows: if the requested size is at most smallSizeMax-8 = 1024-8 = 1016 bytes, it returns a value looked up through the size_to_class8 and class_to_size tables; if the size lies between 1016 and _MaxSmallSize = 32768 bytes, it goes through the size_to_class128 table instead. Our argument is 1600, so we take the second branch and look at divRoundUp next.
divRoundUp(size-smallSizeMax, largeSizeDiv)
// divRoundUp(1600-1024, 128)

func divRoundUp(n, a uintptr) uintptr {
    // a is generally a power of two. This will get inlined and
    // the compiler will optimize the division.
    return (n + a - 1) / a
}
Here largeSizeDiv and smallSizeMax are constants with the values 128 and 1024, so divRoundUp(1600-1024, 128) = divRoundUp(576, 128) = 5. Then size_to_class128[5] = 37 and class_to_size[37] = 1792, so roundupsize(1600) = 1792, and the new capacity is roundupsize(1600)/8 = 1792/8 = 224.
So a slice of capacity 100 grows to capacity 224 when one element is appended, which is exactly the answer we were looking for. The other values can be worked out from the same principle, so I won't repeat the process here.
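As a standalone sanity check of the arithmetic above (only divRoundUp is reproduced from the runtime; the table values 37 and 1792 come from the discussion above):

package main

import "fmt"

// divRoundUp is copied from the runtime: ceiling division of n by a.
func divRoundUp(n, a uintptr) uintptr {
    return (n + a - 1) / a
}

func main() {
    const (
        smallSizeMax = 1024
        largeSizeDiv = 128
    )
    capmem := uintptr(200 * 8) // newcap * goarch.PtrSize for slice b
    fmt.Println(divRoundUp(capmem-smallSizeMax, largeSizeDiv)) // 5 -> size class 37 -> 1792 bytes
    fmt.Println(1792 / 8)                                      // 224, matching the observed cap
}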
3 Summary
Slice growth is not as simple as it might seem. The way I see it, it happens in two stages:
- Stage one: a candidate newcap is computed purely arithmetically, as described in section 2.3.
- Stage two: starting from that candidate newcap, the final capacity is chosen based on the concrete element type and the length that must be accommodated. This stage treats small and large allocations differently, performs lookups in the size-class tables, and ultimately returns the final capacity.
4 Closing Thoughts
Writing this, I kept wondering why Go needs such an involved growth mechanism. Going down to the lowest level, Go really does carve sizes into ranges, the initial threshold of 256, then 1024, 32768 and so on, and handles each range differently, which is how it keeps allocation fast.
Of course, there is still plenty here that I don't fully understand yet. If you have your own insights, I'd be grateful if you shared them in the comments.