1. 08 Nov, 2019 27 commits
    • encoding/asn1: fix unmarshalling SEQUENCE OF SET · 4692343c
      kaxapi authored
      Fixes #27426
      
      Change-Id: I34d4784658ce7b9e6130bae9717e80d0e9a290a2
      GitHub-Last-Rev: 6de610cdcef11832f131b84a0338b68af16b10da
      GitHub-Pull-Request: golang/go#30059
      Reviewed-on: https://go-review.googlesource.com/c/go/+/160819
      Reviewed-by: Agniva De Sarker <agniva.quicksilver@gmail.com>
      Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
      Run-TryBot: Agniva De Sarker <agniva.quicksilver@gmail.com>
    • net: halve the allocs in ParseCIDR by sharing slice backing · aff3aaa4
      Chris Stockton authored
      Share a slice backing between the host address, network IP, and mask.
      Add tests to verify that each slice header has len==cap to prevent
      introducing new behavior into Go programs. This has a small tradeoff
      of allocating a larger slice backing when the address is invalid.
      Earlier error detection of invalid prefix length helps balance this
      cost and a new benchmark for ParseCIDR helps measure it.
      
      This yields a ~22% speedup for all nil-err CIDR tests:
      
        name               old time/op    new time/op    delta
        ParseCIDR/IPv4-24    9.17µs ± 6%    7.20µs ± 7%  -21.47%  (p=0.000 n=20+20)
        ParseCIDR/IPv6-24    9.02µs ± 6%    6.95µs ± 9%  -23.02%  (p=0.000 n=20+20)

        name               old alloc/op   new alloc/op   delta
        ParseCIDR/IPv4-24    1.51kB ± 0%    1.55kB ± 0%   +2.65%  (p=0.000 n=20+20)
        ParseCIDR/IPv6-24    1.51kB ± 0%    1.55kB ± 0%   +2.65%  (p=0.000 n=20+20)

        name               old allocs/op  new allocs/op  delta
        ParseCIDR/IPv4-24      68.0 ± 0%      34.0 ± 0%  -50.00%  (p=0.000 n=20+20)
        ParseCIDR/IPv6-24      68.0 ± 0%      34.0 ± 0%  -50.00%  (p=0.000 n=20+20)
      
      Including the non-nil-err CIDR tests, the gain is around 25%:
      
        name               old time/op    new time/op    delta
        ParseCIDR/IPv4-24    11.8µs ±11%     8.9µs ± 8%  -24.88%  (p=0.000 n=20+20)
        ParseCIDR/IPv6-24    11.7µs ± 7%     8.7µs ± 5%  -25.93%  (p=0.000 n=20+20)

        name               old alloc/op   new alloc/op   delta
        ParseCIDR/IPv4-24    1.98kB ± 0%    2.00kB ± 0%   +1.21%  (p=0.000 n=20+20)
        ParseCIDR/IPv6-24    1.98kB ± 0%    2.00kB ± 0%   +1.21%  (p=0.000 n=20+20)

        name               old allocs/op  new allocs/op  delta
        ParseCIDR/IPv4-24      87.0 ± 0%      48.0 ± 0%  -44.83%  (p=0.000 n=20+20)
        ParseCIDR/IPv6-24      87.0 ± 0%      48.0 ± 0%  -44.83%  (p=0.000 n=20+20)
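
      The sharing technique the commit describes, as a minimal hedged
      sketch (function name illustrative, not the actual net package
      code; needs import "net"). Full slice expressions keep len == cap
      for each piece, so appending to one can never spill into another:

        // shareBacking allocates one backing array for an address and a
        // mask, halving the allocations of using separate slices.
        func shareBacking() (net.IP, net.IPMask) {
            buf := make([]byte, net.IPv6len*2)
            ip := net.IP(buf[:net.IPv6len:net.IPv6len])
            mask := net.IPMask(buf[net.IPv6len : net.IPv6len*2 : net.IPv6len*2])
            return ip, mask
        }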
      
      Change-Id: I17f33c9049f7875b6ebdfde1f80b386a7aef9b94
      GitHub-Last-Rev: 0a031f44b458e2c6465d0e59fb4653e08c44a854
      GitHub-Pull-Request: golang/go#26948
      Reviewed-on: https://go-review.googlesource.com/c/go/+/129118
      Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
    • net/http: refactor test TestParseFormUnknownContentType · 1fd3f8bd
      David Ndungu authored
      Use names to better communicate when a test case fails.
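
      The pattern in question, sketched with illustrative case names and
      fields (not the actual test; needs import "testing"):

        func TestParseFormUnknownContentType(t *testing.T) {
            tests := []struct {
                name        string // reported on failure by t.Run
                contentType string
            }{
                {name: "empty type", contentType: ""},
                {name: "unknown type", contentType: "application/x-unknown"},
            }
            for _, tt := range tests {
                t.Run(tt.name, func(t *testing.T) {
                    _ = tt.contentType // ... build a request and call ParseForm ...
                })
            }
        }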
      
      Change-Id: Id882783cb5e444b705443fbcdf612713f8a3b032
      Reviewed-on: https://go-review.googlesource.com/c/go/+/187823
      Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
    • runtime: sleep a bit when waiting for running debug call goroutine · cfb13126
      Ian Lance Taylor authored
      Without this CL, one of the TestDebugCall tests would fail 1% to 2% of
      the time on the android-amd64-emu gomote. With this CL, I ran the
      tests for 1000 iterations with no failures.
      
      Fixes #32985
      
      Change-Id: I541268a2a0c10d0cd7604f0b2dbd15c1d18e5730
      Reviewed-on: https://go-review.googlesource.com/c/go/+/205248
      Run-TryBot: Ian Lance Taylor <iant@golang.org>
      Reviewed-by: Bryan C. Mills <bcmills@google.com>
    • runtime: add per-p page allocation cache · a2cd2bd5
      Michael Anthony Knyszek authored
      This change adds a per-p free page cache which the page allocator may
      allocate out of without a lock. The change also introduces a completely
      lockless page allocator fast path.
      
      Although the cache contains at most 64 pages (and usually fewer), the
      vast majority (85%+) of page allocations are exactly 1 page in size.
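
      A hedged sketch of such a lockless fast path over a 64-page bitmap
      cache (type, field, and function names illustrative, not the
      runtime's; needs import "math/bits"):

        // pageCache tracks up to 64 free pages starting at base.
        type pageCache struct {
            base  uintptr // base address of the 64-page chunk
            cache uint64  // bitmap: set bits are free pages
        }

        // allocPage hands out one page without taking the heap lock.
        func (c *pageCache) allocPage(pageSize uintptr) (uintptr, bool) {
            if c.cache == 0 {
                return 0, false // cache empty: fall back to the locked slow path
            }
            i := uint(bits.TrailingZeros64(c.cache)) // lowest free page
            c.cache &^= 1 << i                       // mark it allocated
            return c.base + uintptr(i)*pageSize, true
        }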
      
      Updates #35112.
      
      Change-Id: I170bf0a9375873e7e3230845eb1df7e5cf741b78
      Reviewed-on: https://go-review.googlesource.com/c/go/+/195701
      Run-TryBot: Michael Knyszek <mknyszek@google.com>
      Reviewed-by: Austin Clements <austin@google.com>
    • runtime: add page cache and tests · 81640ea3
      Michael Anthony Knyszek authored
      This change adds a page cache structure which owns a chunk of free pages
      at a given base address. It also adds code to allocate to this cache
      from the page allocator. Finally, it adds tests for both.
      
      Notably this change does not yet integrate the code into the runtime,
      just into runtime tests.
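
      A hedged sketch of allocating to such a cache from a central
      free-page bitmap, reusing the pageCache shape sketched in the
      previous entry (names illustrative; the real allocator is more
      involved):

        // allocToCache finds a 64-page group with free pages, claims the
        // whole group for the cache, and records its base address.
        func allocToCache(bitmap []uint64, chunkBase, pageSize uintptr) pageCache {
            for i, word := range bitmap {
                if word != 0 {
                    bitmap[i] = 0 // the cache now owns this 64-page group
                    return pageCache{
                        base:  chunkBase + uintptr(i)*64*pageSize,
                        cache: word,
                    }
                }
            }
            return pageCache{} // no free pages found
        }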
      
      Updates #35112.
      
      Change-Id: Ibe121498d5c3be40390fab58a3816295601670df
      Reviewed-on: https://go-review.googlesource.com/c/go/+/196643
      Run-TryBot: Michael Knyszek <mknyszek@google.com>
      TryBot-Result: Gobot Gobot <gobot@golang.org>
      Reviewed-by: Austin Clements <austin@google.com>
    • encoding/binary: make Read return an error when data is not a pointer · c444ec30
      Udalov Max authored
      Make binary.Read return an error when the passed `data` argument is not
      a pointer to a fixed-size value or a slice of fixed-size values.
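
      A short illustration of the new behavior (hedged; the exact error
      text is not quoted from the implementation; needs imports "bytes",
      "encoding/binary", "fmt"):

        data := []byte{0, 0, 0, 1}
        var x uint32
        // Passing x by value is a mistake: Read cannot write through it.
        err := binary.Read(bytes.NewReader(data), binary.BigEndian, x)
        fmt.Println(err != nil) // true: non-pointer data is now an error
        // Passing &x works as usual.
        err = binary.Read(bytes.NewReader(data), binary.BigEndian, &x)
        fmt.Println(x, err) // 1 <nil>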
      
      Fixes #32927
      
      Change-Id: I04f48be55fe9b0cc66c983d152407d0e42cbcd95
      Reviewed-on: https://go-review.googlesource.com/c/go/+/184957
      Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
    • cmd/go/internal/lockedfile: add a unit-test for Transform · a84ac189
      Bryan C. Mills authored
      Updates #35425
      
      Change-Id: I9ca2251246ee2fa9bb7a335d5eff94d3c9f1f004
      Reviewed-on: https://go-review.googlesource.com/c/go/+/206143
      Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
    • cmd/go/internal/modload: use lockedfile.Read for the initial read of the go.mod file · c5fac1ed
      Bryan C. Mills authored
      Updates #34634
      Fixes #35425
      
      Change-Id: I878a8d229b33dcde9e7d4dfd82ddf9815d38a465
      Reviewed-on: https://go-review.googlesource.com/c/go/+/206142
      Run-TryBot: Bryan C. Mills <bcmills@google.com>
      Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
      TryBot-Result: Gobot Gobot <gobot@golang.org>
    • runtime: add per-p mspan cache · 4517c02f
      Michael Anthony Knyszek authored
      This change adds a per-p mspan object cache similar to the sudog cache.
      Unfortunately this cache can't quite operate like the sudog cache, since
      it is used in contexts where write barriers are disallowed (i.e.
      allocation codepaths), so rather than managing an array and a slice,
      it's just an array and a length. A little bit more unsafe, but avoids
      any write barriers.
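
      The shape described, as a hedged sketch (names illustrative; mspan
      stands in for the runtime's span type). An array plus a length means
      popping and pushing only mutate an int and an array slot, never a
      slice header, which is what lets this run where write barriers are
      disallowed:

        type mspan struct{ /* span metadata */ }

        type spanCache struct {
            buf [128]*mspan
            len int
        }

        func (c *spanCache) pop() *mspan {
            if c.len == 0 {
                return nil // empty: refill from the central allocator
            }
            c.len--
            s := c.buf[c.len]
            c.buf[c.len] = nil
            return s
        }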
      
      The purpose of this change is to reduce the number of operations which
      require the heap lock in allocation, paving the way for a lockless fast
      path.
      
      Updates #35112.
      
      Change-Id: I32cfdcd8528fb7be985640e4f3a13cb98ffb7865
      Reviewed-on: https://go-review.googlesource.com/c/go/+/196642
      Run-TryBot: Michael Knyszek <mknyszek@google.com>
      TryBot-Result: Gobot Gobot <gobot@golang.org>
      Reviewed-by: Austin Clements <austin@google.com>
    • runtime: rearrange mheap_.alloc* into allocSpan · a762221b
      Michael Anthony Knyszek authored
      This change combines the functionality of allocSpanLocked, allocManual,
      and alloc_m into a new method called allocSpan. While these methods'
      abstraction boundaries are OK when the heap lock is held throughout,
      they start to break down when we want finer-grained locking in the page
      allocator.
      
      allocSpan does just that, and only locks the heap when it absolutely has
      to. Piggy-backing off of work in previous CLs to make more of span
      initialization lockless, this change makes span initialization entirely
      lockless as part of the reorganization.
      
      Ultimately this change will enable us to add a lockless fast path to
      allocSpan.
      
      Updates #35112.
      
      Change-Id: I99875939d75fb4e958a67ac99e4a7cda44f06864
      Reviewed-on: https://go-review.googlesource.com/c/go/+/196641
      Run-TryBot: Michael Knyszek <mknyszek@google.com>
      TryBot-Result: Gobot Gobot <gobot@golang.org>
      Reviewed-by: Austin Clements <austin@google.com>
    • runtime: fix (*gcSweepBuf).block guarantees · a5a6f610
      Michael Anthony Knyszek authored
      Currently gcSweepBuf guarantees that push operations may be performed
      concurrently with each other and that block operations may be performed
      concurrently with push operations as well.
      
      Unfortunately, this isn't quite true. The existing code allows push
      operations to happen concurrently with each other, but block operations
      may return blocks with nil entries. The way this can happen is if two
      concurrent pushers grab slots to push to, and the one with the earlier
      slot in the buffer has not yet written its span value by the time block
      is called. The existing code in block only checks whether the very last
      value in the block is nil, when really an arbitrary number of trailing
      values in the block may or may not be nil.
      
      Today, this case can't actually happen because when push operations
      happen concurrently during a GC (which is the only time block is
      called), they only ever happen during an allocation with the heap lock
      held, effectively serializing them. A block operation may happen
      concurrently with one of these pushes, but its callers will never see a
      nil mspan. Outside of a GC, this isn't a problem because although push
      operations from allocations can run concurrently with push operations
      from sweeping, block operations will never run.
      
      In essence, the real concurrency guarantees provided by gcSweepBuf are
      that block operations may happen concurrently with push operations, but
      that push operations may not be concurrent with each other if there are
      any block operations.
      
      To fix this, and to prepare for push operations happening without the
      heap lock held in a future CL, we update the documentation for block to
      correctly state that there may be nil entries in the returned slice.
      While we're here, make the mspan writes into the buffer atomic to avoid
      a block user racing on a nil check, and document that the user should
      load mspan values from the returned slice atomically. Finally, we make
      all callers of block adhere to the new rules.
      
      We choose to allow nil values rather than filter them out because the
      only caller of block is markrootSpans, and if it catches a nil entry,
      then there wasn't anything to mark in there anyway since the span is
      just being created.
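
      The caller-side rule, as a hedged sketch (portable sync/atomic
      stand-ins for the runtime's internal atomics; types illustrative;
      needs imports "sync/atomic", "unsafe"):

        // Entries are held as unsafe.Pointer so they can be loaded with
        // sync/atomic; a nil entry means a concurrent pusher claimed the
        // slot but has not written its span yet, so just skip it.
        func walkBlock(entries []unsafe.Pointer, mark func(unsafe.Pointer)) {
            for i := range entries {
                p := atomic.LoadPointer(&entries[i])
                if p == nil {
                    continue // nothing to mark: the span is still being created
                }
                mark(p)
            }
        }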
      
      Updates #35112.
      
      Change-Id: I6450aab15f51690d7a000ba5b3d529cf2ca5da1e
      Reviewed-on: https://go-review.googlesource.com/c/go/+/203318
      Run-TryBot: Michael Knyszek <mknyszek@google.com>
      TryBot-Result: Gobot Gobot <gobot@golang.org>
      Reviewed-by: Austin Clements <austin@google.com>
    • runtime: make more page sweeper operations atomic · dac936a4
      Michael Anthony Knyszek authored
      This change makes it so that allocation and free related page sweeper
      metadata operations (e.g. pageInUse and pagesInUse) are atomic rather
      than protected by the heap lock. This will help in reducing the length
      of the critical path with the heap lock held in future changes.
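
      A hedged sketch of the kind of change involved: a bit in a shared
      bitmap set with a CAS loop instead of under the heap lock (the
      runtime uses its internal atomic package; this is a portable
      stand-in, needing import "sync/atomic"):

        // setPageInUse atomically sets bit i of a bitmap built from
        // uint32 words, without holding any lock.
        func setPageInUse(bitmap []uint32, i uint) {
            word := &bitmap[i/32]
            mask := uint32(1) << (i % 32)
            for {
                old := atomic.LoadUint32(word)
                if old&mask != 0 {
                    return // already set
                }
                if atomic.CompareAndSwapUint32(word, old, old|mask) {
                    return
                }
            }
        }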
      
      Updates #35112.
      
      Change-Id: Ie82bff024204dd17c4c671af63350a7a41add354
      Reviewed-on: https://go-review.googlesource.com/c/go/+/196640
      Run-TryBot: Michael Knyszek <mknyszek@google.com>
      TryBot-Result: Gobot Gobot <gobot@golang.org>
      Reviewed-by: Austin Clements <austin@google.com>
    • cmd/internal/obj/arm64: make function epilogue async-signal safe · 47232f0d
      Cherry Zhang authored
      When the frame size is large, we generate
      
      MOVD.P	0xf0(SP), LR
      ADD	$(framesize-0xf0), SP
      
      This is problematic: after the first instruction, we have a
      partial frame of size (framesize-0xf0). If we try to unwind the
      stack at this point, we'll try to read the LR from the stack at
      0(SP) (the new SP) as the frame size is not 0. But this slot does
      not contain a valid LR.
      
      Fix this by not changing SP in two instructions. Instead,
      generate
      
      MOVD	(SP), LR
      ADD	$framesize, SP
      
      This affects not only async preemption but also profiling, so we
      change the generated instructions instead of just marking the
      sequence as an unsafe point.
      
      Change-Id: I4e78c62d50ffc4acff70ccfbfec16a5ccae17f24
      Reviewed-on: https://go-review.googlesource.com/c/go/+/206057
      Run-TryBot: Cherry Zhang <cherryyz@google.com>
      TryBot-Result: Gobot Gobot <gobot@golang.org>
      Reviewed-by: Ian Lance Taylor <iant@golang.org>
    • runtime: add async preemption support on PPC64 · 374c2847
      Cherry Zhang authored
      This CL adds support of call injection and async preemption on
      PPC64.
      
      For the injected call to return to the preempted PC, we have to
      clobber either LR or CTR. For reasons mentioned in previous CLs,
      we choose CTR. Previous CLs have marked code sequences that use
      CTR async-nonpreemptible.
      
      Change-Id: Ia642b5f06a890dd52476f45023b2a830c522eee0
      Reviewed-on: https://go-review.googlesource.com/c/go/+/203824
      Run-TryBot: Cherry Zhang <cherryyz@google.com>
      Reviewed-by: Keith Randall <khr@golang.org>
    • runtime: remove unnecessary large parameter to mheap_.alloc · 7f574e47
      Michael Anthony Knyszek authored
      mheap_.alloc currently accepts both a spanClass and a "large" parameter
      indicating whether the allocation is large. These are redundant, since
      spanClass.sizeclass() == 0 is an equivalent way to determine this and is
      already used in mheap_.alloc. There are no places in the runtime where
      the size class could be non-zero and large == true.
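
      The equivalence in question, sketched (the spanClass type and its
      sizeclass method mirror the runtime's; the isLarge helper is
      illustrative, not the actual code):

        // spanClass packs a size class and a noscan bit; size class 0 is
        // reserved for large (multi-page) objects.
        type spanClass uint8

        func (spc spanClass) sizeclass() int8 { return int8(spc >> 1) }

        // isLarge shows why a separate "large" flag is redundant.
        func isLarge(spc spanClass) bool { return spc.sizeclass() == 0 }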
      
      Updates #35112.
      
      Change-Id: Ie66facf8f0faca6f4cd3d20a8ac4bc259e11823d
      Reviewed-on: https://go-review.googlesource.com/c/go/+/196639
      Run-TryBot: Michael Knyszek <mknyszek@google.com>
      TryBot-Result: Gobot Gobot <gobot@golang.org>
      Reviewed-by: Austin Clements <austin@google.com>
    • runtime: define maximum supported physical page and huge page sizes · ffb5646f
      Michael Anthony Knyszek authored
      This change defines a maximum supported physical and huge page size in
      the runtime based on the new page allocator's implementation, and uses
      them where appropriate.
      
      Furthermore, if the system exceeds the maximum supported huge page
      size, we simply ignore it silently.
      
      It also fixes a huge-page-related test that is only triggered by a
      condition that is definitely wrong.
      
      Finally, it adds a few TODOs related to code clean-up and supporting
      larger huge page sizes.
      
      Updates #35112.
      Fixes #35431.
      
      Change-Id: Ie4348afb6bf047cce2c1433576d1514720d8230f
      Reviewed-on: https://go-review.googlesource.com/c/go/+/205937
      Run-TryBot: Michael Knyszek <mknyszek@google.com>
      TryBot-Result: Gobot Gobot <gobot@golang.org>
      Reviewed-by: Keith Randall <khr@golang.org>
      Reviewed-by: Cherry Zhang <cherryyz@google.com>
    • cmd/go: delete flaky TestQEMUUserMode · a782472d
      Bryan C. Mills authored
      If QEMU user-mode is actually a supported configuration, then per
      http://golang.org/wiki/PortingPolicy it needs to have a builder
      running tests for all packages, not just a simple “hello world”
      program.
      
      Updates #1508
      Updates #13024
      Fixes #35457
      
      Change-Id: Ib6122b06ad1d265550a0e92131506266495893cc
      Reviewed-on: https://go-review.googlesource.com/c/go/+/206137
      Run-TryBot: Bryan C. Mills <bcmills@google.com>
      Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
    • runtime: ensure heap memstats are updated atomically · ae4534e6
      Michael Anthony Knyszek authored
      For the most part, heap memstats are already updated atomically when
      passed down to OS-level memory functions (e.g. sysMap). Elsewhere,
      however, they're updated with the heap lock.
      
      In order to facilitate holding the heap lock for less time during
      allocation paths, this change more consistently makes the update of
      these statistics atomic by calling mSysStat{Inc,Dec} appropriately
      instead of simply adding or subtracting. It also ensures these values
      are loaded atomically.
      
      Furthermore, an undocumented but safe update condition for these
      memstats is during STW, at which point using atomics is unnecessary.
      This change also documents this condition in mstats.go.
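
      The update pattern described, as a hedged sketch (signatures modeled
      on the names in the message, not quoted from the source; needs
      import "sync/atomic"):

        // mSysStatInc/Dec adjust a memstat atomically so readers racing
        // with allocation paths still see consistent values.
        func mSysStatInc(sysStat *uint64, n uintptr) {
            atomic.AddUint64(sysStat, uint64(n))
        }

        func mSysStatDec(sysStat *uint64, n uintptr) {
            atomic.AddUint64(sysStat, ^uint64(n-1)) // atomic subtract of n
        }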
      
      Updates #35112.
      
      Change-Id: I87d0b6c27b98c88099acd2563ea23f8da1239b66
      Reviewed-on: https://go-review.googlesource.com/c/go/+/196638
      Run-TryBot: Michael Knyszek <mknyszek@google.com>
      TryBot-Result: Gobot Gobot <gobot@golang.org>
      Reviewed-by: Austin Clements <austin@google.com>
    • runtime: remove useless heap_objects accounting · 814c5058
      Michael Anthony Knyszek authored
      This change removes useless additional heap_objects accounting for large
      objects. heap_objects is computed from scratch at ReadMemStats time
      (which stops the world) by using nlargealloc and nlargefree, so mutating
      heap_objects turns out to be pointless.
      
      As a result, the "large" parameter on "mheap_.freeSpan" is no longer
      necessary and so this change cleans that up too.
      
      Change-Id: I7d6b486d9b57c018e3db46221d81b55fe4c1b021
      Reviewed-on: https://go-review.googlesource.com/c/go/+/196637
      Run-TryBot: Michael Knyszek <mknyszek@google.com>
      TryBot-Result: Gobot Gobot <gobot@golang.org>
      Reviewed-by: Austin Clements <austin@google.com>
    • runtime: make allocNeedsZero lock-free · 4208dbef
      Michael Anthony Knyszek authored
      In preparation for a lockless fast path in the page allocator, this
      change makes it so that checking if an allocation needs to be zeroed may
      be done atomically.
      
      Unfortunately, this means there is a CAS-loop to ensure monotonicity of
      the zeroedBase value in heapArena. The loop exits either when the update
      succeeds or when an allocator acquiring memory further on in the arena
      wins the race. Contention should be relatively low thanks to this
      monotonicity, though it would be ideal if CAS-ers with the greatest
      value always won. The CAS-loop is unnecessary
      in the steady-state, but should bring some start-up performance gains as
      it's likely cheaper than the additional zeroing required, especially for
      large allocations.
      
      For very large allocations that span arenas, the CAS-loop should be
      completely uncontended for most of the arenas it touches; it may only
      encounter contention on the first and last arenas.
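
      The monotonic CAS-loop described above, as a hedged sketch (variable
      names follow the message; this is not the runtime's code; needs
      import "sync/atomic"):

        // growZeroedBase advances zeroedBase to newBase, but never
        // backwards: if a racing allocator has already pushed it further,
        // there is nothing left to do.
        func growZeroedBase(zeroedBase *uintptr, newBase uintptr) {
            for {
                old := atomic.LoadUintptr(zeroedBase)
                if old >= newBase {
                    return // a further-on allocator won; monotonicity holds
                }
                if atomic.CompareAndSwapUintptr(zeroedBase, old, newBase) {
                    return
                }
            }
        }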
      
      Updates #35112.
      
      Change-Id: If3d19198b33f1b1387b71e1ce5902d39a5c0f98e
      Reviewed-on: https://go-review.googlesource.com/c/go/+/203859
      Run-TryBot: Michael Knyszek <mknyszek@google.com>
      TryBot-Result: Gobot Gobot <gobot@golang.org>
      Reviewed-by: Austin Clements <austin@google.com>
    • runtime: skip TestPingPongHog on builders · 52aebe8d
      Bryan C. Mills authored
      This test is failing consistently in the longtest builders,
      potentially masking regressions in other packages.
      
      Updates #35271
      
      Change-Id: Idc03171c0109b5c8d4913e0af2078c1115666897
      Reviewed-on: https://go-review.googlesource.com/c/go/+/206098
      Reviewed-by: Carlos Amedee <carlos@golang.org>
    • runtime/pprof: delete unused locForPC · 45b4ed75
      Hana (Hyang-Ah) Kim authored
      Change-Id: Ie4754fefba6057b1cf558d0096fe0e83355f8eff
      Reviewed-on: https://go-review.googlesource.com/c/go/+/205098
      TryBot-Result: Gobot Gobot <gobot@golang.org>
      Run-TryBot: Hyang-Ah Hana Kim <hyangah@gmail.com>
      Reviewed-by: Keith Randall <khr@golang.org>
    • runtime/pprof: correct inlined function location encoding for non-CPU profiles · e25de44e
      Hana (Hyang-Ah) Kim authored
      Change-Id: Id270a3477bf1a581755c4311eb12f990aa2260b5
      Reviewed-on: https://go-review.googlesource.com/c/go/+/205097
      Run-TryBot: Hyang-Ah Hana Kim <hyangah@gmail.com>
      TryBot-Result: Gobot Gobot <gobot@golang.org>
      Reviewed-by: Keith Randall <khr@golang.org>
    • Revert "math/cmplx: handle special cases" · f5e89c22
      Ian Lance Taylor authored
      This reverts CL 169501.
      
      Reason for revert: The new tests fail at least on s390x and MIPS. This is likely a minor bug in the compiler or runtime. But this point in the release cycle is not the time to debug these details, which are unlikely to be new. Let's try again for 1.15.
      
      Updates #29320
      Fixes #35443
      
      Change-Id: I2218b2083f8974b57d528e3742524393fc72b355
      Reviewed-on: https://go-review.googlesource.com/c/go/+/206037
      Run-TryBot: Ian Lance Taylor <iant@golang.org>
      Reviewed-by: Bryan C. Mills <bcmills@google.com>
      Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
    • runtime/pprof: correctly encode inlined functions in CPU profile · e038c7e4
      Hana (Hyang-Ah) Kim authored
      The pprof profile proto message expects inlined functions of a PC
      to be encoded in one Location entry using multiple Line entries.
      https://github.com/google/pprof/blob/5e96527/proto/profile.proto#L177-L184
      
      runtime/pprof has encoded the symbolization information by creating
      a Location for each PC found in the stack trace and including info
      from all the frames expanded from the PC using runtime.CallersFrames.
      This assumes inlined functions are represented as a single PC in the
      stack trace. (https://go-review.googlesource.com/41256)
      
      In recent years, behavior around inlining and the traceback
      changed significantly (e.g. https://golang.org/cl/152537,
      https://golang.org/issue/29582, and many changes). Now the PCs
      in the stack trace represent user frames even including inline
      marks. As a result, the profile proto started to allocate a Location
      entry for each user frame, lose the inline information (so pprof
      presented incorrect results when inlined functions are involved),
      and confuse the pprof tool with those PCs made up for inline marks.
      
      This CL attempts to detect inlined call frames from the stack traces
      of CPU profiles, and organize the Location information as intended.
      Currently, the runtime does not provide a reliable and convenient way
      to detect inlined call frames or to expand user frames from given
      externally recognizable PCs. So we use heuristics to recover the groups
      (see the sketch after this list):
        - inlined call frames have nil Func field
        - inlined call frames will have the same Entry point
        - but must be careful with recursive functions that have the
          same Entry point by definition, and non-Go functions that
          may lack most of the fields of Frame.
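
      A hedged illustration of the first heuristic (simplified; the real
      grouping logic also handles recursion and non-Go frames as noted
      above; needs imports "fmt", "runtime"):

        // expand walks the frames for a sampled stack and flags the ones
        // that look inlined: they expand from a PC but have no Func.
        func expand(pcs []uintptr) {
            frames := runtime.CallersFrames(pcs)
            for {
                f, more := frames.Next()
                inlined := f.Func == nil && f.Function != ""
                fmt.Println(f.Function, "inlined:", inlined)
                if !more {
                    break
                }
            }
        }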
      
      The followup CL will address the issue with other profile types.
      
      Change-Id: I0c9667ab016a3e898d648f31c3f82d84c15398db
      Reviewed-on: https://go-review.googlesource.com/c/go/+/204636
      Reviewed-by: Keith Randall <khr@golang.org>
    • runtime: remove old page allocator · 33dfd352
      Michael Anthony Knyszek authored
      This change removes the old page allocator from the runtime.
      
      Updates #35112.
      
      Change-Id: Ib20e1c030f869b6318cd6f4288a9befdbae1b771
      Reviewed-on: https://go-review.googlesource.com/c/go/+/195700
      Run-TryBot: Michael Knyszek <mknyszek@google.com>
      TryBot-Result: Gobot Gobot <gobot@golang.org>
      Reviewed-by: default avatarAustin Clements <austin@google.com>
  2. 07 Nov, 2019 13 commits