Go Profiling: Your Performance Debugging Adventure

March 25, 2026

Introduction

Ever wondered why your shiny Go application is running slower than a sloth on sedatives? Or maybe you're just curious about what's really happening under the hood? Welcome to the exciting world of profiling! This guide will walk you through using pprof to spy on your Go services - tracking CPU usage, memory allocations, goroutines, and execution traces. Think of it as giving your code a fitness tracker that tells you exactly where it's wasting energy.


Setup

Adding pprof to Your Service

Let's get profiling up and running. First, we need to invite pprof to the party by importing it and spinning up a little HTTP server:

import (
    "fmt"
    "log"
    "net/http"
    _ "net/http/pprof"
    "os"
    "runtime"
    "time"
)

// Start pprof server
go func() {
    port := os.Getenv("PPROF_PORT")
    if port == "" {
        port = "6060" // sensible default; override via PPROF_PORT
    }
    log.Printf("Starting pprof server on :%s", port)
    if err := http.ListenAndServe(":"+port, nil); err != nil {
        log.Printf("pprof server stopped: %v", err)
    }
}()

Running the Service

Now fire up your service with profiling enabled:

PPROF_PORT=6060 go run ./cmd/yourservice

Boom! Your app now has its own performance dashboard running.


CPU Profiling

CPU profiling is like having a time-tracking app for your code. It shows you exactly where your application is spending its precious CPU cycles - perfect for finding those sneaky performance bottlenecks.

Browser UI

The easiest way to dive in - straight to the visual goodness:

go tool pprof -http=:8081 'http://localhost:6060/debug/pprof/profile?seconds=30'

Command-Line Mode

For the terminal purists among us:

go tool pprof 'http://localhost:6060/debug/pprof/profile?seconds=30'

Then try commands like top10, web, or list <funcName> to explore the data.


Memory Profiling

Memory profiling reveals your app's memory habits. Is it hoarding RAM like a digital packrat? Or is it being efficient? Let's find out what's happening in the heap.

Browser UI

Visualize your memory usage patterns:

go tool pprof -http=:8081 'http://localhost:6060/debug/pprof/heap'

In-Use vs Allocated Memory

Two different perspectives on memory:

# What's currently being used
go tool pprof -sample_index=inuse_space 'http://localhost:6060/debug/pprof/heap'

# Total memory allocated (including freed)
go tool pprof -sample_index=alloc_space 'http://localhost:6060/debug/pprof/heap'

Goroutine Analysis

Goroutines are Go's lightweight threads, but they can multiply like rabbits if you're not careful. This profile helps you spot leaks and understand your concurrency patterns.

go tool pprof 'http://localhost:6060/debug/pprof/goroutine'

Type web to see the goroutine stacks visualized - it's like a family tree for your concurrent code.


Memory Over Time

Want to see how your memory usage changes over time? It's like taking before-and-after photos, but for your heap.

# Snapshot the starting point
curl 'http://localhost:6060/debug/pprof/heap' > heap_start.prof

# Wait a bit... maybe grab a coffee...

# Take another snapshot
curl 'http://localhost:6060/debug/pprof/heap' > heap_end.prof

# Compare them
go tool pprof -base=heap_start.prof heap_end.prof

Execution Tracing

Execution traces are the ultimate deep dive - they capture exactly what your program is doing, function by function, over time. It's like having a play-by-play commentator for your code.

# Capture 10 seconds of execution drama
curl 'http://localhost:6060/debug/pprof/trace?seconds=10' > trace.out

# Analyze the results
go tool trace trace.out

Interactive Commands

Once you're in the pprof shell, these commands are your best friends:

  • top10 - Who's hogging the resources?
  • top10 -cum - Cumulative time view (great for finding bottlenecks)
  • list <func> - Show me the source code for this function
  • web - Open up that beautiful visual chart
  • png/pdf/svg - Export charts for your reports or presentations

Browser Interface

Want to see all available profiles at a glance? Head to:

open http://localhost:6060/debug/pprof/

It's like the control panel for your performance spaceship.


Memory Monitoring (Ad Hoc Stuff)

For continuous monitoring (because who wants to manually check all the time?), add this automatic memory logger:

func startMemoryMonitor(logger *log.Logger, interval time.Duration) {
    go func() {
        ticker := time.NewTicker(interval)
        defer ticker.Stop()

        for range ticker.C {
            var m runtime.MemStats
            runtime.ReadMemStats(&m)
            logger.Printf("Memory - Alloc: %s, Sys: %s, NumGC: %d, Goroutines: %d",
                formatBytes(m.Alloc), formatBytes(m.Sys), m.NumGC, runtime.NumGoroutine())
        }
    }()
}

func formatBytes(b uint64) string {
    const unit = 1024
    if b < unit {
        return fmt.Sprintf("%d B", b)
    }
    div, exp := uint64(unit), 0
    for n := b / unit; n >= unit; n /= unit {
        div *= unit
        exp++
    }
    return fmt.Sprintf("%.2f %ciB", float64(b)/float64(div), "KMGTPE"[exp])
}

// Usage
startMemoryMonitor(log.Default(), 30*time.Second)

Common Issues and Solutions

Environment Variables

PPROF_PORT=6060                    # Where your pprof server hangs out
MEM_MONITOR_INTERVAL_SECONDS=30    # How often to log memory stats (0 = never)

Tips

  • Profile like you mean it: Always test in an environment that matches production
  • Give it time: Capture profiles for at least 30 seconds - instant snapshots are like judging a book by its cover
  • Allocation hunting: Use -sample_index=alloc_space to find your biggest memory allocators
  • GC watch: Keep an eye on NumGC - high numbers mean your garbage collector is working overtime
  • Goroutine stability: Your goroutine count should level out, not keep growing like a bad habit
  • URL safety: Always quote those URLs with query params - don't let the shell play tricks on you
  • Visual vibes: Use -http=:8081 for instant browser charts, or go tool pprof <url> then web for the interactive experience

Conclusion

Profiling with pprof isn't just about fixing bugs - it's about understanding your code's personality. Start with CPU and memory profiling to catch the obvious issues, then pull out execution traces and goroutine analysis for the really tricky stuff. Make profiling part of your development routine, and you'll catch performance regressions before they become nightmares. Your future self (and your users) will thank you!