Awesome Go Packages
TL;DR
An opinionated guide to Go packages that trade novelty for reliability -- which ones to reach for, which ones to skip, and the seven selection rules that save you from 2am dependency regret.
You start a new Go project and immediately open a browser tab. "Best Go web framework 2026." Then "Go ORM vs raw SQL." Then "Go logging structured." Three hours later you have twelve tabs, four conflicting Reddit threads, and zero lines of application code. The answer was boring all along. The packages that survive production are the ones closest to the standard library -- small API surface, net/http compatible, no magic. ServeMux first. sqlx over GORM. slog over everything. Pick boring, pick proven, pick what lets you type go build without reading a changelog first.
Five Searches Before the First Line of Code
Open a terminal. Type go mod init. Now open a browser. You will search for a web framework, a router, a database layer, a logger, and a config loader -- in that order, every single time. The standard library handles HTTP serving, JSON encoding, testing, and cryptography. It does not handle struct scanning, structured logging (before 1.21), or environment variable parsing.
The gap is smaller than it used to be. Go 1.22 added method-based routing to net/http.ServeMux -- you can now write mux.HandleFunc("GET /users/{id}", handler) with zero dependencies. Go 1.21 shipped slog for structured logging. The standard library is absorbing its most-used third-party patterns, one release at a time.
net/http Got Good Enough -- Most Frameworks Fight a War That Ended
Add a custom net/http middleware to a Fiber application. It will not compile. Fiber uses Fasthttp under the hood, which means a different Request type, a different Response type, and a different handler signature. Every net/http middleware you have -- rate limiting, tracing, authentication -- needs to be rewritten or replaced with a Fiber-specific equivalent. That is the cost of choosing a framework that left the standard library behind.
Gin Ships Fast, Fiber Breaks net/http
Gin is the default choice for shipping an API this week. It uses reflection, wraps the request context in its own type, and Go purists hate it. It is also in half the production APIs you use daily. Reach for it when you want Laravel-energy scaffolding and your team already knows it. Skip it when you need WebSocket support or you want your middleware to work outside Gin.
Echo is what Gin should have been. Better documentation, a cleaner middleware story, actual WebSocket support. The v5 release brought a JWT middleware that configures in under 10 lines. It earns its spot when you want Gin's productivity with fewer footguns. It loses its case when your team is already on Gin and a migration buys you nothing.
Fiber is built on Fasthttp for raw speed. It justifies itself when you are building a proxy or gateway where microseconds matter and you have benchmarks proving it. For everything else, the Fasthttp incompatibility costs more than any benchmark improvement.
Chi Implements net/http Natively
Chi is a router, not a framework. It implements net/http natively. Every standard middleware works. Route groups compose cleanly. There is no code generation, no struct tags, no reflection.
```go
r := chi.NewRouter()
r.Use(middleware.Logger)
r.Route("/api", func(r chi.Router) {
	r.Use(authMiddleware)
	r.Get("/users/{id}", getUser)
})
// r is an http.Handler. Pass it anywhere net/http goes.
```
Chi has shipped zero breaking changes since v5. It does one thing -- routing -- and it does it with the types the standard library already gave you.
Start with net/http.ServeMux (Go 1.22+). It handles method routing and path parameters out of the box. Add Chi only if you need route grouping, middleware composition on subrouters, or regex patterns. Most services never need it.
Gorilla/Mux was archived in 2022 and revived by new maintainers in 2023, but the momentum never came back. It served Go developers well for a decade. If you are still using it, plan a migration to Chi or plain ServeMux -- same patterns, better performance, active development.
HttpRouter proves a point about zero-allocation routing. It also has no middleware support, no regex routes, and no route groups. The performance gain matters in benchmarks, not in your API that spends 99% of its time waiting on database queries.
database/sql Is The Tax -- sqlx Removes It
Write your tenth rows.Scan(&u.ID, &u.Name, &u.Email, &u.CreatedAt, &u.UpdatedAt, &u.Status, &u.Role) call. Count the arguments. Count the columns. Add a column to the table. Now find every Scan call that needs updating. Miss one. Watch it fail at runtime, not compile time.
sqlx Removes the Scan Tax
sqlx extends database/sql with struct scanning, named parameters, and In() clause expansion. It is not an ORM. It is database/sql with the rough edges filed off.
```go
var users []User
err := db.Select(&users, "SELECT * FROM users WHERE status = ?", "active")
// StructScan maps columns to struct fields by name. No manual Scan.
```
sqlx should be your second go get after your router. Everything else in this section is optional. sqlx is not.
sqlc Catches Bad SQL Before You Ship It
Rename a column in Postgres. Your sqlx code still compiles. It fails at runtime when the struct field no longer matches. sqlx removed the manual scan tax. It did not remove the "deploy to find out" tax.
sqlc works in the opposite direction. You write SQL queries in .sql files. sqlc parses them with pg_query_go -- a library that embeds Postgres's own query parser -- and generates type-safe Go code before you ever compile. A renamed column breaks the build, not production.
```sql
-- query.sql
-- name: GetActiveUsers :many
SELECT id, name, email FROM users WHERE status = $1;
```

```go
// Generated. Type-safe. No reflection.
func (q *Queries) GetActiveUsers(ctx context.Context, status string) ([]User, error)
```
sqlc pairs with pgx for Postgres and also supports MySQL and SQLite. At scale it benchmarks slightly faster than sqlx -- no runtime reflection, no struct tag parsing. The cost is a codegen step in your build and initial setup overhead. Dynamic parameter counts -- WHERE id IN (?) with a variable-length list -- require workarounds that sqlx handles natively with In().
Use sqlc when your service is DB-heavy and compile-time SQL safety matters. Use sqlx when you want zero codegen, quick integration, or your queries are simple enough that runtime scanning is not a risk you think about.
Squirrel for Dynamic WHERE, sqlc for Stable Schemas
Squirrel builds SQL queries programmatically. Useful when your WHERE clauses are dynamic -- user-facing search, optional filters, conditional joins. Pointless when your queries are static. Just write the SQL.
```go
users := sq.Select("*").From("users").
	Where(sq.Eq{"status": "active"}).
	Where(sq.Gt{"age": 18})
// The builder is inert until rendered into a query string plus args slice.
sql, args, err := users.ToSql()
```
SQLBoiler generates Go code from your database schema. Database-first design. The generated code is type-safe, readable, and performs well. The tradeoff: every schema change requires a regeneration step. SQLBoiler entered maintenance mode in November 2024 -- no new features, but the maintainer still uses it in production and accepts community bug fixes and compatibility patches. It works. It is not going anywhere new. For new projects, evaluate sqlc instead. The spiritual successor is Bob, created by a SQLBoiler maintainer, if you want the database-first codegen approach with active feature development.
GORM is the most popular Go ORM. V2 improved significantly over V1. It is still GORM -- implicit behaviors, callbacks that fire when you do not expect them, query generation you cannot predict by reading the code. Use it when you are coming from Rails or Django and need familiar patterns on day one. Drop it when you care about understanding exactly what SQL your application executes.
GORM's implicit behaviors (auto-migrations, soft deletes, callbacks) can surprise you in production. If you use GORM, read its documentation on hooks and session modes before your first deploy.
Ent is Facebook's graph-based ORM for Go. Schema defined in Go code, generates type-safe CRUD operations and graph traversals. Heavier than SQLBoiler but more expressive for relational queries. Teams moving off GORM in 2026 increasingly choose Ent for complex domain models. Evaluate it when your domain model has deep relationships and you want graph traversals without hand-rolled joins.
golang-migrate Runs Everywhere -- Nothing Else Covers More
golang-migrate supports every database, every source format, and embedding migrations in binaries. It is the standard. No other migration tool covers as many databases and source formats.
slog Ended The Logging Debate
Call log.Println("user login", userID) in production. Check your log aggregator. The output is an unstructured string with no fields, no levels, no machine-parseable structure. Your monitoring pipeline cannot alert on it, your dashboards cannot filter it, and your on-call engineer is grepping by hand at 3am.
slog shipped in Go 1.21. It is structured, leveled, and in the standard library. For new projects, use slog.
```go
// With a JSON handler installed via slog.SetDefault:
slog.Info("user login", "user_id", userID, "ip", clientIP)
// {"time":"...","level":"INFO","msg":"user login","user_id":"u123","ip":"10.0.0.1"}
```
Zerolog achieves near-Zap performance with a builder-style API that reads like English. If slog's performance is not enough -- and measure before you claim it is not -- Zerolog is the first alternative to evaluate.
Zap is the fastest structured logger. You will write five times more code than with slog or Zerolog. The API splits into a "sugared" logger (convenient, slower) and a raw logger (fast, verbose). Reach for Zap when you have profiler output showing logging as a bottleneck, not before.
slog covers 90% of Go projects. If you already use Zerolog or Zap and they work, do not migrate -- there is no prize for logging framework churn.
Configuration Has Three Moving Parts -- Most Services Use One
Your Viper configuration reads from 6 sources: environment variables, a YAML file, a JSON file, etcd, Consul, and command-line flags. Your application uses 3 environment variables: PORT, DATABASE_URL, and API_KEY.
envconfig is the right tool for 12-factor applications. The entire API is a struct tag and a function call.
```go
type Config struct {
	Port        int    `envconfig:"PORT" default:"8080"`
	DatabaseURL string `envconfig:"DATABASE_URL" required:"true"`
	APIKey      string `envconfig:"API_KEY" required:"true"`
}

var cfg Config
// Process reads the environment, applies defaults, and fails on missing required vars.
err := envconfig.Process("", &cfg)
```
Viper is the right tool when you genuinely need multiple configuration sources, live-reloading, or nested config hierarchies. That describes about 10% of Go services. Viper is 10,000 lines of code. Make sure you need them.
The decision: if your config comes from environment variables, use envconfig. If your config comes from files with overrides and hot-reloading, use Viper. If you are not sure, you want envconfig.
HTTP Clients: net/http.Client Does the Job
You go get an HTTP client library because it has retries and backoff built in. Three months later you discover it swallows context cancellation, logs at a level you cannot control, and wraps errors in types your error-handling code does not understand.
net/http.Client with context.Context and a small retry wrapper handles 90% of service-to-service communication. Set timeouts explicitly. Use http.NewRequestWithContext for every call. Wrap retries in a 30-line helper function your team controls.
Resty adds method chaining, automatic retries, and request/response middleware. It earns its spot when you are calling 5+ external APIs with different auth schemes and you want a consistent interface. For one or two internal services, it adds more API surface than it saves.
The boring choice is net/http.Client. You already have it.
Rip One Out, Rewrite Fifty Files
Your team adopted an HTTP framework six months ago. Now the framework's middleware does not compose with your tracing library. You look at the migration path: 50 handler files, 12 middleware adapters, every integration test. The framework works fine. You are stuck with it anyway. That is the switching cost.
These packages change application structure, not just behavior. Choose carefully -- a framework migration touches every handler, middleware, and integration test in the repo.
Cobra for Complex CLIs, urfave/cli for Everything Else
Cobra generates CLI scaffolding with subcommands, flags, and help text. kubectl, hugo, and gh all use Cobra. Run cobra-cli init and it creates 4 files. You needed one. The boilerplate feels un-Go-like, but the pattern scales to complex CLIs with dozens of subcommands.
urfave/cli is a library, not a generator. You write more code but understand all of it. Better for CLIs where you want explicit control over structure.
Testify Still Wins, but Mock Less
Testify provides assertions and mocking. The Go team says you do not need assertion libraries. They are wrong. assert.Equal(t, expected, actual) communicates intent better than a four-line if-block with t.Errorf. Use it.
GoMock generates mock implementations from interfaces. Now part of the Go project. The generated code is ugly. It works. That said, the Go community's enthusiasm for generated mocks has cooled. Fuzzing adoption rose. Subtest patterns matured. Teams increasingly favor table-driven tests and integration tests over heavy mock setups. Generate mocks for external dependencies you cannot control. Write real implementations for everything else.
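The table-driven shape those teams favor, using nothing but the standard testing package -- the function under test is a stand-in:

```go
package main

import (
	"fmt"
	"testing"
)

// clamp is a stand-in for whatever you are actually testing.
func clamp(v, lo, hi int) int {
	if v < lo {
		return lo
	}
	if v > hi {
		return hi
	}
	return v
}

// TestClamp shows the pattern: one anonymous struct per case,
// one named t.Run subtest per row. Save in a _test.go file and run `go test`.
func TestClamp(t *testing.T) {
	cases := []struct {
		name      string
		v, lo, hi int
		want      int
	}{
		{"inside range", 5, 0, 10, 5},
		{"below range", -3, 0, 10, 0},
		{"above range", 99, 0, 10, 10},
	}
	for _, tc := range cases {
		t.Run(tc.name, func(t *testing.T) {
			if got := clamp(tc.v, tc.lo, tc.hi); got != tc.want {
				t.Errorf("clamp(%d, %d, %d) = %d, want %d", tc.v, tc.lo, tc.hi, got, tc.want)
			}
		})
	}
}

func main() {
	fmt.Println(clamp(-3, 0, 10), clamp(5, 0, 10), clamp(99, 0, 10)) // 0 5 10
}
```

Adding a case is adding a row. No mock setup, no generated code, and a failure names the exact row that broke.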
Manual Constructors Beat DI Frameworks
Your dependency graph has 8 nodes. You add Wire to generate the wiring code. Now you have 8 nodes, a provider set file, a wire.go file, a wire_gen.go file, and a build step. The generated code does exactly what 15 lines of manual constructor calls would do.
Manual constructors are the default in 2026. Write NewServer(db, logger, cache) and move on. Your IDE autocompletes the arguments. The compiler catches missing dependencies. No code generation step, no framework concepts, no debugging generated code.
Wire generates dependency wiring at compile time. No runtime reflection, no container, no magic. It generates the constructor code you would write by hand. Reach for it only when your dependency graph grows past 15-20 nodes and manual wiring becomes genuinely tedious. Wire's development pace has slowed -- it still works, but it is not actively gaining features.
Fx is a runtime DI framework from Uber. It changes how you structure your entire application -- everything becomes an Fx module. Evaluate whether your team wants Fx-shaped code before adopting it. For most services, it is more machinery than the problem requires.
Never Bind Application Logic to a Vendor SDK
You integrate the OpenAI Go SDK directly into your service layer. Three months later your team adds Anthropic as a fallback provider. The OpenAI types are in your domain structs, your error handling, your retry logic. You are not switching providers -- you are rewriting your application.
In 2026, the official vendor SDKs (OpenAI, Anthropic, Google) cover streaming, function calling, and embeddings. They are also completely incompatible with each other. The rule: never bind application logic directly to a vendor SDK. Layer it.
Vendor SDK --> Thin internal adapter --> Domain interface
openai-go is the official OpenAI Go SDK. V3, Apache-2.0 license, requires Go 1.22+. Streaming, function calling, embeddings, and the Responses API -- all covered, actively maintained. Before this existed, sashabaranov/go-openai served the community well -- the same pattern as gorilla/mux stepping aside once an official solution arrived. Whichever you pick, use it behind an adapter, not as your domain contract.
langchaingo ports LangChain patterns to Go. Go's type system and interfaces improve some of Python's LangChain abstractions. Justified when you need chains, agents, or retrieval pipelines -- not for simple API calls.
AI client libraries are high-switching-cost infrastructure now. Treat them like database drivers: define a Provider interface in your domain, implement one adapter per SDK, swap without touching business logic.
Task Runners: Taskfile Ends the Makefile Debate
Your Makefile has 200 lines, half of them are .PHONY declarations, and the new developer on the team cannot parse the tab-vs-spaces rules. You add a shell script. Now you have two build systems.
Taskfile uses YAML, supports dependencies between tasks, cross-platform by default. It is the boring choice for Go projects in 2026. One file, obvious syntax, no tab sensitivity.
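A minimal Taskfile.yml for a Go project -- task names and dependency structure are illustrative:

```yaml
# Taskfile.yml
version: '3'

tasks:
  build:
    deps: [lint]
    cmds:
      - go build ./...

  lint:
    cmds:
      - go vet ./...

  test:
    cmds:
      - go test ./...
```

`task build` runs lint first, then builds. No .PHONY, no tab rules, and it behaves the same on macOS, Linux, and Windows.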
Make works if your team already knows it. Do not migrate away from a working Makefile. Do not start a new project with one either.
Mage lets you write build tasks in Go. Appealing in theory. In practice, you are compiling your build system before you compile your application. Reach for it only if your build logic requires actual Go code -- conditional compilation, API calls, complex file operations.
Seven Rules For Picking Go Packages
1. Start with the standard library. You will know when you need more. If you do not know, you do not need more.
2. Your second package is sqlx. Everything else is negotiable.
3. Pick boring infrastructure packages. Save your innovation tokens for your actual product. Your router does not need to be interesting.
4. Version your tools. go install github.com/some/tool@v1.2.3, not @latest. A tool update should not break your Friday.
5. Familiarity beats optimality. The best package is the one your team already knows. A 15% performance improvement is not worth a team-wide learning curve.
6. Docs over benchmarks. If a package has impressive benchmarks and terrible documentation, run. You are not that sharp at 3am during an incident.
7. Check the commit history. One commit in two years means the package is either perfect or dead. Read the issues to find out which.
| Decision | Boring choice | When to upgrade |
|---|---|---|
| Routing | net/http.ServeMux | Chi only if you need grouping ergonomics |
| Database | sqlx (simple) / sqlc (DB-heavy) | sqlc when compile-time SQL safety matters |
| Logging | slog | Zerolog if profiler says so |
| Config | envconfig | Viper if you actually need file-based config |
| HTTP Client | net/http.Client + retry wrapper | Resty for 5+ external APIs |
| CLI | Cobra | urfave/cli for simpler tools |
| Testing | Testify | Standard library if your team prefers it |
| DI | Manual constructors | Wire only if graph gets painful |
| Task Runner | Taskfile | Make for minimalists |
The table is the whole decision. Pick the boring column. The interesting problems are in your domain, not your dependencies.