In the world of microservices, a single user request can trigger a cascade of calls across dozens of services. While this architectural style offers scalability and flexibility, it introduces a significant challenge: how do you effectively debug a request that spans multiple services? Traditional logging can be noisy and difficult to correlate. This post explores a powerful, elegant solution for distributed debugging that provides deep insights into your system's behavior with minimal overhead.
Imagine a user reports an issue. To diagnose it, you need to trace their request as it hops from service to service, capturing the state, decisions, and data at each step. How do you get this information without drowning in logs or attaching a debugger to every single service? The ideal solution would be:

- On-demand: activated per request, not left running globally.
- Cheap when inactive: near-zero overhead for requests that aren't being debugged.
- Structured: typed messages rather than free-form log lines that must be correlated by hand.
- Automatically propagated: debug state follows the request across service boundaries.
The solution is a framework built around a few core concepts, leveraging the power of gRPC interceptors to create a "carrier" for debug information that travels with the request.
Let's break down the journey of a debuggable request.
1. A client appends a `debug_levels` query parameter to the entrypoint request, e.g. `?debug_levels=1|ServiceA:2|ServiceB:1`. This string is a compact way of saying: "Enable default debugging at level 1, enable level 2 for ServiceA, and level 1 for ServiceB."
2. The gateway parses the `debug_levels` parameter and injects it into the gRPC metadata (headers) of the request.
3. Inside each service, developers call a helper such as `servicedebug.AddMessage(ctx, myProtoMessage)` to add any relevant protobuf message to the debug context. This is a cheap operation; if debugging isn't active for this service and level, the function returns immediately.
4. When the service calls downstream services, the client interceptor propagates the `debug_levels` metadata to the outgoing request.

This process repeats at every service in the call chain. The result is that the entrypoint service receives a response containing an aggregated collection of debug messages from the entire request lifecycle.
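The exact wire format of `debug_levels` is internal to the framework; assuming a `|`-separated spec where the first segment is the default level and later segments are `ServiceName:level` overrides (a hypothetical format chosen for illustration), a parser could look like this:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// DebugLevels holds the parsed debug configuration for one request.
type DebugLevels struct {
	Default  int            // level applied to services with no override
	Services map[string]int // per-service overrides
}

// ParseDebugLevels parses a compact spec like "1|ServiceA:2|ServiceB:1":
// default level 1, level 2 for ServiceA, level 1 for ServiceB.
func ParseDebugLevels(spec string) (DebugLevels, error) {
	dl := DebugLevels{Services: map[string]int{}}
	for i, part := range strings.Split(spec, "|") {
		name, levelStr, hasName := strings.Cut(part, ":")
		if i == 0 && !hasName {
			// A bare first segment is the default level.
			name, levelStr = "", part
		}
		level, err := strconv.Atoi(levelStr)
		if err != nil {
			return dl, fmt.Errorf("bad segment %q: %w", part, err)
		}
		if name == "" {
			dl.Default = level
		} else {
			dl.Services[name] = level
		}
	}
	return dl, nil
}

// LevelFor returns the effective debug level for a service.
func (d DebugLevels) LevelFor(service string) int {
	if lvl, ok := d.Services[service]; ok {
		return lvl
	}
	return d.Default
}

func main() {
	dl, _ := ParseDebugLevels("1|ServiceA:2|ServiceB:1")
	fmt.Println(dl.LevelFor("ServiceA"), dl.LevelFor("ServiceC")) // override vs. default
}
```

The gateway would serialize this same structure into a metadata header so every downstream interceptor can answer "what level applies to me?" with a single map lookup.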
Let's make this more concrete with a few examples showing how a service integrates with the framework from end to end.
1. Integrating the Interceptors
First, the service needs to be configured to use the client and server interceptors. This is typically done in the service's main.go file where the gRPC server is initialized. The key is to chain the service debug interceptors with any other interceptors you might have.
```go
// in main.go
import (
	"google.golang.org/grpc"

	pb "github.com/my-org/myawesomeservice/proto" // generated stubs (illustrative path)
	"github.com/my-org/servicedebug"              // Your internal framework path
)

func main() {
	// ... setup listener, etc.

	// Chain the interceptors. The service debug interceptor should come early
	// in the chain to wrap the entire request lifecycle.
	server := grpc.NewServer(
		grpc.ChainUnaryInterceptor(
			// Other interceptors like auth, logging, metrics...
			servicedebug.UnaryServerInterceptor("MyAwesomeService"),
		),
		grpc.ChainStreamInterceptor( /* ... */ ),
	)

	// Register your service implementation.
	pb.RegisterMyAwesomeServiceServer(server, &myServiceImpl{})

	// ... start server
}
```
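Interceptor order matters because each interceptor wraps everything after it, so the first one in the chain sees the request first and the response last. The effect can be seen in a dependency-free sketch of the same chaining pattern (`handler`, `interceptor`, and `chain` here are simplified stand-ins, not the real gRPC types):

```go
package main

import "fmt"

// handler is a simplified stand-in for a gRPC unary handler.
type handler func(req string) string

// interceptor wraps a handler, mirroring how grpc.ChainUnaryInterceptor
// composes unary interceptors.
type interceptor func(next handler) handler

// chain applies interceptors so the first one wraps the whole request,
// which is why the service debug interceptor should come early.
func chain(h handler, ics ...interceptor) handler {
	for i := len(ics) - 1; i >= 0; i-- {
		h = ics[i](h)
	}
	return h
}

// named records entry and exit so the wrapping order is visible.
func named(name string, trace *[]string) interceptor {
	return func(next handler) handler {
		return func(req string) string {
			*trace = append(*trace, name+" in")
			resp := next(req)
			*trace = append(*trace, name+" out")
			return resp
		}
	}
}

func main() {
	var trace []string
	h := chain(func(req string) string { return "resp" },
		named("debug", &trace), // first: wraps the entire lifecycle
		named("auth", &trace),
	)
	h("req")
	fmt.Println(trace) // debug enters first and exits last
}
```

Placing the debug interceptor first means it can stamp the request context before any other interceptor runs, and collect messages emitted anywhere below it on the way out.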
2. Emitting Debug Messages in Your Service
Now, let's see how a developer would actually use the framework inside a service handler. The framework provides a simple function, AddMessagef, which takes a message-constructing closure and a verbosity level. The closure is only invoked, and the message only constructed and stored, if the request's debug level for this service is high enough.
```go
// in your service implementation file
import (
	"context"

	"google.golang.org/protobuf/proto"

	pb "github.com/my-org/myawesomeservice/proto" // generated stubs (illustrative path)
	"github.com/my-org/servicedebug"              // Your internal framework path
	"github.com/my-org/some-internal-proto/infopb"
)

// myServiceImpl implements the gRPC service.
type myServiceImpl struct {
	// ... dependencies
}

func (s *myServiceImpl) GetData(ctx context.Context, req *pb.GetDataRequest) (*pb.GetDataResponse, error) {
	// ... main business logic ...

	// Add a debug message. The closure is only evaluated if the debug
	// level for "MyAwesomeService" is 2 or greater for this specific request.
	servicedebug.AddMessagef(ctx, func() proto.Message {
		return &infopb.DetailedState{
			Info:              "Starting to process GetData request",
			IntermediateValue: 42,
		}
	}, 2) // The '2' is the verbosity level for this message.

	// ... call another service, run some computations ...
	result := "here is your data"

	// Add another message, at a lower verbosity level.
	servicedebug.AddMessagef(ctx, func() proto.Message {
		return &infopb.Summary{
			Info: "Finished processing, found data.",
		}
	}, 1) // Level 1: included if the request's level is 1 or greater.

	return &pb.GetDataResponse{SomeData: result}, nil
}
```
3. The Final Response
After the request has gone through ServiceA and ServiceB, the final JSON response from the gateway would look something like this. The service_debug field contains the aggregated messages from all participating services, giving you a complete picture of the transaction.
```json
{
  "some_data": "here is your data",
  "service_debug": {
    "ServiceA": {
      "any_messages": [
        {
          "@type": "type.googleapis.com/my_org.infopb.DetailedState",
          "info": "Starting to process GetData request",
          "intermediateValue": 42
        },
        {
          "@type": "type.googleapis.com/my_org.infopb.Summary",
          "info": "Finished processing, found data."
        }
      ]
    },
    "ServiceB": {
      "any_messages": [
        {
          "@type": "type.googleapis.com/my_org.downstream.Status",
          "info": "Received request from ServiceA, processing lookup.",
          "lookupId": "xyz-123"
        }
      ]
    }
  }
}
```
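How the per-service messages accumulate is hidden by the framework, but conceptually each service merges the debug payloads returned by its downstream calls into its own map before responding. A minimal sketch, with strings standing in for the proto `Any` messages:

```go
package main

import (
	"fmt"
	"sort"
)

// DebugPayload maps a service name to the messages it emitted. In the real
// framework the values would be packed Any messages; strings keep this
// sketch self-contained.
type DebugPayload map[string][]string

// Merge folds a downstream service's debug payload into ours, so messages
// accumulate as responses travel back up the call chain.
func (p DebugPayload) Merge(downstream DebugPayload) {
	for svc, msgs := range downstream {
		p[svc] = append(p[svc], msgs...)
	}
}

func main() {
	// ServiceA's own messages.
	agg := DebugPayload{"ServiceA": {
		"Starting to process GetData request",
		"Finished processing, found data.",
	}}
	// Payload returned by ServiceB is merged in before ServiceA responds.
	agg.Merge(DebugPayload{"ServiceB": {
		"Received request from ServiceA, processing lookup.",
	}})

	services := make([]string, 0, len(agg))
	for svc := range agg {
		services = append(services, svc)
	}
	sort.Strings(services)
	fmt.Println(services) // both services' messages reach the entrypoint
}
```

Keying the payload by service name means merging is a simple append per key, and the gateway can render the whole structure directly into the `service_debug` JSON field.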
This structured, on-demand output provides deep visibility into your microservices architecture without the noise of traditional logging.
By following this simple pattern (adding a map field to the response and implementing one method), any service can seamlessly integrate with the distributed debugging framework, making its internal state observable on demand.
Developers interact with nothing more than the AddMessage function; the complexity of propagation and collection is abstracted away. By treating debug information as a first-class citizen of the request lifecycle, we can turn the opaque, distributed nature of microservices into a transparent, observable system.


