Logger Patch Guide: Tag-Independent HTTP Logs in a Self-Hosted sGTM Setup
If you run self-hosted sGTM and conversion quality suddenly drops, start with runtime HTTP logs before touching individual tags.
In a Node.js-based sGTM container, a logger patch gives you two streams in one place: inbound requests entering the server and outbound requests sent to Meta, TikTok, GA4, CRM APIs, and other endpoints. That view helps you separate input issues, transformation issues, and delivery issues without guesswork.
If logging stays tag-level only, you can miss real production traffic in self-hosted environments and spend hours debugging the wrong layer. Runtime-level visibility first, tag-level debugging second.
Why this is possible
Server-side GTM is still an application process in a container (Node.js runtime), not a black box.
That means you can patch runtime-level HTTP APIs and observe network I/O without rewriting every integration path.
At the same time, this needs discipline:
- keep patches small and observable
- benchmark under load before production rollout
- treat upstream updates carefully, because runtime behavior can change
How it works
The model is simple:
- Hook incoming HTTP requests.
- Hook outgoing HTTP/HTTPS requests.
- Emit structured JSON logs with key fields:
  - method
  - URL
  - status
  - duration
  - headers
  - body (size-limited)
- Write to stdout so your logging stack can ingest events.
In practice, this gives you a clear timeline of what came in, what went out, and how long each step took.
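To make the timeline concrete, here is a hypothetical pair of log lines for a single event: the inbound hit, then the outbound call it triggered. The URLs, timings, and timestamps are made up for illustration; they are not output from a real deployment.

```javascript
// Hypothetical example of the two log streams for one event.
// All values are illustrative.
const inbound = {
  type: 'inbound',
  method: 'POST',
  url: 'https://sgtm.example.com/g/collect',
  status: 200,
  duration_ms: 48,
  timestamp: '2024-05-01T12:00:00.000Z'
};
const outbound = {
  type: 'outbound',
  method: 'POST',
  url: 'https://capi.example.com/events',
  status: 200,
  duration_ms: 31,
  timestamp: '2024-05-01T12:00:00.040Z'
};

// One JSON object per line, ready for a log collector to ingest.
console.log(JSON.stringify(inbound));
console.log(JSON.stringify(outbound));
```

Reading the two streams side by side is what lets you separate "bad input arrived" from "good input was transformed or delivered badly".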
Core ideas behind this approach
1) Monkey patching by design
You wrap original Node.js runtime methods (http / https) and observe traffic before/after the original behavior.
Why teams choose this:
- no need to rewrite business logic
- no need to touch every tag/integration
- one patch can cover broad HTTP I/O paths
2) Tag-independent visibility
Tag-level logging only sees what tag code exposes. Runtime patching sees process-level HTTP I/O, which is much more reliable in self-hosted environments.
This gives you:
- logs even when a specific tag is misconfigured
- a clear inbound-vs-outbound comparison
- less dependence on UI preview limitations
3) Out-of-container observability
Since logs go to stdout, you can inspect them from outside the pod/container:
- kubectl logs
- Fluent Bit / Vector / Promtail
- Loki / ELK / Datadog
You do not need to exec into containers to debug production traffic.
What this does not give you automatically
- business-event correlation by itself
- full root-cause classification
- distributed tracing semantics
- visibility into flows that bypass patched runtime APIs
Production logger patch (Node.js)
// logger-patch.cjs
// Patches Node's http/https runtime APIs to emit structured JSON logs for
// inbound and outbound HTTP traffic. Empty catch blocks are intentional:
// observability code must never break request handling.
const https = require('https');
const http = require('http');
const { URL } = require('url');

// Optional trace-context wiring, resolved lazily from tracer-patch.cjs.
let getCurrentTraceContext;
function getTraceContext() {
  if (!getCurrentTraceContext) {
    try {
      const tracer = require('./tracer-patch.cjs');
      getCurrentTraceContext = tracer.getCurrentTraceContext || (() => ({ trace_id: null, span_id: null }));
    } catch (e) {
      getCurrentTraceContext = () => ({ trace_id: null, span_id: null });
    }
  }
  return getCurrentTraceContext();
}

const MAX_BODY_SIZE = 10 * 1024; // 10 KB capture cap per body

function isBase64(str) {
  if (!str || str.length < 4) return false;
  const base64Regex = /^[A-Za-z0-9+/]+=*$/;
  return base64Regex.test(str) && str.length % 4 === 0;
}

// Decode a base64 value only when the result is printable ASCII.
function tryDecodeBase64(value) {
  try {
    if (isBase64(value)) {
      const decoded = Buffer.from(value, 'base64').toString('utf-8');
      if (decoded.length > 0 && /^[\x20-\x7E\s]*$/.test(decoded)) {
        return decoded;
      }
    }
  } catch (e) {
  }
  return value;
}

// Mark base64-encoded query parameters with their decoded form.
function redactUrl(urlString) {
  try {
    const urlObj = new URL(urlString);
    let redacted = false;
    const decodedParams = {};
    for (const [key, value] of urlObj.searchParams.entries()) {
      const decoded = tryDecodeBase64(value);
      if (decoded !== value) {
        decodedParams[key] = decoded;
      }
    }
    for (const [key, decodedValue] of Object.entries(decodedParams)) {
      urlObj.searchParams.set(key, `[base64:${decodedValue}]`);
      redacted = true;
    }
    return redacted ? urlObj.toString() : urlString;
  } catch (e) {
    return urlString;
  }
}

// Parse a captured body as JSON when it looks like JSON, else return text.
function tryParseBody(buffer) {
  if (!buffer || buffer.length === 0) return undefined;
  try {
    const str = buffer.toString('utf8');
    if (str.startsWith('{') || str.startsWith('[')) {
      return JSON.parse(str);
    }
    return str;
  } catch (e) {
    return buffer.toString('utf8');
  }
}

// Emit one structured JSON log line to stdout.
function log(type, method, url, status, duration, reqBody, resBody, reqHeaders, resHeaders) {
  try {
    const { trace_id, span_id } = getTraceContext();
    const logEntry = {
      type,
      method,
      url: redactUrl(url),
      status,
      duration_ms: duration,
      timestamp: new Date().toISOString()
    };
    if (reqBody) logEntry.RequestBody = reqBody;
    if (resBody) logEntry.ResponseBody = resBody;
    if (reqHeaders) logEntry.RequestHeaders = reqHeaders;
    if (resHeaders) logEntry.ResponseHeaders = resHeaders;
    if (trace_id) {
      logEntry.trace_id = trace_id;
      logEntry.span_id = span_id;
    }
    console.log(JSON.stringify(logEntry));
  } catch (err) {
  }
}

// Accumulates body chunks up to MAX_BODY_SIZE, then hands back one Buffer.
function createImmediateBodyCapture() {
  let chunks = [];
  let totalSize = 0;
  let finalized = false;
  return {
    addChunk: function (chunk) {
      if (finalized || totalSize >= MAX_BODY_SIZE) return;
      try {
        const buf = Buffer.isBuffer(chunk) ? chunk : Buffer.from(chunk);
        chunks.push(buf);
        totalSize += buf.length;
      } catch (e) {
      }
    },
    finalize: function () {
      if (finalized) return null;
      finalized = true;
      try {
        if (chunks.length === 0) return null;
        const result = Buffer.concat(chunks);
        chunks = [];
        totalSize = 0;
        return result;
      } catch (e) {
        chunks = [];
        totalSize = 0;
        return null;
      }
    }
  };
}

// Observe a readable stream's data without consuming it (wraps emit).
function spyOnReadableStream(stream, onComplete) {
  try {
    const capture = createImmediateBodyCapture();
    const originalEmit = stream.emit;
    stream.emit = function (event, ...args) {
      try {
        if (event === 'data' && args[0]) {
          capture.addChunk(args[0]);
        } else if (event === 'end') {
          const body = capture.finalize();
          if (body) {
            onComplete(body);
          }
        }
      } catch (e) {
      }
      return originalEmit.apply(this, arguments);
    };
  } catch (e) {
  }
}

// Observe a writable stream's outgoing body (wraps write/end).
function spyOnWritableStream(stream, onComplete) {
  try {
    const capture = createImmediateBodyCapture();
    const originalWrite = stream.write;
    const originalEnd = stream.end;
    stream.write = function (chunk, ...args) {
      try {
        if (chunk) capture.addChunk(chunk);
      } catch (e) {
      }
      return originalWrite.apply(this, [chunk, ...args]);
    };
    stream.end = function (chunk, ...args) {
      try {
        if (chunk) capture.addChunk(chunk);
        const body = capture.finalize();
        if (body) {
          onComplete(body);
        }
      } catch (e) {
      }
      return originalEnd.apply(this, [chunk, ...args]);
    };
  } catch (e) {
  }
}

// Wrap module.request (and get) to log every outbound call.
function patchRequest(module, protocol) {
  const originalRequest = module.request;
  module.request = function (...args) {
    const startTime = Date.now();
    let reqBodyBuffer = null;
    try {
      const req = originalRequest.apply(this, args);
      spyOnWritableStream(req, (buffer) => {
        reqBodyBuffer = buffer;
      });
      // Reconstruct the target URL from the various request() signatures.
      let url = 'unknown';
      try {
        if (args[0] instanceof URL) url = args[0].toString();
        else if (typeof args[0] === 'string') url = args[0];
        else if (args[0] && typeof args[0] === 'object') {
          const host = args[0].hostname || args[0].host || 'localhost';
          const path = args[0].path || '/';
          const proto = args[0].protocol || protocol;
          url = `${proto}//${host}${path}`;
        }
      } catch (e) {
      }
      req.on('response', (res) => {
        let resBodyBuffer = null;
        try {
          spyOnReadableStream(res, (buffer) => {
            resBodyBuffer = buffer;
          });
          res.on('end', () => {
            try {
              const duration = Date.now() - startTime;
              const reqHeaders = req.getHeaders ? req.getHeaders() : req._headers;
              const resHeaders = res.headers;
              log('outbound', req.method || 'GET', url, res.statusCode, duration, tryParseBody(reqBodyBuffer), tryParseBody(resBodyBuffer), reqHeaders, resHeaders);
            } catch (e) {
            }
          });
        } catch (e) {
        }
      });
      return req;
    } catch (e) {
      return originalRequest.apply(this, args);
    }
  };
  // get() is request() + end(); route it through the patched request.
  module.get = function (...args) {
    const req = module.request(...args);
    req.end();
    return req;
  };
}

patchRequest(https, 'https:');
patchRequest(http, 'http:');

// Wrap createServer to log every inbound request once the response finishes.
const originalCreateServer = http.createServer;
http.createServer = function (...args) {
  const server = originalCreateServer.apply(this, args);
  server.on('request', (req, res) => {
    const startTime = Date.now();
    let reqBodyBuffer = null;
    let resBodyBuffer = null;
    try {
      spyOnReadableStream(req, (buffer) => {
        reqBodyBuffer = buffer;
      });
      spyOnWritableStream(res, (buffer) => {
        resBodyBuffer = buffer;
      });
      res.on('finish', () => {
        try {
          if (req.url === '/healthz') return; // skip health-check noise
          const duration = Date.now() - startTime;
          const protocol = (req.socket && req.socket.encrypted) ? 'https' : 'http';
          const host = req.headers.host || 'unknown';
          const url = `${protocol}://${host}${req.url}`;
          const reqHeaders = req.headers;
          const resHeaders = res.getHeaders();
          log('inbound', req.method, url, res.statusCode, duration, tryParseBody(reqBodyBuffer), tryParseBody(resBodyBuffer), reqHeaders, resHeaders);
        } catch (e) {
        }
      });
    } catch (e) {
    }
  });
  return server;
};
This patch already includes inbound/outbound capture, request/response body spying with size caps, URL normalization, get() support, healthz filtering, and optional trace context wiring via tracer-patch.cjs.
Production hardening still recommended:
- strong redaction for sensitive fields
- endpoint-specific sampling and filters
- schema/versioning for log payloads
- env-based controls (body limit, header allowlist, verbosity)
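The env-based controls can be wired like this. The variable names (LOG_BODY_LIMIT, LOG_HEADER_ALLOWLIST) are hypothetical and not part of the patch above; align them with your deployment conventions.

```javascript
// Sketch: environment-driven controls for the logger patch.
// LOG_BODY_LIMIT and LOG_HEADER_ALLOWLIST are made-up names for illustration.
const BODY_LIMIT = parseInt(process.env.LOG_BODY_LIMIT || '10240', 10);
const HEADER_ALLOWLIST = (process.env.LOG_HEADER_ALLOWLIST || 'content-type,host,user-agent')
  .split(',')
  .map((h) => h.trim().toLowerCase());

// Keep only allowlisted headers before a log entry is serialized.
function filterHeaders(headers) {
  const out = {};
  for (const [key, value] of Object.entries(headers || {})) {
    if (HEADER_ALLOWLIST.includes(key.toLowerCase())) out[key] = value;
  }
  return out;
}
```

Driving the limits from the environment lets you tighten or loosen capture per environment without rebuilding the image.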
Implementation checklist
Step 1: Define a log contract
Minimum fields:
- type (inbound or outbound)
- method
- url
- status
- duration_ms
- timestamp
Useful optional fields:
- request_headers
- response_headers
- request_body
- response_body
- trace_id, span_id
Step 2: Control log volume
- cap body size (for example, 10 KB)
- skip large binary/stream payloads
- filter noisy endpoints
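The three volume controls above can be collapsed into a single predicate. The endpoint names and 10% sample rate below are illustrative, not prescriptive:

```javascript
// Sketch: decide whether to log a given request at all.
// SKIP_PATHS and SAMPLE_RATE are example values, not recommendations.
const SKIP_PATHS = ['/healthz', '/metrics'];
const SAMPLE_RATE = 0.1; // keep ~10% of traffic

function shouldLog(url, contentLength) {
  const path = new URL(url).pathname;
  if (SKIP_PATHS.includes(path)) return false;                  // noisy endpoints
  if (contentLength && contentLength > 10 * 1024) return false; // large payloads
  return Math.random() < SAMPLE_RATE;                           // probabilistic sampling
}
```

Call shouldLog() before serializing the entry so skipped requests pay almost no logging cost.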
Step 3: Redact sensitive data
Mask at minimum:
- authorization
- cookie
- token
- access_token
- refresh_token
- password
- secret
- api_key
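A minimal redaction pass over the field list above might look like this. It handles flat objects only; real payloads usually need a recursive walk over nested structures.

```javascript
// Sketch: mask sensitive fields before a log entry is serialized.
// Flat objects only; extend with recursion for nested payloads.
const SENSITIVE_KEYS = new Set([
  'authorization', 'cookie', 'token', 'access_token',
  'refresh_token', 'password', 'secret', 'api_key'
]);

function redactFields(obj) {
  if (!obj || typeof obj !== 'object') return obj;
  const out = {};
  for (const [key, value] of Object.entries(obj)) {
    out[key] = SENSITIVE_KEYS.has(key.toLowerCase()) ? '[REDACTED]' : value;
  }
  return out;
}
```

Apply it to headers and bodies alike, and prefer extending the set over trimming it.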
Step 4: Inject patch at runtime
A common runtime injection pattern:
NODE_OPTIONS=--require /app/logger-patch.cjs
Step 5: Ship logs centrally
app stdout -> log collector -> Loki/ELK/Datadog -> dashboards + alerts
Docker / Kubernetes quick apply
Docker flow
FROM node:20-alpine
WORKDIR /app
COPY package*.json ./
RUN npm ci --omit=dev
COPY . .
COPY logger-patch.cjs /app/logger-patch.cjs
ENV NODE_OPTIONS="--require /app/logger-patch.cjs"
CMD ["node", "server.js"]
Kubernetes flow
kubectl -n <namespace> create configmap logger-patch \
--from-file=logger-patch.cjs=./logger-patch.cjs \
-o yaml --dry-run=client | kubectl apply -f -
kubectl -n <namespace> apply -f deployment.yaml
kubectl -n <namespace> rollout restart deployment/<app-name>
kubectl -n <namespace> logs deployment/<app-name> --tail=200 -f
If you run multiple containers per pod, apply the patch to the Node.js container specifically, and use kubectl logs -c <container> to read that container's stream.
ConfigMap + deployment example:
apiVersion: v1
kind: ConfigMap
metadata:
  name: logger-patch
data:
  logger-patch.cjs: |
    // paste your logger-patch.cjs here
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: app
  template:
    metadata:
      labels:
        app: app
    spec:
      containers:
        - name: app
          image: your-image:latest
          env:
            - name: NODE_OPTIONS
              value: "--require /app/logger-patch.cjs"
          volumeMounts:
            - name: logger-patch
              mountPath: /app/logger-patch.cjs
              subPath: logger-patch.cjs
              readOnly: true
      volumes:
        - name: logger-patch
          configMap:
            name: logger-patch
Risks and mitigations
Sensitive data leakage
Risk: logs can contain personal or secret data.
Mitigation: strict redaction, allowlist-based fields, short retention, RBAC access.
Log cost explosion
Risk: traffic growth can spike storage/processing cost.
Mitigation: sampling, body caps, endpoint filters, environment-based log levels.
Performance overhead
Risk: patching + serialization adds CPU/memory overhead.
Mitigation: keep processing lightweight, cap payload size, benchmark under load.
Noise instead of signal
Risk: too many logs with low value.
Mitigation: consistent schema, focused fields, dashboards for latency/error/inbound-vs-outbound deltas.
Where this is most useful
- debugging CAPI pipelines
- validating outbound payload quality before ad platforms receive data
- investigating webhook failures
- enforcing data contract checks
- incident response when conversion rates suddenly drop
Final takeaway
This is not a full observability platform, and it does not need to be.
A runtime logger patch is a compact layer that helps you see what came in, what went out, and where failures start. With proper redaction and log-volume controls, it becomes a practical debugging baseline for self-hosted tracking.