
Yann Schwartz

Software Engineer

Stumbled into biggish data a decade ago, tried to batch and stream my way out of it, and now trying out a simpler way.

Past conferences

Arnaud Bailly / Yann Schwartz
Code Mesh LDN 2018
08 Nov 2018
15.35 - 16.20

One Log (INTERMEDIATE)

Every application has a narrator commenting on its execution, be it a humble println or a more structured log. But this narrator is unreliable: it decides what's important and what isn't, forgets to mention the juiciest parts of the plot, and usually rambles on for gigabytes.

There's more narration coming out of your application: metrics, tracing, and all the system chatter that surrounds a running process - databases, message queues, git DAGs, etc. - which deep down are logs. At the other end of the spectrum, Event Sourcing aims to exhaustively describe every event affecting the state of the application, focusing on its dynamics. Logs - all the way down.

But what if there was One Log? What if we used well-structured messages, integrating DTrace application logs, iostat metrics, Prometheus signals, and domain events into a single stream? What if we relinquished up-front filtering and throttling and let serendipity do its job? What if the separate realms of information we work with (business events, Kibana views, Grafana boards) were just views of one big stream of log events?
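The "single stream" idea can be sketched concretely: normalize events from each source into a common envelope and merge them by timestamp. This is a minimal illustration, not the talk's implementation; the sample sources, field names, and envelope shape are all invented for the example.

```python
import heapq
import json

# Hypothetical events from three sources, each already normalized to a
# common envelope: (timestamp, source, payload). All names are illustrative.
app_logs = [(1.0, "app", {"level": "info", "msg": "request started"}),
            (3.0, "app", {"level": "info", "msg": "request done"})]
metrics = [(1.5, "iostat", {"device": "sda", "util_pct": 42.0})]
domain = [(2.0, "orders", {"event": "OrderPlaced", "order_id": "A-1"})]

def one_log(*streams):
    """Merge several time-ordered event streams into one ordered log."""
    # heapq.merge assumes each input stream is already sorted by the key.
    return list(heapq.merge(*streams, key=lambda e: e[0]))

for ts, source, payload in one_log(app_logs, metrics, domain):
    print(json.dumps({"ts": ts, "source": source, **payload}))
```

With everything flattened into one timestamped stream, downstream views (dashboards, alerts, business reports) become queries over the same log rather than separate systems.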

This session will be a live-running experiment exploring what information we can harvest from this hoard of data. Building upon a simple event-sourced application, we'll aggregate more events, implement traffic replay as a reverse event log, embrace system logs, treat log streams as a language with its own semantics and model, and see what that emerging narrative tells us.
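"Log streams as a language" can be made tangible with a tiny regular grammar over event types, checked by a finite-state machine: a well-formed trace is a sentence of the grammar, and anything the machine rejects is worth investigating. The event names and transitions below are invented for illustration, not taken from the talk.

```python
# A minimal sketch of a log stream grammar: which event may follow which,
# encoded as (state, event) -> next state. States and events are hypothetical.
TRANSITIONS = {
    ("idle", "RequestReceived"): "open",
    ("open", "DbQuery"): "open",       # any number of queries while open
    ("open", "ResponseSent"): "idle",
}

def accepts(events, state="idle"):
    """Return True if the event sequence is a sentence of the grammar."""
    for event in events:
        state = TRANSITIONS.get((state, event))
        if state is None:          # no rule allows this event here
            return False
    return state == "idle"         # a well-formed trace ends back at idle
```

Run over a live stream, the same machine doubles as an anomaly detector: a `DbQuery` with no preceding `RequestReceived`, or a request that never sees its `ResponseSent`, is a trace the grammar does not accept.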

OBJECTIVES

See logs as a system's main output. And what's a log stream grammar?