Distributed Timer With Phoenix LiveView
I recently had the pleasure of implementing a distributed timer in a Phoenix LiveView application. In particular, one user can start a timer, and the other users in their group will see the same timer with roughly the same time. The timer can be paused, restarted, stopped, and have time added to it. This post will outline the timer’s implementation, explaining my thought process with some code snippets along the way. I assume the reader has some understanding of the architecture of a LiveView application.
This post focuses specifically on state management and communication between the client and server. We’ll discuss the use of client-side hooks, server and client-pushed events, and Phoenix PubSub along with the system design concerns that prompted their use. In the interest of brevity, code snippets elide many details outside of that focus—these snippets are for educational purposes and not production-ready.
In order to deliver a good user experience, the timer has a few requirements:
- Advances smoothly in the user’s browser regardless of network conditions
- May be off from other users’ timers by a few seconds
- Does not rely on the accuracy of the user’s clock
- Retains its state on page refresh or when any or all users disconnect
State
Let’s break down where these requirements demand that we store state.
Retains its state on page refresh or when any or all users disconnect
All LiveViews and browser tabs in the group of users may be closed, so we should not rely on them to store timer state. This means we’ll need to store some timer state in the database.
Advances smoothly in the user’s browser regardless of network conditions
We cannot rely on communication from the server to trigger the timer to tick down, so we must store some timer state on the client.
Database
I chose to store the server-side state in a table with two timer-related columns:

- `started_at` (optional timestamp)
- `duration_seconds` (integer)

A running timer has `started_at` set to when the timer started and `duration_seconds` set to the number of seconds after `started_at` the timer will expire. This representation assumes that the server’s clock is relatively close to the database’s clock. I’m happy to accept that occasionally these clocks may not be synchronized, but I expect this to work well the vast majority of the time.

A paused timer has `started_at` set to null and `duration_seconds` set to the number of seconds the timer has remaining.

A stopped timer has no row present in the database.
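To make the three states concrete, here is the mapping from stored columns to client payload sketched in JavaScript (a hypothetical `timerUpdatePayload` helper for illustration only; the actual computation happens on the server in Elixir):

```javascript
// Derive the client-facing payload from the stored row.
// A stopped timer has no row, represented here as `null`.
function timerUpdatePayload(row, nowMs = Date.now()) {
  if (row === null) {
    // Stopped: no timer exists.
    return { timer_running: false, remaining_seconds: 0 };
  }
  if (row.started_at === null) {
    // Paused: duration_seconds already holds the remaining time.
    return { timer_running: false, remaining_seconds: row.duration_seconds };
  }
  // Running: subtract the time elapsed since started_at.
  const elapsed = Math.floor((nowMs - row.started_at.getTime()) / 1000);
  return { timer_running: true, remaining_seconds: row.duration_seconds - elapsed };
}
```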
Frontend
Since one of our requirements is

Does not rely on the accuracy of the user’s clock

we will store state slightly differently in the frontend, as an object with two keys:

- `timer_running` (boolean)
- `remaining_seconds` (integer)

By computing `remaining_seconds` on the server and passing it to the client, we avoid relying on the accuracy of the user’s clock, and accept that the network latency in communicating `remaining_seconds` from the server to the client may result in different users’ timers being off by a few seconds.
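To illustrate the distinction, here is a small JavaScript sketch (hypothetical helper names, not code from the application): a deadline-based timer trusts the client’s clock to be correct, while a countdown seeded from a server-computed `remaining_seconds` only needs the client to tick at roughly one-second intervals.

```javascript
// Deadline-based countdown: depends on the client's clock being correct.
// If the user's clock is five minutes fast, the timer is five minutes off.
function remainingFromDeadline(deadlineEpochMs, nowMs = Date.now()) {
  return Math.round((deadlineEpochMs - nowMs) / 1000);
}

// Server-seeded countdown (the approach in this post): the server computes
// remaining_seconds and the client just decrements it once per second.
// The client's clock only needs a steady tick rate, not the correct time.
function makeCountdown(remainingSeconds) {
  let remaining = remainingSeconds;
  return {
    tick() {
      remaining -= 1;
      return remaining;
    },
    get remaining() {
      return remaining;
    },
  };
}
```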
Rendering
We need to render the timer to the user from client-side state, so we cannot rely on the typical LiveView paradigm, where the DOM is a reflection of the state of the LiveView process. Instead, we will set `phx-update="ignore"` on the timer element and update it manually using a hook:

```html
<div id="timer" phx-hook="Timer" phx-update="ignore" data-test="group-timer">
  <!-- Timer display updated by JavaScript -->
</div>
```
The hook looks something like this:
```javascript
Hooks.Timer = {
  mounted() {
    this.handleEvent("timer_update", ({ remaining_seconds, timer_running }) => {
      this.remainingSeconds = remaining_seconds;

      // Clear existing interval if timer is not running
      if (!timer_running && this.interval) {
        clearInterval(this.interval);
        this.interval = null;
      }

      // Start interval if timer is running and no interval exists
      if (timer_running && !this.interval) {
        this.interval = setInterval(() => {
          this.remainingSeconds--;
          this.updateDisplay();
        }, 1000);
      }

      // Reflect the latest server state immediately, whether the timer is
      // paused, newly started, or already running
      this.updateDisplay();
    });

    this.pushEvent("timer_mounted", {});
  },

  // Clean up interval when element is removed
  destroyed() {
    if (this.interval) {
      clearInterval(this.interval);
      this.interval = null;
    }
  },

  // Update formatted time in DOM
  updateDisplay() {
    const isNegative = this.remainingSeconds < 0;
    const absoluteRemaining = Math.abs(this.remainingSeconds);
    const minutes = Math.floor(absoluteRemaining / 60);
    const seconds = absoluteRemaining % 60;
    const formatted = `${minutes}:${seconds.toString().padStart(2, "0")}`;
    this.el.innerText = isNegative ? `-${formatted}` : formatted;
  }
};
```
The hook is responsible for ensuring that an interval runs every second while the timer is running, decrementing the remaining seconds and reflecting each change in the DOM. It keeps the client-side state up to date as `"timer_update"` events arrive from the server. It also sends a `"timer_mounted"` event to the server to announce that it is ready for timer updates.
Communication
```mermaid
sequenceDiagram
    participant Browser
    participant LiveView
    participant DB
    Browser->>LiveView: pushEvent("timer_mounted")
    LiveView->>DB: Get timer (started_at, duration)
    LiveView->>Browser: push_event("timer_update")
    Browser->>Browser: JS Hook updates timer display
```
When the server receives a `"timer_mounted"` event from the client, it fetches the timer data from the database, computes the remaining seconds, and pushes an event to the client with the client-side timer state. The event handler looks a bit like this:
```elixir
def handle_event("timer_mounted", _, socket) do
  timer = Timers.get_timer(socket.assigns.group.id)

  {:noreply,
   socket
   |> push_event("timer_update", timer_update_payload(timer))
   |> assign(timer: timer)}
end

defp timer_update_payload(%Timer{} = timer) do
  remaining_seconds =
    case timer.started_at do
      # Timer is paused - return the stored duration as is
      nil ->
        timer.duration_seconds

      # Timer is running - calculate remaining time
      started_at ->
        elapsed_seconds = DateTime.diff(DateTime.utc_now(), started_at, :second)
        timer.duration_seconds - elapsed_seconds
    end

  %{
    timer_running: not is_nil(timer.started_at),
    remaining_seconds: remaining_seconds
  }
end

defp timer_update_payload(nil) do
  %{timer_running: false, remaining_seconds: 0}
end
```
There are similar event handlers for starting, pausing, stopping, or adding time to a timer, though these also execute the corresponding action.
This is sufficient to keep the DOM up to date for the user modifying the timer. But what about the other users in the group? We publish a PubSub message to a group-specific topic whenever the timer is changed. Each user’s LiveView process is subscribed to this topic and pushes a `"timer_update"` event to its client when it receives an update. Note how we use `Phoenix.PubSub.broadcast_from!` so the publishing user doesn’t receive their own broadcast—we push the event directly to them instead, avoiding duplicate timer updates.
```mermaid
sequenceDiagram
    participant User
    participant Browser
    participant LiveView
    participant DB
    participant PubSub
    User->>Browser: Clicks "Start Timer"
    Browser->>LiveView: pushEvent("start_timer")
    LiveView->>DB: Insert timer (started_at, duration)
    LiveView->>Browser: push_event("timer_update")
    Browser->>Browser: JS Hook updates timer display
    LiveView->>PubSub: Publish timer update
    PubSub->>LiveView: (other users) Receive update
    LiveView->>Browser: push_event("timer_update")
    Browser->>Browser: JS Hook updates timer display
```
The PubSub handling looks something like this:

```elixir
def mount(params, _session, socket) do
  group = Groups.get_group!(params["group_id"])

  if connected?(socket) do
    Phoenix.PubSub.subscribe(
      MyApp.PubSub,
      "group:#{group.id}:timer_update"
    )
  end

  {:ok, assign(socket, group: group, timer: Timers.get_timer(group.id))}
end

def handle_event(
      "start_timer",
      %{"duration_seconds" => duration_seconds},
      socket
    ) do
  timer =
    Timers.insert_timer!(%{
      group_id: socket.assigns.group.id,
      duration_seconds: duration_seconds,
      started_at: DateTime.utc_now()
    })

  Phoenix.PubSub.broadcast_from!(
    MyApp.PubSub,
    self(),
    "group:#{socket.assigns.group.id}:timer_update",
    {:timer_update, timer}
  )

  {:noreply,
   socket
   |> push_event("timer_update", timer_update_payload(timer))
   |> assign(timer: timer)}
end

def handle_info({:timer_update, timer}, socket) do
  {:noreply,
   socket
   |> push_event("timer_update", timer_update_payload(timer))
   |> assign(timer: timer)}
end
```
Optimistic Updates
More latency-sensitive operations, like adding time (which users may spam-click), are implemented with optimistic updates: they update the client-side state immediately instead of waiting for a success response from the server.
```mermaid
sequenceDiagram
    participant User
    participant Browser
    participant LiveView
    participant DB
    participant PubSub
    User->>Browser: Clicks "Add one minute"
    Browser->>Browser: JS Hook updates timer display
    Browser->>LiveView: pushEvent("add_duration")
    LiveView->>DB: Update timer (duration)
    LiveView->>Browser: push_event("timer_update")
    Browser->>Browser: JS Hook updates timer display
    LiveView->>PubSub: Publish timer update
    PubSub->>LiveView: (other users) Receive update
    LiveView->>Browser: push_event("timer_update")
    Browser->>Browser: JS Hook updates timer display
```
This flow illustrates a nice consequence of structuring the timer updates from server to client to include the full state of the timer rather than a diff (such as “60 seconds were added to the duration”). Whenever an update arrives from the server, it corrects any drift that may have occurred between client-side and server-side state. We can make optimistic updates with little risk, since the next update from the server will correct any errors.
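The reconciliation can be sketched in JavaScript (hypothetical helper names, not the actual hook code): the client bumps its local count immediately, and any later full-state update from the server simply overwrites it.

```javascript
// Local client-side timer state (mirrors what the hook tracks).
const timerState = { remainingSeconds: 0, timerRunning: false };

// Optimistic update: bump the local count immediately so the UI responds
// instantly, then notify the server. `pushEvent` stands in for the hook's
// pushEvent function.
function addMinuteOptimistically(pushEvent) {
  timerState.remainingSeconds += 60;
  pushEvent("add_duration", { seconds: 60 });
}

// Server reconciliation: because "timer_update" carries the full timer
// state rather than a diff, we can replace local state wholesale, which
// corrects any drift introduced by optimistic updates or missed ticks.
function applyServerUpdate({ remaining_seconds, timer_running }) {
  timerState.remainingSeconds = remaining_seconds;
  timerState.timerRunning = timer_running;
}
```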
Testing
One of my favorite features of working with Phoenix LiveView is `Phoenix.LiveViewTest`, which allows for writing end-to-end-style tests that run quickly and reliably. We can even test that timer changes propagate to other users’ LiveView sessions, which would be difficult or impossible with any other tool I’m aware of. LiveView tests do not run JavaScript, so we cannot make assertions about the timer DOM element updating—we can only verify the element’s presence or absence in the DOM and that the server sends the right messages to the client.
Here is a lightly-edited version of my test verifying that when a user starts a timer, users receive the expected `"timer_update"` event and see the timer DOM element. The helper functions `generate/1`, `user/0`, and `group/1` insert fixture data into the database.
```elixir
describe "timer functionality" do
  test "moderator starting timer sends push event to participant", %{conn: pub_conn} do
    # Create moderator and participant users
    moderator = generate(user())
    participant = generate(user())

    # Create group with moderator
    group = generate(group(participants: [participant], moderator: moderator))

    # Both users load the page
    {:ok, mod_view, _} = live(log_in_user(pub_conn, moderator), "/groups/#{group.id}")

    participant_conn = Phoenix.ConnTest.build_conn()

    {:ok, participant_view, _} =
      live(log_in_user(participant_conn, participant), "/groups/#{group.id}")

    # Moderator clicks the 1-minute timer button
    mod_view
    |> element("[data-test='start-timer-1m']")
    |> render_click()

    # Moderator should receive the timer_update push event
    assert_push_event(mod_view, "timer_update", %{
      timer_running: true,
      remaining_seconds: 60
    })

    # Participant should receive the timer_update push event
    assert_push_event(participant_view, "timer_update", %{
      timer_running: true,
      remaining_seconds: 60
    })

    # Both users should see the group-timer element in the DOM
    assert has_element?(mod_view, "[data-test='group-timer']")
    assert has_element?(participant_view, "[data-test='group-timer']")
  end
end
```
Conclusion
We’re done implementing the distributed timer! In order to maintain state when there are no connected LiveViews, we stored the timer state in the database. In order to keep each client synchronized with the timer state in the database, we used Phoenix PubSub and server-pushed events. We used a client-side hook to keep the timer’s DOM element up to date using data from the server-pushed events. We even wrote a test verifying timer state is broadcast correctly to all clients.
We got some great leverage from our tools when writing this timer. We were able to push low-latency updates to a group of clients using Phoenix PubSub. Without it, we would most likely need a queue and its attendant complexity. We were also able to write a simple, fast, and reliable test verifying that a DOM interaction triggers the right broadcast to all connected clients. A test like that would be complex, slow, and unreliable with almost any other framework.
This timer moves away from the “classic” LiveView render loop, where state is updated on the server, and the server sends DOM updates to the client. As José Valim points out on the Dashbit blog, LiveView applications are not constrained to directly deriving the DOM state from LiveView assigns. Instead, LiveView provides the tools to store and communicate state in a way that matches user needs.