Gift list dev diary: backend persistence

All development for this project is shared on GitHub. I will endeavor to focus on the most relevant snippets of code rather than go over all of them on this blog; for interested parties, the GitHub repo is available to survey all of the minor details.

In this post we’ll use postgres to add a persistence layer to our web server, update our user resolvers to interact with postgres, and add transit handlers so we can send dates and timestamps over the network. The git commit for this post’s work contains more than I’ll cover here, including the configuration necessary to host postgres in docker, database migration tooling using flyway, and some postgres configuration for handling dates and timestamps.

As usual, this implementation requires a few technology choices. We’ll use postgres as our database; I think postgres is an excellent general-purpose database, and it’s freely available. For a jdbc wrapper we’ll use Sean Corfield’s next.jdbc. For a connection pool, we’ll use hikari-cp. Lastly, we’ll use honeysql to build SQL queries from data.

Setting up postgres

Before making any postgres queries, we’ll create a namespace that manages the connection pool and has some utility functions to help us make queries.

(ns rocks.mygiftlist.db
  (:require [rocks.mygiftlist.config :as config]
            [mount.core :refer [defstate]]
            [hikari-cp.core :as pool]
            [next.jdbc :as jdbc]
            [next.jdbc.result-set :as result-set]
            [next.jdbc.prepare :as p]
            [clojure.string :as str]
            [honeysql.core :as sql]))

(def datasource-options
  (merge {:auto-commit        true
          :read-only          false
          :connection-timeout 30000
          :validation-timeout 5000
          :idle-timeout       600000
          :max-lifetime       1800000
          :minimum-idle       10
          :maximum-pool-size  10
          :pool-name          "db-pool"
          :adapter            "postgresql"
          :register-mbeans    false}
         ;; Connection settings (host, database name, username,
         ;; password) come from the config namespace; the exact var
         ;; name here is illustrative.
         (config/database-options)))

(defstate pool
  :start (pool/make-datasource datasource-options)
  :stop (pool/close-datasource pool))

(defn- qualify
  "Given a kebab-case database table name, returns the namespace that
  attributes coming from that table should have."
  [table]
  (when (seq table)
    (str "rocks.mygiftlist.type." table)))

(defn- snake->kebab [s]
  (str/replace s #"_" "-"))

(defn- as-qualified-kebab-maps [rs opts]
  (result-set/as-modified-maps rs
    (assoc opts
      :qualifier-fn (comp qualify snake->kebab)
      :label-fn snake->kebab)))

(def ^:private query-opts {:builder-fn as-qualified-kebab-maps})
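To see what this naming scheme does, here’s the composed qualifier function applied at the REPL (illustrative only; the fns are private to this namespace):

```clojure
;; The qualifier applied to a table name:
((comp qualify snake->kebab) "user")
;; => "rocks.mygiftlist.type.user"

;; So a :created_at column from the "user" table comes back keyed as
;; :rocks.mygiftlist.type.user/created-at
```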

(defn execute! [conn sql-map]
  (jdbc/execute! conn
    (sql/format sql-map :quoting :ansi)
    query-opts))

(defn execute-one! [conn sql-map]
  (jdbc/execute-one! conn
    (sql/format sql-map :quoting :ansi)
    query-opts))

We created a pool mount component that manages the connection pool, as well as execute! and execute-one! query wrapper functions. Both take a database connection and a honeysql query map, compile and execute the given query, and namespace the keywords in the results so that they match the rocks.mygiftlist.type.* namespaces we use for our attributes. There are also some verbose protocol implementations we’ve omitted that automatically coerce java.sql.Date and java.sql.Timestamp values into java.time.LocalDate and java.time.Instant values, respectively.
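For reference, those omitted coercions can be done by extending next.jdbc’s ReadableColumn protocol; a minimal sketch:

```clojure
;; A sketch of the omitted coercions, via next.jdbc's ReadableColumn
;; protocol: any java.sql.Date or java.sql.Timestamp read from a
;; result set is converted on the way out.
(extend-protocol result-set/ReadableColumn
  java.sql.Date
  (read-column-by-label [^java.sql.Date v _]
    (.toLocalDate v))
  (read-column-by-index [^java.sql.Date v _2 _3]
    (.toLocalDate v))

  java.sql.Timestamp
  (read-column-by-label [^java.sql.Timestamp v _]
    (.toInstant v))
  (read-column-by-index [^java.sql.Timestamp v _2 _3]
    (.toInstant v)))
```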

In order to make database queries from our resolvers, we’ll add the connection pool into our parser environment so resolvers can pull it out to use it. Because we’re using mount, we could also have referred to the pool var, but it’s good practice to explicitly pass in the dependency. To do that, we need to update the env-wrap-plugin in our parser definition.

  (fn [env]
    ;; Here is where you can dynamically add
    ;; things to the resolver/mutation
    ;; environment, like the server config,
    ;; database connections, etc.
    (assoc env
      ::db/pool db/pool)))
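For context, env-wrap-plugin lives in com.wsscode.pathom.core; the function above slots into the parser’s plugin list roughly like this (a sketch assuming a p alias for that namespace; the rest of the parser configuration is elided):

```clojure
;; Sketch of where the env-wrap fn fits; the rest of the parser
;; configuration (connect plugin, readers, error handling) is elided.
(def parser
  (p/parser
    {::p/plugins
     [(p/env-wrap-plugin
        (fn [env]
          (assoc env ::db/pool db/pool)))
      ;; ...other plugins (pathom connect, error handling, etc.)
      ]}))
```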

User resolvers

With that in place, we’re ready to implement proper versions of our placeholder user resolvers from the previous post.

(ns rocks.mygiftlist.model.user
  (:require
    [com.wsscode.pathom.connect :as pc :refer [defresolver defmutation]]
    [rocks.mygiftlist.db :as db]
    [rocks.mygiftlist.type.user :as user]))

(defresolver user-by-id
  [{::db/keys [pool]} {::user/keys [id]}]
  {::pc/input #{::user/id}
   ::pc/output [::user/id ::user/email ::user/auth0-id ::user/created-at]}
  (db/execute-one! pool
    {:select [:id :email :auth0_id :created_at]
     :from [:user]
     :where [:= :id id]}))

(defn- assign-tempid [{::user/keys [id] :as user} tempid]
  (assoc user :tempids {tempid id}))

(defmutation insert-user
  [{::db/keys [pool]} {::user/keys [id auth0-id email] :as user}]
  {::pc/params #{::user/email ::user/auth0-id}
   ::pc/output [::user/id]}
  (cond-> (db/execute-one! pool
            {:insert-into :user
             :values [{:auth0_id auth0-id
                       :email email}]
             :upsert {:on-conflict [:auth0_id]
                      :do-update-set [:email]}
             :returning [:id]})
    id (assign-tempid id)))

The user-by-id resolver executes a SELECT statement that fetches the row from the user table that has the given id, and that’s all. The insert-user mutation requires more explanation. Our user table schema does not use the auth0 id as its primary key; we maintain our own unique IDs for users to prevent auth0 ids from dispersing into all corners of our system. When we insert a user, we shouldn’t add a new user if one already exists with the given auth0 id, so we do an upsert on that unique constraint. A consequence of this flow is that even though user ids are uuids, we cannot generate them in the client.
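To make the upsert concrete, here’s roughly what that honeysql map compiles down to, with made-up values (the :upsert and :returning clauses come from the honeysql-postgres extensions; exact formatting may differ):

```clojure
(sql/format
  {:insert-into :user
   :values [{:auth0_id "auth0|abc123", :email "jane@example.com"}]
   :upsert {:on-conflict [:auth0_id]
            :do-update-set [:email]}
   :returning [:id]}
  :quoting :ansi)
;; Produces a parameterized query along the lines of:
;; INSERT INTO "user" ("auth0_id", "email") VALUES (?, ?)
;;   ON CONFLICT ("auth0_id") DO UPDATE SET "email" = EXCLUDED."email"
;;   RETURNING "id"
```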

Fulcro allows optimistic updates which can be queued and executed serially with the use of temporary ids. With this implementation, we can pass a temporary id for a user from the client, and our parser will return a mapping from this temporary id to the real id returned by our database query.
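Concretely, if the client sends a tempid, the mutation’s return value ends up shaped like this (the uuid is made up for illustration):

```clojure
;; Illustrative return value of insert-user when the client sent a
;; tempid; fulcro rewrites its client db using the :tempids map.
{::user/id #uuid "5c1e7e1a-0000-0000-0000-000000000000"
 :tempids  {tempid-from-client
            #uuid "5c1e7e1a-0000-0000-0000-000000000000"}}
```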

Let’s take a brief pause from the parser implementation to consider how users will actually get inserted into our database. When a user signs up, they do so on auth0’s site. By default, no user is inserted into our database as part of that signup process. There are two main ways to persist that new user into our system.

First, auth0 has a feature called hooks that would allow us to execute some custom javascript after a user registers. We could hit an endpoint in that hook which creates the user in our database. I’m not a big fan of this approach for a few reasons. In order for the hook code to reach our local environment, we would need to keep a tool like ngrok running to expose a publicly accessible endpoint on our machine. This hook only runs once. If we recreate our local database, we’d have to manually seed any users that had already registered into the local database. Perhaps most importantly, network failures are inevitable. We would need a way to handle a failure to insert a newly registered user into our database.

My preferred solution to this problem is to upsert the current user after they log in as part of page load. This works the same locally as it does in production. If the insert fails once, it’s no problem. It will be attempted again as soon as the user refreshes the page. We can recreate our local database without fear. When we log in, our user will be inserted into the database again. In addition, we can do this insertion using a temporary id in an optimistic update so it doesn’t block any interaction on the page.

Let’s see what the frontend’s set-current-user mutation looks like once it’s updated to insert the current user into the database.

(ns rocks.mygiftlist.model.user
  (:require
    [rocks.mygiftlist.type.user :as user]
    [edn-query-language.core :as eql]
    [com.fulcrologic.fulcro.algorithms.normalized-state :refer [swap!->]]
    [com.fulcrologic.fulcro.mutations :as m :refer [defmutation]]))

(defmutation set-current-user [{::user/keys [id auth0-id email]
                                :as user}]
  (action [{:keys [state]}]
    (swap!-> state
      (assoc-in [:component/id :current-user] [::user/id id])
      (assoc-in [::user/id id] user)))
  (remote [_]
    (eql/query->ast1 `[(insert-user
                         #::user{:id ~id
                                 :auth0-id ~auth0-id
                                 :email ~email})])))

Note that we’re using query->ast1 to rewrite the transaction that gets sent to our remote to use the insert-user mutation we defined above, as documented in the fulcro book here. We call this mutation by passing it a tempid and the user’s auth0 id and email.

(comp/transact! SPA [(m.user/set-current-user
                       #::user{:id (tempid/tempid)
                               :auth0-id auth0-id
                               :email email})])

Transit handlers

If we were to query for the created-at attribute of a user with the code we’ve gone over so far, the request would fail. Our parser gets back a java.time.Instant value from the database, but transit doesn’t offer readers and writers for instants out of the box. The implementation is only tangentially related to fulcro and pathom, but we’ll discuss it anyway since it’s tripped me up in the past.

In order for transit to serialize java.time.Instant values, we need to provide a write handler, which is responsible for serializing instants into strings, and a read handler, which is responsible for deserializing strings into instants. Instants will be encoded as “tagged values”, essentially a (tag, value) tuple where the tag indicates to the reader how the value should be deserialized.

There is an existing tag “m” for instants that by default uses javascript Date objects in clojurescript and java.util.Date objects in clojure. I’m fine with using javascript Date objects, but I’d like to use java.time.Instant objects in place of the deprecated java.util.Date ones. There are existing read and write handlers for javascript Dates that serialize and deserialize to the number of milliseconds since epoch; we should match that format in our implementation for instants.

Here’s what that looks like.

(ns rocks.mygiftlist.transit
  (:require [cognitect.transit :as t])
  #?(:clj (:import [java.time Instant])))

(def write-handlers
  #?(:clj {Instant (t/write-handler "m"
                     (fn [^Instant t] (.toEpochMilli t))
                     (fn [^Instant t] (str (.toEpochMilli t))))}
     :cljs {}))

(def read-handlers
  #?(:clj {"m" (t/read-handler
                 ;; Depending on where the value appears, the rep may
                 ;; arrive as a string or a number.
                 (fn [s]
                   (Instant/ofEpochMilli
                     (if (string? s) (Long/parseLong s) s))))}
     :cljs {}))

Even though we’re only defining additional clojure transit handlers, we’ve defined them in a cljc file. Since clojure and clojurescript handlers are highly related, it seems reasonable to colocate any future handlers in the same file.

Breaking this down, we’re passing t/write-handler the tag which should be used, a function that returns a data representation of the value, and a function that returns a string representation of the value. The transit writer determines which write handler to use based on the type of the value, so write-handlers is keyed by class.

The transit reader has the benefit of using tags to determine which read handler to use, so read-handlers is keyed by tag. We only need to pass a deserialization function to t/read-handler.
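As a quick sanity check of both handlers, you can round-trip an instant through transit at a JVM REPL (a sketch; transit-clj works over streams):

```clojure
(require '[cognitect.transit :as t]
         '[rocks.mygiftlist.transit :as transit])
(import '(java.io ByteArrayInputStream ByteArrayOutputStream)
        '(java.time Instant))

;; Write an Instant to a transit-json byte stream, then read it back.
(let [out (ByteArrayOutputStream.)
      writer (t/writer out :json {:handlers transit/write-handlers})
      _ (t/write writer (Instant/parse "2020-01-01T00:00:00Z"))
      in (ByteArrayInputStream. (.toByteArray out))
      reader (t/reader in :json {:handlers transit/read-handlers})]
  (t/read reader))
;; Should come back as a java.time.Instant equal to the one we wrote.
```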

Now that our handlers are written, we need to use them. Even though we’re not adding any clojurescript handlers, we’ll still pass in the empty map of additional handlers for completeness and to avoid potential future confusion. Fulcro offers a hook to pass in additional handlers via the request and response middleware that are passed to a remote.

(ns rocks.mygiftlist.application
  (:require [com.fulcrologic.fulcro.application :as app]
            [com.fulcrologic.fulcro.rendering.keyframe-render2 :as keyframe-render2]
            [com.fulcrologic.fulcro.networking.http-remote :as http-remote]
            [rocks.mygiftlist.transit :as transit]))

(defonce SPA
  (app/fulcro-app
    {:optimized-render! keyframe-render2/render!
     :remotes {:remote (http-remote/fulcro-http-remote
                         {:request-middleware
                          (http-remote/wrap-fulcro-request
                            identity transit/write-handlers)
                          :response-middleware
                          (http-remote/wrap-fulcro-response
                            identity transit/read-handlers)})}}))

Fulcro also offers hooks to pass in additional transit handlers to its wrap-transit-params and wrap-transit-response middlewares. We can pass them in like this:

(def handler
  (-> not-found-handler
    (wrap-api "/api")
    (wrap-transit-params {:opts {:handlers transit/read-handlers}})
    (wrap-transit-response {:opts {:handlers transit/write-handlers}})
    (wrap-defaults (assoc-in site-defaults
                     [:security :anti-forgery] false))))

Let’s at least test that the server-side write handler is working by requesting the created-at value for our user. To do this, we’ll open up a clojurescript repl and type in the following, replacing YOUR_USER_UUID with your user’s id. (You can get your user’s id by running ./scripts/psql and executing the query SELECT * FROM "user";. You should also be able to find it in your app db if you’re using fulcro-inspect.)

rocks.mygiftlist.client> (require '[com.fulcrologic.fulcro.data-fetch :as df])
rocks.mygiftlist.client> (defsc CurrentUser [this _]
                           {:query [::user/id ::user/email
                                    ::user/auth0-id ::user/created-at]
                            :ident ::user/id})
rocks.mygiftlist.client> (df/load! SPA [::user/id #uuid "YOUR_USER_UUID"] CurrentUser)

This should succeed, and you should be able to see a ::user/created-at value at the path [::user/id #uuid "YOUR_USER_UUID" ::user/created-at] in your app db. If you’re not using fulcro-inspect, you can get this at the repl with:

rocks.mygiftlist.client> (-> SPA
                           (app/current-state)
                           (get-in [::user/id
                                    #uuid "YOUR_USER_UUID"
                                    ::user/created-at]))
And there we have it! Our user resolvers are working with a postgres persistence layer, and we can send transit-encoded timestamps over the wire.

Prev: Gift list dev diary: initial backend Next: Gift list dev diary: parser tests