What features don’t we want (yet)?#
There are a number of common features that we aren’t going to implement. They are important parts of many chat applications, and could certainly be added to what we’re building here, but they would detract from the main points.
Chat channels are an incredibly common feature that we won’t be implementing. It is extremely unusual to see a chat server without separate channels for conversations, but in this case the cost of added complexity in the front end is not sufficiently offset by the value of what we would learn implementing the backend aspects.
An even more essential aspect of chat applications that we are leaving out is HTTPS . Using HTTPS is essential for any website that has users, but typically it is set up with a proxy or load balancer that terminates the HTTPS connection and forwards the request to an application that doesn’t handle the HTTPS connection itself. So, leaving it out of our application is more like not completing the setup than actually leaving out a feature. It is something that can and should be added to the chat server in the future.
What features do we want?#
Like the rest of the applications we have built, we are going to simplify our chat server requirements to focus on the most essential aspects. Every chat server has two essential pieces - users and messages .
Beyond that, chat servers usually have some concept of authentication , so that the users know who each other are, and can have some confidence that the person they’re talking to with the same name as a previous person is actually the same person.
A number of chat servers also include chat history , so you can read previously-sent messages, even if you weren’t around to see them when they were sent.
So, we will make sure our server has:
- users
- messages
- authentication
- chat history
How do we want to store our chat history?#
As is common with every web application that wants to store data, we need a database. But there are many kinds of databases which differ based on the storage needs of an application. The main database types fall within relational and non-relational categories, also sometimes known as SQL and NoSQL.
The primary requirements for our database are:
- retrieving data in an ordered fashion
- retrieving data that relates to other data we are storing
So we are going to opt for a relational database.
We could have chosen some variety of NoSQL database, but not all of them meet both our requirements. Key-value stores, for example, often don’t allow one to access data in an ordered fashion. Their main purpose is to provide specific pieces of data as fast as possible, rather than letting you search for a collection of data that meets certain criteria.
Document stores, another type of NoSQL database (the most popular of which is probably MongoDB) do allow for the ordered retrieval of documents, but don’t always have the ability to retrieve data relating to other data in the database. We could certainly use a document store for our chat server, but due to the way our data will be stored, a SQL database matches our needs best.
Among SQL databases, we have a number of options: PostgreSQL, MySQL and SQL Server are the most common server variants of SQL, while SQLite is the most common embedded variant. Any of them would be fine choices, but because we are erring on the side of simplicity here, we will use SQLite .
Having an embedded database will make the setup easier. You will not need to set up a database server, or an admin account, or ensure that there is a user with the proper privileges available. You only need to know what file you want SQLite to use.
Now that we know what database we will be using, we can talk about how we will be using it.
Object-relational mapping (ORM)#
Object-relational mappings, or ORMs, are an incredibly useful tool in the web development kit. They create a common interface to interact with any flavor of SQL, and set up a mapping between data objects as they exist in the program and data objects as they exist in the database. One could still use SQL to do this mapping, and many ORMs even have a RAW SQL concept, allowing the user to write a SQL query that will be directly executed on the database, rather than generated from the ORM constructs. But more typically that option is only used for more complicated SQL queries, and most of the common cases are very well covered by the ORM.
ORMs are also useful for other features they offer for the data storage lifecycle. You can generally add hooks to various points when changing data, for example:
- you could set up some code to run directly after a user has been created to complete other parts of your user setup flow
- you could run a check before a user is saved to ensure that the data being added passes validation that is outside of the database
Hooks around lifecycle events are a great reason to use an ORM. In this chapter, we will be using GORM. It is one of the most popular Go ORMs, with all of the features we need. It’s also relatively straightforward and easy to work with.
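To make the hook idea concrete, here is a minimal sketch of a GORM BeforeCreate hook, assuming a User model like the one we define below (this is illustrative only, and we won’t actually add it to our server; it also needs the standard library errors package):

func (u *User) BeforeCreate(tx *gorm.DB) error {
    // Runs just before GORM inserts the row, which makes it a convenient place
    // for validation that lives outside of the database.
    if u.Name == "" {
        return errors.New("user name must not be empty")
    }
    return nil
}

Returning an error from a hook aborts the operation, which is how the validation case in the list above would be enforced.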
Although there are a large number of solid Go ORMs, another very popular one that deserves a specific mention is Beego. It’s definitely worth taking a look at if you find that GORM is missing a feature you particularly like, or if you just want a slightly different take on what an ORM written in Go can look like.
Writing our database code#
Now that we have our general feature set and know what we’re building, it’s time to put together most of our database-related code.
Start with a database.go
file.
The first thing we need is a general Config
struct. It will store every piece of state that we will need in our handlers. The reason for setting up this struct is to avoid needing to have global state. For the moment, the only thing we need to store is a database connection, making our Config
struct rather simple:
type Config struct {
DB *gorm.DB
}
As mentioned, we will be using GORM for our ORM, so the database connection is a *gorm.DB
, and we need to add gorm.io/gorm
to our imports at the top of the file.
Our user model#
Moving on, we need to define our User
struct. Because GORM will use this struct to create the database table that users will be stored in, we want this definition to include everything that the table does, including relationships to other tables. In this case, the only relationship will be to Message
, because all messages are sent by users, so a User
connects to every Message
.
type User struct {
gorm.Model
Name string `gorm:"uniqueIndex"`
Password string
Messages []Message `gorm:"foreignKey:UserName"`
}
Let’s go through the user model. As mentioned, it has a foreign key relationship to Message
(a struct we will write next). It also uses gorm.Model
, which automatically includes an ID, and CreatedAt, UpdatedAt, and DeletedAt fields. The ID is a uint
, DeletedAt is a sql.NullTime
, and the remaining fields are time.Time
. GORM explicitly shows the definition of gorm.Model
in their documentation.
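For reference, in GORM v2 that definition looks roughly like this (gorm.DeletedAt is a type built on sql.NullTime; check the GORM documentation for the authoritative version):

type Model struct {
    ID        uint `gorm:"primarykey"`
    CreatedAt time.Time
    UpdatedAt time.Time
    DeletedAt gorm.DeletedAt `gorm:"index"`
}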
We are adding a gorm:"uniqueIndex"
on the Name
field. That way, we ensure all names are unique and we won’t run into any user name confusion in our chat app.
It also includes Messages
, which is a slice of Message
. GORM will recognize this as a foreign key relationship, as defined in the has many section of their documentation. By default it will use a UserID
field included in the Message
struct. However, because it is important to know who sends any particular message, and not just their ID, we can set the foreign key to be the UserName
instead.
An aside on the foreign key decision#
Having a string foreign key is uncommon because of the size of the index that is created for it. Foreign keys are typically numbers, which keeps the index smaller because the key itself is smaller; strings vary in size and are frequently larger than integers.
However, this decision is convenient because our Message
model will be able to have all the necessary information to display a message within itself, so we won’t need to do a SQL JOIN in order to retrieve the contents of the message. That could be nice for a potential future where the database is sharded and JOINs become much more complicated.
A compromise between these two options could be using a numeric foreign key but including a UserName
field on the message that could be updated whenever a user changes their name. That way, the index for the foreign key would be small, but the JOIN would still be unnecessary.
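As a sketch of that compromise (not something we will build), the message model might look like this, with a hypothetical numeric UserID as the real foreign key and UserName kept in sync by application code:

type CompromiseMessage struct {
    ID       uint
    UserID   uint   `gorm:"index"` // small, numeric foreign key to the users table
    UserName string // denormalized copy, updated whenever a user renames
    Text     string
}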
In our case, we will use the string for simplicity, and because renaming users isn’t a feature we’ll be adding, we don’t have to worry about that case.
Our message model#
Our Message
is relatively simple. We only need an ID, the UserName
we specified for our foreign key relationship with User
, the text of the Message, and the time it was CreatedAt
. We won’t allow users to update messages, which means we won’t need UpdatedAt
or DeletedAt
.
type Message struct {
ID uint `json:"-"`
UserName string `json:"user_name"`
Text string `json:"text"`
CreatedAt int64 `json:"created_at" gorm:"index,autoCreateTime:milli"`
}
In this model we have also specified how we are going to transfer messages as data. Our user model never needs to be sent between the client and server, but messages do. Their ID never needs to be sent, so we can tell the JSON encoder to ignore it.
The other notable aspects are with the message’s CreatedAt
field. We need to index it, because the time at which a message is sent determines not only where it is displayed, but also the order in which it will be retrieved. But we don’t want to store the time as a standard Go time.Time
display string; we want a timestamp, which GORM gives us if we use the int64
type. We can then add the precision we want, in this case milliseconds, with autoCreateTime:milli
.
We will be using the CreatedAt
date heavily to fetch both newer and older messages in an ordered manner, so this indexing and changing to a timestamp are essential.
Database setup and connection#
Now that we have our models, we can set up our database and make a connection to it. Our database connection only needs to do just that - connect to the database. We can get a connection which we put in our Config
model. From there, we can ensure that the database is set up. GORM has a convenient method called AutoMigrate
, which given a model will set up a database table that matches the model. It will also add fields if we specify new ones, but won’t remove old ones or update fields that already exist.
With that, we can create our database connection and ensure that our database is properly set up.
func DBConnect(dsn string) (*gorm.DB, error) {
return gorm.Open(sqlite.Open(dsn), &gorm.Config{
Logger: logger.Default.LogMode(logger.Silent),
})
}
func (c *Config) EnsureDBSetup() error {
err := c.DB.AutoMigrate(&User{})
if err != nil {
return err
}
return c.DB.AutoMigrate(&Message{})
}
In this case, I’ve decided to silence the logs so we don’t see any log lines that include users’ passwords. We will need three imports in total: the SQLite driver, GORM itself, and GORM’s logger:
"gorm.io/driver/sqlite"
"gorm.io/gorm"
"gorm.io/gorm/logger"
Properly saving the user model#
Before we get to writing queries that will save our user model, we need to decide how we will store it. Because the user has a password, we can’t simply save it to the database. Plain text passwords are a major security issue waiting to happen.
In order to make sure that we aren’t making that mistake, we will hash and salt the user’s password before storing it. Very conveniently, Go has an /x/
package for the bcrypt hashing algorithm, which is commonly used for storing password hashes. Even better, the method for generating the hashed password also salts the password so we don’t need to handle that separately. For more information on password hashing and salting, see the Wikipedia article about it.
Because we will be doing more cryptography than just the password hashing, let’s create a file called crypto.go
for all our cryptography-related functions.
In crypto.go
, we can import the bcrypt package, "golang.org/x/crypto/bcrypt"
, and write a function that will help us create the password hash and later check the password hash when the user logs in.
func CreatePassword(password []byte) ([]byte, error) {
hashedPassword, err := bcrypt.GenerateFromPassword([]byte(password), bcrypt.DefaultCost)
return hashedPassword, err
}
func CheckPassword(hashedPassword, password []byte) error {
return bcrypt.CompareHashAndPassword(hashedPassword, password)
}
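To see how the two helpers fit together, a small test like the following exercises both of them (a sketch; the test name and a crypto_test.go file are our own choices, and it needs the standard testing package):

func TestPasswordRoundTrip(t *testing.T) {
    hash, err := CreatePassword([]byte("s3cret"))
    if err != nil {
        t.Fatal(err)
    }
    // The stored hash should verify against the original password...
    if err := CheckPassword(hash, []byte("s3cret")); err != nil {
        t.Errorf("expected matching password to verify, got %v", err)
    }
    // ...and fail against anything else.
    if err := CheckPassword(hash, []byte("wrong")); err == nil {
        t.Error("expected mismatched password to be rejected")
    }
}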
These two methods will allow us to create users and allow them to log in. So let’s go back to database.go
and write the functions that will carry out those actions in the database. These functions will need a database connection, which we have stored in Config
, so we can make them use the struct like we did for EnsureDBSetup
.
First, CreateUser
.
func (c *Config) CreateUser(name, password string) (*User, error) {
hashedPassword, err := CreatePassword([]byte(password))
if err != nil {
return nil, err
}
user := User{Name: name, Password: string(hashedPassword)}
if err := c.DB.Create(&user).Error; err != nil {
return nil, err
}
return &user, nil
}
Given a database connection, user and password, this will store our user in the database.
We first hash our password to properly store it, then we create a User
.
The next part is made extremely easy by GORM. We pass our User
into the Create
method and GORM handles storing it, including the generation of an ID for the user. By default it will use an auto-incrementing number for all user IDs.
We also need to check the error returned from the database when we create the user. As is common in Go, if we see an error we return that to the caller, otherwise we can return our created user.
Checking our login is a fair bit easier. All we need to do is retrieve our user’s password and call our CheckPassword
function on that, returning the result.
func (c *Config) CheckLogin(name, password string) error {
var user User
if err := c.DB.Where("name = ?", name).First(&user).Error; err != nil {
return err
}
return CheckPassword([]byte(user.Password), []byte(password))
}
Once again, using GORM gives us a convenient method to make a simple query for looking up our user by their name. Because we specified that user names are unique, we don’t have to worry about having more than one result, so we can call First
on our query to retrieve the user data.
One more thing while we’re here - we need to be able to create messages. Because there is no special processing here, the result is a simpler version of our CreateUser
function.
func (c *Config) CreateMessage(userName string, text string) (*Message, error) {
message := Message{UserName: userName, Text: text}
if err := c.DB.Create(&message).Error; err != nil {
return nil, err
}
return &message, nil
}
For now we are not going to write the function for retrieving messages. Because we need to retrieve them based on the time they were created, it will be more complicated than the functions we’ve written so far. So, let’s cover our API needs before we come back and write this function.
A future problem with auto-incrementing IDs#
If our chat server gets a lot of use, it is going to have a couple of problems in the future. Both problems relate to the fact that the message IDs use an auto-incrementing integer.
The first problem is with the rate of generation of new IDs. If we need to generate a lot because people are sending lots of messages, an auto-incrementing integer can have problems because it requires a single source of truth. We will need to find a way to generate IDs from more than one source, without conflicts from accidentally generating the same ID from two or more different sources.
The second problem deals with the fact that our IDs are integers, specifically uint
, which in some cases may only be as large as uint32
. When this is the case, its maximum size would be 4,294,967,295. Although that is a very large number, given enough time it would be very possible to surpass this with message ID requests.
There are other ways to make IDs, solving both problems discussed here. However, some of them come with their own drawbacks.
One option is a Universally Unique Identifier, or UUID. UUIDs solve the rate problem by being so large that they ensure uniqueness by pure probability. However that largeness has a cost when they need to be represented in a database - it makes indexes larger, and you lose any information that could come from the ID itself, such as newer messages having larger IDs than older messages.
Another option is an algorithm like the one Twitter announced in 2010 called Snowflake. It is essentially a unix timestamp combined with a unique number representing the machine that generated the ID, and finally a counter for however many IDs had been generated during that exact timestamp. The machine ID allowed them to split up the ID generation work and solve the rate problem, and the timestamp aspect means that the IDs still contain useful information, like when the ID was generated.
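To make that layout concrete, a simplified Snowflake-style ID could be assembled like this (a sketch of the general idea, not Twitter’s exact implementation): 41 bits of millisecond timestamp, 10 bits of machine ID, and 12 bits of per-millisecond sequence, packed into a single int64.

func snowflakeStyleID(timestampMillis int64, machineID, sequence uint16) int64 {
    // The timestamp occupies the high bits, so IDs still sort roughly by time.
    return (timestampMillis << 22) |
        (int64(machineID&0x3FF) << 12) | // 10 bits of machine ID
        int64(sequence&0xFFF) // 12 bits of sequence
}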
Luckily, our chat server is unlikely to run into these specific problems, but it’s always good to be aware of potential issues that are the result of scale.
Our API#
Our API is going to be relatively simple. We need to serve our HTML pages, handle login and the creation of users, and send and retrieve messages.
The first part of the API is ensuring that we have authenticated users.
Let’s talk about authentication#
We have the ability to check a user’s password against a hashed and salted version saved in the database. However, we don’t want to do that on every request, because that would require the user to send their password to us every time they took any action. Doing that would either be incredibly cumbersome for the user, or very insecure, as we would have to store their password somewhere to send it on every request.
Instead, we will do what everyone else does, and set up a session for the user using cookies. We have a couple options here.
The first option is to set up session cookies for each user. This essentially gives the user a cookie with a unique ID that we can use to look up their information every time they make a request. This is a very common and sensible way to handle user authentication.
The only drawback is that we have to store that information somewhere, which centralizes the looking up of user information. For example we could create a Session
struct and store it in our database the same way we store users and messages. Then we would simply do a lookup every time a user makes a request.
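For contrast, that first option might look something like this (a sketch we won’t build), with the session token looked up on every request:

type Session struct {
    gorm.Model
    Token    string `gorm:"uniqueIndex"` // random ID handed to the client in a cookie
    UserName string // whose session this is
}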
Or we could go with a second option, using a cryptographically-signed cookie that includes user information in it. The primary benefit of generating this kind of cookie rather than a session token is that when a user makes a request, we don’t need to look up their session information in the database or some other data store. We only need to verify that their signed token is valid, and then we know that the information it contains can be trusted, and we can use that.
Neither of these options is perfect.
For session tokens, you always have to look up the session information from a data store. Even if that data store is designed specifically for that kind of lookup, like the key-value store we wrote, it still involves a network request, and creates an additional point of failure.
Signed cookies containing data, on the other hand, don’t require the additional request nor even the storage of session information. That may seem like a huge win until you consider that all the information in the session has to be stored somewhere, and that somewhere is in the cookie, meaning every request a user makes sends all of that data, regardless of how much or little of it you need. So although there is a somewhat significant advantage, it’s not free.
In order to simplify our setup and not have to store and retrieve sessions, we are going to go with the second option, cryptographically-signed cookies. Specifically, we are going to use JSON Web Tokens, or JWTs. Going into exactly how JWTs work is beyond the scope of this book, but if you want to find out more you should check out jwt.io.
Adding JWTs#
JWTs will require the use of a library to provide the necessary pieces for creating, validating and pulling information from tokens. Luckily, a library has been written to make it easier to use JWTs with chi
: github.com/go-chi/jwtauth
.
The main thing the library needs to work is a struct called JWTAuth
. Conveniently, all the functions we will use are defined on this struct, and we can hold a pointer to it so we can use it in a number of places. For us, this means it’s exactly like our database connection, so we should put it in the same place in database.go
.
type Config struct {
DB *gorm.DB
TokenAuth *jwtauth.JWTAuth
}
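The TokenAuth field has to be populated somewhere during startup. Here is a minimal sketch, assuming an HS256 signing key passed in by the caller (the newConfig helper is our own invention, not something from this chapter’s code):

func newConfig(db *gorm.DB, jwtSecret []byte) *Config {
    return &Config{
        DB: db,
        // jwtauth.New builds the *JWTAuth used to sign and verify our tokens.
        TokenAuth: jwtauth.New("HS256", jwtSecret, nil),
    }
}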
Now that we have that essential piece of information, let’s go back to crypto.go
and define our methods for making the token and retrieving information from it.
The only thing we need to store in the token is the user’s name. Since we won’t be looking up any user information, we don’t need to store their ID or any other information. Making the token will involve calling Encode
on our JWTAuth
that we’ve stored as TokenAuth
in our Config
.
func (c *Config) MakeToken(name string) string {
_, tokenString, _ := c.TokenAuth.Encode(map[string]interface{}{"user_name": name})
return tokenString
}
Because the token is JSON, it accepts any kind of information in a map so long as there are strings for keys. In order to be descriptive, we will store this information as "user_name"
, but we could have used any string for it.
The only remaining piece left to do is retrieve this user name from the token. For this, we actually won’t need TokenAuth
directly, because the jwtauth
library provides a chi
middleware function that will store all the necessary information in the context.Context
which is included in the http.Request
. Unsurprisingly, it also provides a method to retrieve the token data directly from context. Using that, we can write a function to get the user name from the context.
func GetUserNameFromContext(ctx context.Context) string {
_, c, _ := jwtauth.FromContext(ctx)
return c["user_name"].(string)
}
The jwtauth.FromContext
method returns us the token itself (which we don’t actually care about), the token data, and an error in case there was an issue retrieving the data. We are explicitly ignoring the error here, because we will already have validated the token by the time we call this function, so we can be sure it won’t fail if it gets this far.
With those final crypto.go
functions done, we are ready to move on to the server itself.
The server routes#
We have all of the surrounding pieces ready, so now we can make our server and all of the handlers we’re going to need.
The first step is to decide what routes we need our server to have, so let’s enumerate them and their paths.
- GET / - the login page
- POST /login - the login handler
- POST /newuser - the new user creation handler
- GET /chat - the chat page
- GET /api/messages - the get messages route
- POST /api/messages - the send messages route, differentiated from the route above by its use of POST
So in all there will be six routes. We also have to consider which of these need to be behind authentication and which don’t. Luckily that is a pretty simple decision - users will need to be logged in so they can see the chat as well as send and receive messages. The other routes will not. So from the list above, our first three routes don’t require authentication, and the last three do require authentication.
In a new file, server.go
, we can start to set everything up.
package main
import (
// We will have various standard library imports here eventually
"github.com/go-chi/chi"
"github.com/go-chi/chi/middleware"
"github.com/go-chi/jwtauth"
)
func main() {
// We will fill this in later
}
func (c *Config) SetupRoutes() *chi.Mux {
r := chi.NewRouter()
r.Use(middleware.Logger)
r.Get("/", IndexHandler)
r.Post("/newuser", c.NewUserPostHandler)
r.Post("/login", c.LoginPostHandler)
r.Route("/", func(r chi.Router) {
r.Use(jwtauth.Verifier(c.TokenAuth))
r.Use(jwtauth.Authenticator)
r.Get("/chat", APIChatHandler)
r.Route("/api", func(r chi.Router) {
r.Get("/messages", c.APIMessagesHandler)
r.Post("/messages", c.APIMessagesPostHandler)
})
})
return r
}
Most of this should look similar to what we have done before, but there are a couple of new things:
First, we’re calling SetupRoutes
on Config
so we have access to both our JWT library and our database. It also lets us pass that same information into all of our handlers that need the database without having to use a global variable or create a database connection in the handler.
The other new piece is the r.Route
. This is a helper method chi
provides that allows us to group methods that require similar functionality or path prefixes. We are using it twice here, once to set up authentication for all of our routes that need it, and a second time to group both of our messages endpoints, which we are putting under the /api
path prefix.
So this sets up /chat
along with either message route to have authentication, and the message routes’ paths will both be /api/messages
.
Filling in the routes#
We have just defined our list of routes - now it is time to fill them in.
Let’s start with our IndexHandler, which is going to be incredibly simple. All it needs to do is serve our index page.
func IndexHandler(w http.ResponseWriter, r *http.Request) {
http.ServeFile(w, r, "index.html")
}
Just to make things easy, we are going to put our index.html
file in the same place as the rest of the files, because there aren’t that many of them. But if this were a larger application, we would probably put it in a directory specifically for HTML or potentially templates.
The index page itself will also be rather simple. We only need it for two things, creating a user and logging in, so we can have two forms, one pointing to LoginPostHandler
and the other to NewUserPostHandler
. Sticking with the common web practice of having a user confirm their password when creating an account, the primary difference between the two forms will be a “Confirm Password” field for user creation.
<!DOCTYPE html>
<html lang="en">
<head> </head>
<body>
<div style="font-size: 1.2em;">Login</div>
<form method="post" action="/login">
<div>
<label for="user">Name</label>
<input type="text" name="user" />
</div>
<div>
<label for="password">Password</label>
<input type="password" name="password" />
</div>
<div>
<button type="submit">Login</button>
</div>
</form>
<div style="font-size: 1.2em;">Register</div>
<form method="post" action="/newuser">
<div>
<label for="user">Name</label>
<input type="text" name="user" />
</div>
<div>
<label for="password">Password</label>
<input type="password" name="password" />
</div>
<div>
<label for="password">Confirm Password</label>
<input type="password" name="password_confirm" />
</div>
<div>
<button type="submit">Create User</button>
</div>
</form>
</body>
</html>
Again in the interest of simplicity, we won’t be including any CSS files, instead doing minimal styling of the page itself. An obvious improvement would be to actually add styling, but we’ll leave that as an exercise for the reader!
User creation and login#
As we can see from our forms above, we need to receive form results from a POST request. For our new user handler, we need to receive three pieces of information, “user”, “password” and “password_confirm”. We’ll then make sure all of those fields are not empty, compare our password to the confirmation, and then create the user.
After that, we might as well redirect the user to the login page so they can log in with their new credentials.
func (c *Config) NewUserPostHandler(w http.ResponseWriter, r *http.Request) {
r.ParseForm()
userName := r.PostForm.Get("user")
userPassword := r.PostForm.Get("password")
userPasswordConfirm := r.PostForm.Get("password_confirm")
if userName == "" || userPassword == "" {
http.Error(w, "missing user or password", http.StatusBadRequest)
return
}
if userPassword != userPasswordConfirm {
http.Error(w, "passwords do not match", http.StatusBadRequest)
return
}
if _, err := c.CreateUser(userName, userPassword); err != nil {
log.Printf("database query error: %q\n", err)
w.WriteHeader(http.StatusBadRequest)
return
}
http.Redirect(w, r, "/", http.StatusSeeOther)
}
As mentioned before, because we need to save the user to the database, we need to have access to the Config
struct which has our database connection in it. Beyond that, we’re merely getting the POSTed form data with r.ParseForm
, doing our validations, and creating the user.
One thing to note though is that if creating a user fails, there are a number of reasons it might have failed: we might no longer have a connection to the database, or the requester may have attempted to create a user that already exists. Both cases would return an error to us, but there is no standardized way to determine which error is which. So in this case, we will tell the user they made a bad request, but still log the error so we will know what happened. That way we can diagnose any issues that arise with this handler.
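If we did want to report a friendlier error for the duplicate-name case, one pragmatic (and admittedly fragile) option with SQLite is to inspect the error text, since there is no standardized error value to check. A sketch of how the error branch above could be adapted, assuming the strings package is imported:

if _, err := c.CreateUser(userName, userPassword); err != nil {
    // SQLite reports unique-index violations with this phrase in the message.
    if strings.Contains(err.Error(), "UNIQUE constraint failed") {
        http.Error(w, "a user with that name already exists", http.StatusConflict)
        return
    }
    log.Printf("database query error: %q\n", err)
    w.WriteHeader(http.StatusBadRequest) // unchanged from the handler above
    return
}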
Moving on to the login handler, it’s going to be very similar. The only difference is we won’t be checking for the password confirmation, but we will be looking up the user and comparing the given password with our stored hash. Also, once we confirm the user and password match, we will be generating a JWT cookie for the user for future requests.
func (c *Config) LoginPostHandler(w http.ResponseWriter, r *http.Request) {
r.ParseForm()
userName := r.PostForm.Get("user")
userPassword := r.PostForm.Get("password")
if userName == "" || userPassword == "" {
http.Error(w, "missing user or password", http.StatusBadRequest)
return
}
if err := c.CheckLogin(userName, userPassword); err != nil {
http.Error(w, "login unsuccessful", http.StatusBadRequest)
return
}
token := c.MakeToken(userName)
http.SetCookie(w, &http.Cookie{
Name: "jwt",
Value: token,
})
http.Redirect(w, r, "/chat", http.StatusSeeOther)
}
We’ll make good use of our CheckLogin
and MakeToken
functions! If there aren’t any errors with login, the user is redirected to our /chat
endpoint.
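One small improvement worth considering here: the http.Cookie struct supports hardening attributes that keep the token away from JavaScript and limit cross-site sends. A hedged variant of the cookie above might look like this (Secure should also be set once HTTPS is terminated in front of the server, as discussed at the start of the chapter):

http.SetCookie(w, &http.Cookie{
    Name:     "jwt", // the name jwtauth looks for when verifying requests
    Value:    token,
    Path:     "/",
    HttpOnly: true, // not readable from JavaScript
    SameSite: http.SameSiteLaxMode,
})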
Getting messages#
Before we can write our chat frontend, we need to write the methods for retrieving messages. But first we should take a moment and think about what kinds of actions are typical for a chat application, and what kinds of queries we will need for those actions.
By our estimates, there are really only three things that a chat application does in its simplified form.
- retrieve the last few messages whenever a user enters the chat, so they can see what was recently said
- deliver any new messages that are sent while the user is in the chat
- allow the user to look at older messages from the past
Conveniently, the first and last of these are actually the same. Getting older messages (say any message earlier than a specific time) is the same thing as getting recent messages. The only difference is the given time for recent messages is “now”, rather than a specific time in the past.
Additionally, getting messages which have occurred while a user is in the chat is actually just the opposite of getting older messages - you are retrieving any messages newer than the last one you received.
So we can write two relatively simple queries, each of which are somewhat the opposite of each other, in order to get any messages we need.
The messages query#
Now that we’ve thought about our query needs, let’s go back to database.go
and add our final method. Because our queries will be the same but reversed, we can write a function that does both, just differing by the parameter we give it.
That can sometimes make the intent of the function unclear, especially if that parameter is a poorly-named boolean. To prevent that, we will use Go’s iota
to create a named type that will give us a bit more clarity as to what is happening. We can call it ByTime
, and it will determine if a query should look for newer
messages or older
messages.
type ByTime int
const (
newer ByTime = iota
older
)
Now we can write our last query. This will need to retrieve a fixed number of messages which are older or newer than a given date . So we can pass in the limit we want, the time we want to work from, and our newer
or older
parameter.
func (c *Config) GetMessagesByTime(limit int, t time.Time, b ByTime) ([]Message, error) {
var whereClause = "created_at > ?"
var orderClause = "created_at ASC"
if b == older {
whereClause = "created_at < ?"
orderClause = "created_at DESC"
}
query := c.DB.Table("messages").
Where(whereClause, t).
Order(orderClause).
Limit(limit)
var messages []Message
if err := query.Find(&messages).Error; err != nil {
return nil, err
}
return messages, nil
}
GORM gives us access to WHERE
, ORDER
and LIMIT
clauses within SQL. Whenever we are looking for newer
messages, we make sure the message created_at
is greater than the time we give, and that we sort messages in ascending order ( ASC
). ASC
is the default, but it doesn’t hurt to be explicit.
However, if we want older messages, we need to flip our created_at
comparison to “less than” and our order to descending ( DESC
). That last part is particularly important, otherwise we would always end up fetching the oldest messages, regardless of the created_at
date that we gave.
Because we’ve just set two different orders for messages, we’ll need to remember to take that into account when we’re displaying them later.
Message handlers#
We already wrote our method for saving messages to the database, CreateMessage
, so we can now write our message handlers. We’ve also already given the functions names, APIMessagesHandler
and APIMessagesPostHandler
, so now we just need to fill them in.
We’ll start with APIMessagesPostHandler
which is relatively simple. It needs to accept the text from a message and doesn’t require anything else. We will have the user name available from the authentication context, and we can trust it because our auth middleware has already validated the token before our handler gets called. The timestamp is added by our CreateMessage
function, so we don’t need to think about it.
func (c *Config) APIMessagesPostHandler(w http.ResponseWriter, r *http.Request) {
var message Message
var redirect bool
if r.Header.Get("Content-type") == "application/json" {
err := json.NewDecoder(r.Body).Decode(&message)
if err != nil {
http.Error(w, "malformed request body", http.StatusBadRequest)
return
}
} else {
r.ParseForm()
messageText := r.PostForm.Get("message")
if messageText == "" {
http.Error(w, "malformed request body", http.StatusBadRequest)
return
}
message.Text = messageText
redirect = true
}
userName := GetUserNameFromContext(r.Context())
_, err := c.CreateMessage(userName, message.Text)
if err != nil {
log.Printf("db create message failure: %q", err)
w.WriteHeader(http.StatusInternalServerError)
return
}
if redirect {
http.Redirect(w, r, "/chat", http.StatusSeeOther)
} else {
w.WriteHeader(http.StatusCreated)
}
}
So in general, we receive the message text in the application/json
format, add the user name to it, and create the message. If it fails, we log it and let the user know. If it succeeds, we return a 201.
However, we are also allowing for the possibility that the user has JavaScript disabled and wants to be able to submit a message. Our overall design won’t allow for a user to have JavaScript disabled, but this endpoint is trying to show how we might go about handling that use case. If we don’t receive a message with the application/json
content type, we try to process it as a normal HTML form POST request, which requires us to redirect the user back to the chat if their message creation succeeds.
As an aside, instead of returning a 201, we could return the created message to the user and let them immediately display their own message in the chat. However, this adds additional complexity to the frontend. That’s because, in the time the user sent the message, another message may have been sent just before, but hasn’t been reflected in the chat logs. If the user immediately sees their own message, they would either see the other user’s message out of order, or potentially not at all if the chat is only fetching newer messages, like ours will.
So instead, we will simply tell the user that we have created their message, and let them eventually fetch it. It’s not ideal, as the user might have to wait a little bit to see what they sent. But that way they’re guaranteed to see all messages in the proper order.
Polling#
Before we can write our APIMessagesHandler
function, we need to determine how our messages will be retrieved. Chat servers have a few options when it comes to the retrieval of messages. One way is for the chat server to have a persistent connection to the client, typically via a websocket, and when a message comes in it is sent to all clients on their respective websockets. This method of getting messages is also known as pub/sub (publish/subscribe) messaging. The main issue with it is that it requires a persistent connection and a method of managing those connections for all the users.
Another option for the chat server is to do polling. With this method, all clients regularly request messages, and as soon as they receive a response that either contains a message or indicates that the poll has timed out, they request messages again. The main drawback of polling is that every user makes requests regularly, rather than relying on a persistent connection. This can increase load on the chat server, as it has to handle many more requests overall than it would using pub/sub.
Polling is simpler to set up, though it can cause more complications at scale. However it is still a fine choice here.
So we can implement our APIMessagesHandler
now that we know what it needs to do: fetch messages in a manner that works with polling .
To better support polling, we can try something interesting in our endpoint. Instead of simply querying to see if we have new messages and then always returning to the user, we can retry our query if we don’t find any new messages. That way, a user will make fewer requests, as they will only receive a response when there are new messages, or after a number of attempted message queries that returned no results.
We will also be leveraging our GetMessagesByTime
, which needs to pull either earlier or later messages, so our handler should accept a parameter that determines that. The before
and since
parameters will match up to our newer
and older
types, and the parameters can take a simple timestamp, so all we need to do is convert from a string to an int64
, like our message’s CreatedAt
field.
There is also a case where we won’t be sending either parameter, and that is when the client initially connects and fetches messages - it won’t have a time to compare with. In that case we will default to the current time, and do the same thing as if we were fetching older messages, because all messages are in the past anyway!
With all that in mind, here we go.
func (c *Config) APIMessagesHandler(w http.ResponseWriter, r *http.Request) {
var (
err error
timeParam string
parsedTime int64 = time.Now().UnixNano() / 1e6
order ByTime = older
attempts int = 1
)
r.ParseForm()
timeParam = r.Form.Get("before")
since := r.Form.Get("since")
if since != "" {
attempts = 10
order = newer
timeParam = since
}
if timeParam != "" {
if parsedTime, err = strconv.ParseInt(timeParam, 10, 64); err != nil {
http.Error(w, "invalid 'before' or 'since' parameter sent", http.StatusBadRequest)
return
}
}
var messages []Message
for i := 0; i < attempts; i++ {
messages, err = c.GetMessagesByTime(10, parsedTime, order)
if err != nil {
log.Printf("db failed to get messages: %q", err)
w.WriteHeader(http.StatusInternalServerError)
return
}
if len(messages) != 0 {
break
}
time.Sleep(1 * time.Second)
}
w.Header().Set("Content-Type", "application/json; charset=utf-8")
body, _ := json.Marshal(messages)
w.Write(body)
}
We’re setting up a lot at the start of this function, but all of it will be used soon after to set up the necessary message retrieval.
A quick note on the parsedTime
default: Go has both time.Now().Unix()
for a timestamp with second precision, and time.Now().UnixNano()
for a timestamp with nanosecond precision. Because the Go version we are using has no time.Now().UnixMilli()
, we make our own by dividing the UnixNano()
by 1e6
, or 1,000,000,
to get to millisecond precision.
The action we take is to call ParseForm
to see if we have any parameters. If we are fetching newer messages, 'since'
will be included, and we set things up for our polling-style request. Otherwise our defaults will let us fetch messages older than the given 'before'
time, or older than the current time if no parameter was sent. From there, we ensure that if we did receive a time parameter, it parses properly to an int64
.
After that, we attempt to fetch messages. If we have been set up for polling, we will loop more than once to attempt to fetch messages, stopping early if we find any. We use Sleep
to ensure that we take some time between our fetches, otherwise we would return quickly after having made ten queries to the database. Finally, we return whatever messages we have found, if any, to the user.
It’s possible to return an empty slice of messages, so our client will have to handle that.
Writing the chat frontend#
We have one tiny server-side piece to do before we write our chat client, and that is to write the static file-serving chat handler. It looks almost exactly like our index handler:
func APIChatHandler(w http.ResponseWriter, r *http.Request) {
http.ServeFile(w, r, "chat.html")
}
Writing the HTML#
With that done, let’s talk about writing chat.html
. We are going to keep the HTML and CSS fairly minimal. It will be a list of messages stored in a div, plus a form containing the chat text box and its send button.
The base of it will look like this, and we will be filling in the <script>
tag with all of the parts necessary to interact with our server.
<!DOCTYPE html>
<html lang="en">
<head>
<style>
.message {
display: flex;
flex-direction: row;
justify-content: flex-start;
padding: 0 10px 0 10px;
}
span {
margin-left: 10px;
}
</style>
</head>
<body style="margin: 0;">
<div style="display: flex; flex-direction: column; height: 100vh;">
<h1 style="margin: 10px;">Chat Server</h1>
<div
style="display: flex; flex-direction: column-reverse; flex: 1 1 0%; width: 100vw; overflow-y: auto;"
>
<div id="messages" aria-live="polite">
<div class="message">
<button onclick="getOlderMessages()">Get Older Messages</button>
</div>
</div>
</div>
<form
method="post"
action="/api/messages"
style="display: flex; margin: 10px;"
>
<input
type="text"
name="message"
id="message-text"
style="flex-grow: 2; border: 1px solid #ccc; border-radius: 5px; padding: 10px;"
/>
<button
onclick="sendMessage()"
type="submit"
style="border-radius: 5px;"
>
Send
</button>
</form>
</div>
<script>
// We will be writing our frontend code here!
</script>
</body>
</html>
Let’s talk about the important parts of the above HTML and CSS before moving on to the JavaScript. There are a lot of important concepts here, and more than we will be able to mention, because we’re not frontend specialists. But we did talk with one (Thank you Katie Sylor-Miller), to make sure we’re not giving you an actively bad example, and in so doing learned quite a lot, and continue to be awed by this specialization within the programming diaspora. There is incredible technical depth on the frontend, and it is great to have excellent resources to help with this essential part of web programming.
Beyond thanking frontend specialists, there are a few key things to note with the above. We have two important HTML elements with id
s; the messages
div, which will contain all of our chat messages, and the message-text
input which will hold the value for whatever message our user is sending. The messages
div also has the aria-live="polite"
attribute, which is important for the accessibility of screen readers. ARIA live regions are essential for any part of a page that will change dynamically. We are setting this one to "polite"
to indicate that it should be shared with the reader so long as they are idle.
We use a form for our message submission to more closely match a normal data submission style within HTML. It also gives us a fallback for submitting messages if the user doesn’t have JavaScript enabled. Unfortunately for them, we don’t yet have a way to display messages without JavaScript, but that could be added, and our chat application would then be able to support that rare but possible case.
Writing the JavaScript#
We have two onclick handlers, getOlderMessages()
and sendMessage()
. We will be writing these in our JavaScript section to handle the concepts after which they are named.
On that note, we should talk about our requirements and then write our code.
Our frontend will need to send and retrieve messages . Retrieving messages has a few cases to handle:
- There’s the initial page load, where we fetch the ten most recent messages.
- We also want to fetch messages older or newer than a given time.
- Retrieved messages need to be either appended to our
messages
div if they are newer, or prepended if they are older.
- Finally, because we are using polling, we want to continually check to see if there are new messages.
Let’s start with sending messages .
var messages = document.getElementById("messages"),
messageInput = document.getElementById("message-text")
function sendMessage() {
// Prevent the default form submission
event.preventDefault()
if (messageInput.value === "") {
// Prevent sending empty messages
return
}
fetch("/api/messages", {
method: "post",
headers: {
Accept: "application/json",
"Content-Type": "application/json",
},
body: JSON.stringify({ text: messageInput.value }),
})
.then(() => {
// Remove the sent message text on success
// and make the text box selected again
messageInput.value = ""
messageInput.focus()
})
.catch((error) => {
console.log("Error sending message")
console.log(error)
})
}
In the above we set up our connections to the page, both the messages container and the message-text input section. A message is sent by the user submitting the message form we have set up. However, that will send a message to our server with the application/x-www-form-urlencoded content type. While we do support that, it’s not our ideal case; we’d much prefer application/json. In order to receive that, we need to cancel the default form submission and set up our own POST request, which will send JSON.
When things succeed, we should clear the message box, both as an indication to the user that the message send succeeded, and so they don’t have to delete the message that’s already been sent.
Moving on, we need to write our code that will fetch and display messages.
function getMessages(url, display, callback) {
fetch(url)
.then((resp) => resp.json())
.then((data) => {
data.forEach((msg) => {
display(msg)
})
setTimeout(callback, 1000)
})
.catch((error) => {
console.log("Error with request, retrying in 5 seconds.")
console.log(error)
setTimeout(callback, 5000)
})
}
function makeMessage(msg) {
var message = document.createElement("div")
message.className = "message"
message.dataset.time = msg.created_at
message.innerHTML = `<span style="font-weight: 800;">${msg.user_name}</span><span>${msg.text}</span>`
return message
}
function appendMessage(msg) {
var message = makeMessage(msg)
messages.appendChild(message)
}
function prependMessage(msg) {
var message = makeMessage(msg)
messages.insertBefore(message, messages.children[1])
}
function getOlderMessages() {
if (messages.children.length > 1) {
getMessages(
`/api/messages?before=${messages.children[1].dataset.time}`,
prependMessage,
() => {}
)
}
}
// Continually try to fetch new messages.
function getLatestMessages() {
if (messages.lastChild !== null && messages.lastChild.dataset !== undefined) {
getMessages(
`/api/messages?since=${messages.lastChild.dataset.time}`,
appendMessage,
getLatestMessages
)
} else {
getMessages("/api/messages", prependMessage, getLatestMessages)
}
}
getLatestMessages()
There is a lot here, but let’s start from the bottom.
When the page loads, we want to start fetching messages with getLatestMessages
. Because we won’t have any messages displayed yet, which means there won’t be a timestamp on the messages, we want to go to our default case of fetching the last ten messages sent.
getOlderMessages
, the function we mentioned defining before, is similar to getLatestMessages
, only it is using the before
get parameter instead of since
. It also checks to make sure that messages exist before sending a request, because if there were no messages retrieved when fetching the initial ten messages, there aren’t any others, so we shouldn’t try requesting them. Notably, instead of checking for a timestamp, it checks to make sure that there is more than one sub-element in the messages
section. There is always at least one element there, because the button “Get Older Messages” is always there.
Above these are functions for appending and prepending messages. The only caveat there is that appendMessage
adds a message to the end, whereas prependMessage
adds a message second from the top, to keep our “Get Older Messages” button in the right place.
Both of those functions call makeMessage
, which given the message text, creates the proper HTML element to add, complete with the user’s name and the proper styling.
Above all that is getMessages
, which uses the JavaScript fetch api to make a request to our server. It has a couple of additional arguments which take functions to determine what to do with the data it receives:
- display is either the appendMessage or prependMessage function, to properly handle message placement.
- callback is most likely the getLatestMessages function itself, being set up to be called again so we can continue polling.
Having callback
as the third argument allows us to pass in an empty function when we call getMessages
from getOlderMessages
, so we don’t start more automatic message fetching.
getMessages
also sets a timeout of one second, or a longer one of five seconds when there is an error. This is because a failure could be due to load on the server, so we want to have a slower retry.
Now that we have our JavaScript, we should discuss potential bottlenecks before talking about how we intend to deploy our code.
Some expected bottlenecks#
Now that we’ve written all of our code, we should consider any potential future bottlenecks arising from our architectural choices. We’ve already covered one of the possibilities above, the bottleneck of ID generation for new messages, but doing some more thinking at this stage will help us be ready to fix any issues that may arise.
Our use of SQLite is a potential bottleneck. Even though it is an incredibly performant database, the fact that it runs on the same server as our web application means that the CPU and memory cost of both the server and the database has to be split between them. Using a database like MySQL or PostgreSQL would allow there to be a separate host for the database and server, giving a little more flexibility and headroom, at the cost of requiring a network request to read and write data.
Long polling is another potential bottleneck. Because of how we have it set up, where we may make as many as ten queries for a single web request, the load on the database is increased more than the web server. In the future it would be good to switch our server to the pub-sub pattern and use a technology like websockets to remove this bottleneck.
At the moment, these are the most likely issues. There are certainly other possibilities, and it would be a good idea to think on this at more length, so we are better prepared when we run into issues. But for now, it’s a decent start and we can have a little more confidence in our chat server’s performance.
Deploying#
With everything in place, we can now talk about how we want to deploy our chat server!
We want our builds to be repeatable and our deploys to be contained . We will however still want some flexibility to handle the fact that our database is actually a file, and building and deploying a new version of our application shouldn’t require us to copy a potentially very large file around.
In order to retain these advantages whilst still allowing for decent flexibility, we are going to use Docker containers. Docker maintains several install guides for whatever operating system you might be using. You will need to install it to continue with the next sections.
Docker#
The central part of the Docker setup is the Dockerfile. We need to set one up that creates a minimal environment for our chat server whilst still supporting every piece we need. So, thinking about the specifics of what we want is important.
For the chat server, we need access to a file system in order to load our HTML files and to store our file-based database, SQLite. We also need a port to be exposed from the container so we can connect to it. Lastly, we need sufficient packages to build our Go code. Because we are using SQLite, a C
program, our program now requires cgo
to build. cgo
allows our Go program to call out to C
code directly, rather than running a C
program as a separate service and connecting via some other means. If you want to see some examples of cgo
, you should take a look at the Go package that GORM uses to interface with SQLite - https://github.com/mattn/go-sqlite3
With that, let’s make the Dockerfile.
FROM golang:1.16-alpine3.12
WORKDIR /go/src/github.com/fullstackio/reliable-go/chat-server
ENV GO111MODULE="on"
ENV PORT=8080
EXPOSE 8080
ENV DB_DSN="/data/chat_server.db"
VOLUME [ "/data" ]
RUN apk add --no-cache gcc musl-dev
COPY . .
RUN go build -v -o /go/bin/server .
CMD ["/go/bin/server"]
This will set the port number to 8080, set a location for our database and install dependencies.
An important point to note here is the VOLUME
section for our database. A volume is a concept in Docker that allows a container to access a file on the underlying host. This is super important for us here, because if we didn’t have it, our database file would be inside the container . That would mean that if we ever recreated our container, our database would be recreated when we started it up. However, because we are using a volume here, Docker will keep track of our database file outside of our container , so every change we make that involves recreating the container will not get rid of all of our data .
We can build our Dockerfile by running docker build --file Dockerfile . --tag chat-server
in our project directory. This will create the container of our chat server build and tag it with chat-server
for ease of use. You can confirm this worked with docker images
and looking for chat-server latest
.
Once that builds, which can take a while, especially due to the time it takes to install SQLite, we can run it with docker run --publish 8080:8080 --volume $(pwd)/data:/data --detach chat-server
.
As a quick explanation, the --publish 8080:8080
flag tells Docker to map port 8080 on the host to port 8080 in the container, which gives the same effect as if we’d run the chat server from the terminal.
The --volume $(pwd)/data:/data flag sets up our volume, and is an essential part of properly running this container. It maps the current directory’s data folder to the /data folder in the container. The current directory is obtained with $(pwd), a call to the pwd command that is expanded into our running command.
Finally, we --detach so we can have our terminal back, and we include chat-server because that is the image we want to run.
You can check that your container is running with docker ps.
You should now be able to head to http://localhost:8080 and see our chat server!
You can create an account with the lower form, which when successful should return you to the same page, and then log in using the same credentials on the top form. The chat client that shows up after that won’t be winning any style awards, but you can enter messages in the bottom bar and either click the “Send” button or hit enter, and your message will eventually show up in the chat.
Due to our choice of polling, the message might not show up instantly, but it should be there within a second.
Canary deploys#
One last thing before this chapter ends. We have now written a number of different projects, and combined them - like when we combined our canary deployment server with our key-value server. Let’s keep that tradition alive and combine our chat server with our canary deployment server!
This won’t be a full canary deploy, but based on how we’ve written things, it will be very easy to run both chat servers behind the canary deployment server and send a message. That will let us see the difference between the two chat servers, and also show that our Docker volume lets us safely run two instances of the chat server at the same time.
First, a little bit of setup to show the difference between the servers.
Go to the server.go file and find our last HTTP handler method, APIMessagesPostHandler. Whenever a user on this new server writes a message, we’ll append “, yeah?” to the end of it, so it will be clear when someone is using our updated chat server versus the previous one.
func (c *Config) APIMessagesPostHandler(w http.ResponseWriter, r *http.Request) {
    /* code omitted */
    userName := GetUserNameFromContext(r.Context())
    if _, err := c.CreateMessage(userName, message.Text+", yeah?"); err != nil {
        log.Printf("db create message failure: %q", err)
        w.WriteHeader(http.StatusInternalServerError)
        return
    }
    /* code omitted */
}
Then we need to build a new Docker image with this update; modifying our previous command, we can run docker build -f Dockerfile . -t chat-server-yeah. It’s important that you keep your original chat server Docker image around, likely tagged chat-server, because we need to run both it and our new image in order to properly use our canary deployment server.
Then we can run this container, but instead of mapping host port 8080 to container port 8080 again, we map host port 8081 to container port 8080: docker run --publish 8081:8080 --volume $(pwd)/data:/data --detach chat-server-yeah. This container will use the same database file as our existing server, and we now have chat servers running on ports 8080 and 8081.
Conveniently, the defaults we set up for our canary deployment server map to these two ports, so we can go to that project’s directory and go run load_balancer.go. This sets us up with a 50% random chance to hit either chat server.
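If you don’t have that project handy, the core idea is easy to sketch. The following is not our actual load_balancer.go, just a minimal stand-in that randomly proxies each request to one of the two chat servers; the ports and the 50/50 split are the only details carried over from above:

package main

import (
    "log"
    "math/rand"
    "net/http"
    "net/http/httputil"
    "net/url"
)

func main() {
    // The two chat server instances we started above.
    backends := []*url.URL{
        mustParse("http://localhost:8080"),
        mustParse("http://localhost:8081"),
    }

    proxy := &httputil.ReverseProxy{
        Director: func(r *http.Request) {
            // Pick a backend at random for every request: a 50% chance of each.
            target := backends[rand.Intn(len(backends))]
            r.URL.Scheme = target.Scheme
            r.URL.Host = target.Host
        },
    }

    log.Fatal(http.ListenAndServe(":8888", proxy))
}

func mustParse(raw string) *url.URL {
    u, err := url.Parse(raw)
    if err != nil {
        panic(err)
    }
    return u
}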
So go to http://localhost:8888 and log in with credentials you previously created (or create new ones). Now when you send a message, it will sometimes come back with “, yeah?” at the end.
Future improvements#
Though we’ve successfully made a chat server, there are a lot of places for potential improvement from both a usability and scalability standpoint. We talked about a number of potential scalability issues in several sections above: “An aside on the foreign key decision”, “A future problem with auto-incrementing IDs”, “Polling”, and “Some expected bottlenecks”. However, there are still a lot of usability improvements one could make, some of which would introduce different scalability concerns.
We could move to a web sockets model instead of using polling. The primary difference is that instead of users constantly requesting messages, we would push new messages to the users via web sockets. This would greatly decrease the number of incoming requests, but it would greatly increase the number of open connections our servers hold. In addition, we could combine a switch to web sockets with the introduction of a pub/sub mechanism, which would reduce query load on our database but would require setting up a system to handle pub/sub. A number of databases, like Redis, handle pub/sub themselves, and libraries exist to add pub/sub on top of existing databases like the SQLite we used here.
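To give a feel for that change, here is a rough, hypothetical sketch of a push-style handler built on the github.com/gorilla/websocket package (one of several possible choices). Nothing below is part of this chapter’s code; the Message type and the subscribe function are placeholders for whatever the real server and pub/sub mechanism would provide:

package main

import (
    "context"
    "log"
    "net/http"

    "github.com/gorilla/websocket"
)

// Message stands in for whatever message type the real server uses.
type Message struct {
    UserName string `json:"userName"`
    Text     string `json:"text"`
}

var upgrader = websocket.Upgrader{}

// subscribe is a placeholder for a pub/sub subscription; a real implementation
// might be backed by Redis or an in-process broker instead of database polling.
func subscribe(ctx context.Context) <-chan Message {
    ch := make(chan Message)
    go func() {
        <-ctx.Done()
        close(ch)
    }()
    return ch
}

func wsMessagesHandler(w http.ResponseWriter, r *http.Request) {
    // Upgrade the plain HTTP request to a long-lived web socket connection.
    conn, err := upgrader.Upgrade(w, r, nil)
    if err != nil {
        log.Printf("websocket upgrade failure: %q", err)
        return
    }
    defer conn.Close()

    // Push each published message to the client instead of waiting to be polled.
    for msg := range subscribe(r.Context()) {
        if err := conn.WriteJSON(msg); err != nil {
            return // the client disconnected
        }
    }
}

func main() {
    http.HandleFunc("/ws/messages", wsMessagesHandler)
    log.Fatal(http.ListenAndServe(":8080", nil))
}

The trade-off described above shows up directly in this shape: every connected client holds one long-lived connection, but the database is no longer queried on every poll.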
An important usability feature we didn’t add is having separate channels. Channels are a somewhat essential feature for chat services, as not every conversation needs to happen in the same place. Adding channels to our current chat server would not be terribly difficult from the backend perspective: you could add a Channel struct and include a foreign key to your channels in the Message struct, in the same way that there is a foreign key to the User struct. This would then allow an update to the GetMessagesByTime function to include a channel in the database query, and the backend would be able to fetch messages separated by channels. Adding channels to the chat server would be a good exercise in extending an application beyond its initial design constraints.
Future thinking#
As you advance in your Go programming, you might be looking for future projects to keep you entertained and growing. Below we suggest some reading and project ideas for you to explore on your own. If you implement these and want to discuss them with our community, come post them on our Discord chat.
Project ideas#
Implement a BitTorrent client#
Implementing BitTorrent has become a common exercise in the programming community. Some universities use it in their classes, and a quick Google search can find examples in almost every language, including Go.
But that being said, it’s an interesting exercise. Read through the official BitTorrent specification to understand the problem space.
If you want to diverge from the traditional path, don’t implement a client; implement a tracker instead. Trackers are the web servers involved in BitTorrent peer discovery, and the specification for trackers is included in the document linked above. They are often web servers with stateful backends. If you look at HDVinnie’s Torrent-Tracker-Platforms, a list of open-source torrent trackers, or Wikipedia’s comparison of BitTorrent tracker software, you can see that they are written in a variety of languages and backed by a variety of databases.
If you want to figure out what kind of availability requirements someone might request of your tracker, check out the paper “Availability in BitTorrent Systems”. This paper conducted a wide-scale analysis of tracker availability.
Build a multiplayer video game backend#
One of Nat’s favorite projects is to take classic video games and rewrite them in a modern language with added multiplayer support. Ken Pratt and Nat built a version of Asteroids called Hyperspace which uses Go as the multiplayer backend.
Some example games you could try implementing: Tetris, Frogger, Centipede and Pac-Man.
Build a cloud audit system#
As a developer running things on various cloud platforms, you can start encountering both large costs and security issues. Most of the platforms out there, such as GCP, Azure, and AWS, are built assuming you’re a company, not an individual. As such, they can be hard to manage on your own.
A cloud audit system is a tool that looks at all the actions happening in your cloud (most cloud platforms have free audit logs for you to process), and alerts you if anything unusual is happening. A rough set of requirements would be:
- parse all of the audit logs coming from the cloud since last run
- send an email if any audit logs look suspicious
- run once an hour
You could add the ability to parse the bill or other data sources. Nat sends himself emails when anyone (including Nat) runs commands on any of his VMs, and when any new resources are created.
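If you want a starting skeleton, it might look something like the sketch below. Every function in it (fetchAuditLogs, looksSuspicious, sendEmail) is a hypothetical placeholder you would need to fill in for your chosen cloud platform and email provider:

package main

import (
    "log"
    "time"
)

// AuditLog is a stand-in for whatever your cloud platform's audit log entries look like.
type AuditLog struct {
    Actor  string
    Action string
    Time   time.Time
}

// These three functions are hypothetical placeholders; a real tool would call the
// cloud provider's audit log API and an email service here.
func fetchAuditLogs(since time.Time) ([]AuditLog, error) { return nil, nil }
func looksSuspicious(l AuditLog) bool                    { return false }
func sendEmail(logs []AuditLog) error                    { return nil }

func main() {
    lastRun := time.Now().Add(-time.Hour)

    // Run once an hour.
    for now := range time.Tick(time.Hour) {
        logs, err := fetchAuditLogs(lastRun)
        if err != nil {
            log.Printf("fetching audit logs failed: %q", err)
            continue
        }
        lastRun = now

        var suspicious []AuditLog
        for _, l := range logs {
            if looksSuspicious(l) {
                suspicious = append(suspicious, l)
            }
        }
        if len(suspicious) > 0 {
            if err := sendEmail(suspicious); err != nil {
                log.Printf("sending email failed: %q", err)
            }
        }
    }
}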
Build a URL shortening service#
Steve has a deep love of URL shorteners. There’s a lot of research being done into how to create and serve billions of links quickly. You can read about things like Twitter Snowflake, or how folks approach the problem as a system design question.
Building a URL shortener in Go starts out pretty simple, but think about:
- how to scale the database
- can you keep the link creation and serving in the same binary, or should they be separated out?
- how to load-test the system (we recommend checking out k6).
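As one concrete piece to get started with, the link-creation side usually boils down to base62-encoding a numeric ID into a short code, whether that ID comes from a database sequence or a Snowflake-style generator. A minimal sketch, with the ID source left out:

package main

import "fmt"

const alphabet = "0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ"

// encodeBase62 turns a numeric ID into a short, URL-friendly code.
func encodeBase62(n uint64) string {
    if n == 0 {
        return string(alphabet[0])
    }
    var b []byte
    for n > 0 {
        b = append([]byte{alphabet[n%62]}, b...)
        n /= 62
    }
    return string(b)
}

func main() {
    // For example, ID 1234567890 becomes a six-character code.
    fmt.Println(encodeBase62(1234567890))
}

Serving is then a lookup from the short code back to the original URL, which is where the scaling and load-testing questions above come in.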
Build a mobile app backend#
This is probably the most generic problem description, but build an API that can work with a mobile app. What makes this interesting is thinking about dealing with data synchronization while the mobile apps are offline.
For extra credit, you can look into gomobile. gomobile is an extension of Go for building Android and iOS apps. It’s fairly limited, but totally possible!
Areas of more research#
Some of the areas we’ve touched on are huge topics of research. Below are some books and links if you want to look into these some more.
Distributed systems#
- Distributed Systems for Fun and Profit
- Designing Data-Intensive Applications
- Leslie Lamport’s Papers
- What We Talk About When We Talk About Distributed Systems