This is the first entry in the “Classical synchronization problems” chapter of “The Little Book of Semaphores”, the “producer-consumer problem”.

The text describes the problem as a common pattern where some threads are producers and some are consumers. It imposes the following two constraints on the system:

  • While an item is being added to or removed from the buffer, the buffer is in an inconsistent state. Therefore, threads must have exclusive access to the buffer.
  • If a consumer thread arrives while the buffer is empty, it blocks until a producer adds a new item.

This is one of those cases where Go’s primitives provide everything needed to solve the problem, as those two constraints are met by channels without any additional work.

The code for the producer looks like this:

	event := waitForEvent()
	ch <- event

and the consumer looks like this:

	event := <-ch
	process(event)

This scales to any number of producers and consumers.

Even in the face of a trivial solution, I think it’s important to stop and think about it for a second. The solution as presented in the book requires two semaphores: one to act as a mutex guarding access to the buffer, and one to keep track of the number of events in the queue. Furthermore, the solution depends on a delicate balance between the two semaphores to prevent deadlocks. Go’s runtime takes care of all of this, which is no small feat.

In a refinement, the book introduces the notion of a finite buffer, to deal with the very real situation of finite system resources. This is actually the default situation in Go, as events are handed directly between goroutines, not queued, as is the case in the book. Implementing an “infinite” buffer in Go requires the introduction of a helper goroutine that consumes events from the producers, stores them in a queue local to that goroutine, and sends them to the consumers:

	// Event must be a nilable type (an interface or a pointer)
	// for the comparisons against nil below.
	func helper(input, output chan Event) {
		var queue []Event
		var event Event

		for {
			// Only offer the send case when we hold an event:
			// sending on a nil channel blocks forever, so setting
			// out to nil disables that case in the select.
			out := output
			if event == nil {
				out = nil
			}

			select {
			case out <- event:
				event = nil
			case e := <-input:
				queue = append(queue, e)
			}

			if event == nil && len(queue) > 0 {
				event, queue = queue[0], queue[1:]
			}
		}
	}

What’s going on here?

If a producer is trying to send an event, we want to receive it; that’s the <-input case. If an event is available, we want to send it out to a consumer; that’s the out <- event case. The reason for not writing output <- event directly is that an event is not necessarily available on every iteration. This is easily handled by changing the channel on which we send the event out. If an event is available, out is the output channel, and we send the event over it. If no event is available, we set out to nil: sending to a nil channel blocks forever, which means the select can never choose that case. When we receive an event from a producer, we append it to the queue, and at the bottom of the loop we check whether we need to pull the next event out of the queue.

This queue is infinite, as discussed in the book. If we wanted a finite buffer, we would go back to using a simple channel, which blocks if all the consumers are busy. We would make the channel buffered, using the channel itself as the queue, which causes the producers to block only if they are producing much faster than the consumption rate.

Once again, Go’s primitives allow for a very simple and straightforward solution to what’s otherwise a moderately complicated problem.