Versions: Go 1.10.2, Kafka 4.4.1, Docker 18.03.1

I'm trying to use Shopify's Sarama package to test out my Kafka instance. I used Docker Compose to stand up Kafka/ZooKeeper, and it is all running successfully.

When I try to create a Producer client with Sarama, an error is thrown.

When I run the following:

    package main

    import (
        "github.com/Shopify/sarama"
    )

    func main() {
        // Setup configuration
        config := sarama.NewConfig()
        config.Producer.Return.Successes = true
        config.Producer.Partitioner = sarama.NewRandomPartitioner
        config.Producer.RequiredAcks = sarama.WaitForAll

        brokers := []string{"localhost:29092"}
        producer, err := sarama.NewAsyncProducer(brokers, config)
        if err != nil {
            // Should not reach here
            panic(err)
        }

        defer func() {
            if err := producer.Close(); err != nil {
                // Should not reach here
                panic(err)
            }
        }()
    }

I get this output:

    [sarama] 2018/06/12 17:22:05 Initializing new client
    [sarama] 2018/06/12 17:22:05 client/metadata fetching metadata for all topics from broker localhost:29092
    [sarama] 2018/06/12 17:22:05 Connected to broker at localhost:29092 (unregistered)
    [sarama] 2018/06/12 17:22:05 client/metadata got error from broker while fetching metadata: EOF
    [sarama] 2018/06/12 17:22:05 Closed connection to broker localhost:29092
    [sarama] 2018/06/12 17:22:05 client/metadata no available broker to send metadata request to
    [sarama] 2018/06/12 17:22:06 Closing Client
    panic: kafka: client has run out of available brokers to talk to (Is your cluster reachable?)

    goroutine 1 [running]:
    main.main()
        /Users/benwornom/go/src/github.com/acstech/doppler-events/testprod/main.go:29 +0x3ec
    exit status 2

Sarama did try several times in a row to create a producer client, but failed each time.
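
I believe the repeated attempts come from the client's metadata retry settings. For context, here is a minimal sketch (same broker address as above; the retry values are only illustrative, not necessarily what I used) of how those settings and the logger that emits the "[sarama]" lines are wired up:

    package main

    import (
        "log"
        "os"
        "time"

        "github.com/Shopify/sarama"
    )

    func main() {
        // This is what emits the "[sarama] ..." lines shown above.
        sarama.Logger = log.New(os.Stdout, "[sarama] ", log.LstdFlags)

        config := sarama.NewConfig()
        // How many metadata attempts are made, and the delay between them,
        // before the client gives up with "run out of available brokers".
        config.Metadata.Retry.Max = 5                           // illustrative value
        config.Metadata.Retry.Backoff = 500 * time.Millisecond  // illustrative value

        producer, err := sarama.NewAsyncProducer([]string{"localhost:29092"}, config)
        if err != nil {
            log.Fatalln(err)
        }
        defer producer.Close()
    }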

My understanding of Sarama's "NewAsyncProducer" method is that it calls "NewClient", which is invoked regardless of whether you are creating a producer or a consumer. NewClient attempts to gather metadata from the Kafka broker, and that is the step that fails in my situation. I know it is connecting to the Kafka broker, but once it connects it seems to break. Any advice would be helpful. My network connection is strong, and I can't think of anything interfering with the server. As far as I know, I only have one broker and one partition for the existing topic. I don't think I have to manually assign a topic to a broker. If my client is connecting to the broker, why can't I establish a lasting connection for my producer?
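
For reference, my understanding is that the metadata step can be exercised on its own with something like this sketch (same broker address as above; if my reading of the API is right, this should hit the same metadata request that NewAsyncProducer makes internally):

    package main

    import (
        "log"

        "github.com/Shopify/sarama"
    )

    func main() {
        config := sarama.NewConfig()

        // NewClient performs the initial metadata fetch itself, so if the broker
        // really is unreachable this should fail the same way the producer does.
        client, err := sarama.NewClient([]string{"localhost:29092"}, config)
        if err != nil {
            log.Fatalln("client creation failed:", err)
        }
        defer client.Close()

        // Ask for the topic list, which forces another metadata round trip.
        topics, err := client.Topics()
        if err != nil {
            log.Fatalln("metadata fetch failed:", err)
        }
        log.Println("topics:", topics)
    }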

This is from the Kafka log file right before it dies:

    __consumer_offsets-5 -> Vector(1), connect-offsets-23 -> Vector(1), __consumer_offsets-43 -> Vector(1), __consumer_offsets-32 -> Vector(1), __consumer_offsets-21 -> Vector(1), __consumer_offsets-10 -> Vector(1), connect-offsets-20 -> Vector(1), __consumer_offsets-37 -> Vector(1), connect-offsets-9 -> Vector(1), connect-status-4 -> Vector(1), __consumer_offsets-48 -> Vector(1), __consumer_offsets-40 -> Vector(1), __consumer_offsets-29 -> Vector(1), __consumer_offsets-18 -> Vector(1), connect-offsets-14 -> Vector(1), __consumer_offsets-7 -> Vector(1), __consumer_offsets-34 -> Vector(1), __consumer_offsets-45 -> Vector(1), __consumer_offsets-23 -> Vector(1), connect-offsets-6 -> Vector(1), connect-status-1 -> Vector(1), connect-offsets-17 -> Vector(1), connect-offsets-0 -> Vector(1), connect-offsets-22 -> Vector(1), __consumer_offsets-26 -> Vector(1), connect-offsets-11 -> Vector(1), __consumer_offsets-15 -> Vector(1), __consumer_offsets-4 -> Vector(1), __consumer_offsets-42 -> Vector(1), __consumer_offsets-9 -> Vector(1), __consumer_offsets-31 -> Vector(1), __consumer_offsets-20 -> Vector(1), connect-offsets-3 -> Vector(1), __consumer_offsets-1 -> Vector(1), __consumer_offsets-12 -> Vector(1), connect-offsets-8 -> Vector(1), connect-offsets-19 -> Vector(1), connect-status-3 -> Vector(1), __confluent.support.metrics-0 -> Vector(1), __consumer_offsets-17 -> Vector(1), __consumer_offsets-28 -> Vector(1), __consumer_offsets-6 -> Vector(1), __consumer_offsets-39 -> Vector(1), __consumer_offsets-44 -> Vector(1), connect-offsets-16 -> Vector(1), connect-status-0 -> Vector(1), connect-offsets-5 -> Vector(1), connect-offsets-21 -> Vector(1), __consumer_offsets-47 -> Vector(1), __consumer_offsets-36 -> Vector(1), __consumer_offsets-14 -> Vector(1), __consumer_offsets-25 -> Vector(1), __consumer_offsets-3 -> Vector(1), __consumer_offsets-30 -> Vector(1), __consumer_offsets-41 -> Vector(1), connect-offsets-13 -> Vector(1), connect-offsets-24 -> Vector(1), connect-offsets-2 -> Vector(1), connect-configs-0 -> Vector(1), __consumer_offsets-11 -> Vector(1), __consumer_offsets-22 -> Vector(1), __consumer_offsets-33 -> Vector(1), __consumer_offsets-0 -> Vector(1), connect-offsets-7 -> Vector(1), connect-offsets-18 -> Vector(1))) (kafka.controller.KafkaController)
    kafka_1 | [2018-06-12 20:24:47,461] DEBUG [Controller id=1] Topics not in preferred replica for broker 1 Map() (kafka.controller.KafkaController)
    kafka_1 | [2018-06-12 20:24:47,462] TRACE [Controller id=1] Leader imbalance ratio for broker 1 is 0.0 (kafka.controller.KafkaController)