Question

For those of you running Go backends in production:

What is your stack / configuration for running a Go web application?

I haven't seen much on this topic beyond people using the standard library net/http package to keep a server running. I have read about using Nginx to pass requests to a Go server (nginx with Go).

This seems a little fragile to me. For instance, the server would not automatically restart if the machine was restarted (without additional configuration scripts).

Is there a more solid production setup?

An aside about my intent: I'm planning a Go-powered REST backend server for my next project and want to make sure Go will be viable for launching the project live before I invest too much in it.

Answer

Go programs can listen on port 80 and serve HTTP requests directly. Instead, you may want to put a reverse proxy in front of your Go program, so that the proxy listens on port 80 and connects to your program on, say, port 4000. There are many reasons for doing the latter: not having to run your Go program as root, serving other websites/services on the same host, SSL termination, load balancing, logging, and so on.
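
For reference, here is a minimal sketch of what the Go side can look like: a plain net/http server bound to 127.0.0.1:4000, the same address the proxy configuration below forwards to. The route and response body are just placeholders, not something from the original question.

package main

import (
        "fmt"
        "log"
        "net/http"
)

func main() {
        // Placeholder handler; real routing/handlers go here.
        http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
                fmt.Fprintln(w, "hello from the Go backend")
        })

        // Bind to localhost only: the reverse proxy on port 80 forwards requests here,
        // so the program never needs root privileges to bind a low port.
        log.Fatal(http.ListenAndServe("127.0.0.1:4000", nil))
}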

I use HAProxy in front. Any reverse proxy could work. Nginx is also a great option (much more popular than HAProxy and capable of doing more).

haproxy.cfg
global
        log     127.0.0.1       local0
        maxconn 10000
        user    haproxy
        group   haproxy
        daemon

defaults
        log     global
        mode    http
        option  httplog
        option  dontlognull
        retries 3
        # timeouts are in milliseconds when no unit is given
        timeout connect 5000
        timeout client  50000
        timeout server  50000

# Everything arriving on port 80 goes to the Go app,
# except requests whose Host header matches the stats hostname.
frontend http
        bind :80
        acl  is_stats  hdr(host)       -i      hastats.myapp.com
        use_backend    stats   if      is_stats
        default_backend        myapp
        capture        request header Host     len     20
        capture        request header Referer  len     50

# The Go program listening on port 4000
backend myapp
        server  main    127.0.0.1:4000

# HAProxy's built-in statistics page, protected by basic auth
backend stats
        mode     http
        stats    enable
        stats    scope   http
        stats    scope   myapp
        stats    realm   Haproxy\ Statistics
        stats    uri     /
        stats    auth    username:password
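
With this configuration, requests for hastats.myapp.com are routed to HAProxy's built-in statistics page (behind the username:password basic auth above), while everything else is proxied to the Go app on 127.0.0.1:4000.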

Nginx is even simpler.
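
With Nginx, a server block that proxies to the app is essentially all it takes, something along the lines of location / { proxy_pass http://127.0.0.1:4000; } plus the usual proxy_set_header lines for Host and the client IP.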

To have the program start at boot (and stop cleanly at shutdown), I use an Upstart job.

/etc/init/myapp.conf
# Start when the system enters a normal multi-user runlevel, stop on shutdown/reboot.
start on runlevel [2345]
stop on runlevel [!2345]

# Run from the app directory as the unprivileged myapp user and group.
chdir /home/myapp/myapp
setgid myapp
setuid myapp

# Append stdout and stderr to log files under _logs/.
exec ./myapp start 1>>_logs/stdout.log 2>>_logs/stderr.log
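
With this job in place the app comes back automatically after a reboot, and sudo start myapp, sudo stop myapp and sudo status myapp control it manually; adding a respawn stanza would also restart it after a crash.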

Another aspect is deployment. One option is to deploy by just sending the binary file of the program and the necessary assets. This is a pretty great solution IMO. I use the other option: compiling on the server. (I'll switch to deploying with binary files when I set up a so-called "Continuous Integration/Deployment" system.)
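
If the server's OS or architecture differs from your development machine, the Go toolchain can cross-compile (for example GOOS=linux GOARCH=amd64 go build), so shipping a single binary still works without installing Go on the server.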

~/myapp/

Overall, the whole thing is not very different from any other server setup: you have to have a way to run your code and have it serve HTTP requests. In practice, Go has proved to be very stable for this stuff.
