Docker container exits with code 139 #24

Open
amarjohal opened this issue Sep 14, 2016 · 6 comments
@amarjohal

The Loggly container always errors out seconds after launch with "projectname_loggly-docker_1 exited with code 139"

Anyone run into this before?
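
For context, exit code 139 is Docker reporting 128 + 11, i.e. the container's main process was killed by SIGSEGV (a segmentation fault), not a normal shutdown. A quick sketch for confirming that on the host; <container> is a placeholder for the real container name or ID:

# 139 = 128 + 11, so the process received SIGSEGV
docker inspect --format '{{.State.ExitCode}} OOMKilled={{.State.OOMKilled}}' <container>
# the kernel ring buffer usually names the faulting process
dmesg | tail -n 20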

@MasterKale

I'm the guy who originally contacted Loggly about this issue. I haven't been able to figure out what's going on with either the latest release or with 1.4.

Let me know what I can do to provide log info or whatnot. Docker didn't obviously expose any worthwhile logs, just the fact that the container continually errors out.
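
For anyone trying to gather more detail, the container's own output and the host journal are the usual places to look. A hedged sketch, with <container> as a placeholder:

docker logs --timestamps <container>                       # whatever rsyslog printed before dying
journalctl -u docker.service --since "-1h"                 # the daemon's view of the exit
journalctl -k | grep -iE "segfault|general protection"     # kernel traps like the ones reported below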

@ghost

ghost commented Sep 29, 2016

We are seeing some weird behavior where, at random, one of our Loggly containers fails and exits with code 139. These are running on CoreOS.

ip-10-51-32-206 units # docker version
Client:
Version: 1.10.3
API version: 1.22
Go version: go1.5.3
Git commit: 8acee1b
Built:
OS/Arch: linux/amd64

Server:
Version: 1.10.3
API version: 1.22
Go version: go1.5.3
Git commit: 8acee1b
Built:
OS/Arch: linux/amd64

ip-10-51-32-206 units # cat /etc/lsb-release
DISTRIB_ID=CoreOS
DISTRIB_RELEASE=1010.5.0
DISTRIB_CODENAME="MoreOS"
DISTRIB_DESCRIPTION="CoreOS 1010.5.0 (MoreOS)"

systemd unit file for the loggly container:
Description=Sends all CoreOS Journal logs to Loggly
....
[Service]
TimeoutStartSec=0
User=core
ExecStartPre=/usr/bin/docker pull sendgridlabs/loggly-docker:1.5
ExecStart=-/usr/bin/docker run --name * -e TOKEN=* -e TAG=***** -p 11001:514 sendgridlabs/loggly-docker:1.5
...
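
One thing worth checking in a unit like the one above (hedged, since the elided lines may already cover it): the leading "-" on ExecStart tells systemd to treat a non-zero exit, including 139, as success, so the unit is not marked failed and Restart=on-failure would never fire. If automatic recovery is wanted, a minimal sketch is Restart=always on the unit, or alternatively Docker's own --restart=on-failure policy on the run command:

[Service]
Restart=always
RestartSec=5s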

We are using https://www.loggly.com/docs/docker-syslog/ - the container is from sendgrid - https://hub.docker.com/r/sendgridlabs/loggly-docker/ (v1.5)

Note we have another system that is using v1.0 of the container and is not seeing the issue, though the load there is potentially not as high. From CoreOS we see the below:

[Wed Sep 28 14:15:08 2016] do_general_protection: 21 callbacks suppressed
[Wed Sep 28 14:15:08 2016] traps: rs:main Q:Reg[11265] general protection ip:7fe9839fe9fe sp:7fe983ad4628 error:0 in ld-musl-x86_64.so.1[7fe9839ab000+87000]

Sep 28 21:09:16 ip-10-51-31-90.ec2.internal kernel: do_general_protection: 18 callbacks suppressed
Sep 28 21:09:16 ip-10-51-31-90.ec2.internal kernel: traps: in:imtcp[31219] general protection ip:7f9693591d9e sp:7f969365ba80 error:0 in ld-musl-x86_64.so.1[7f969353e000+87000]
Sep 28 21:09:16 ip-10-51-31-90.ec2.internal systemd[1]: Stopped docker container 1037c2870025a619e7d491f0173fa92eb23677310a4a8a6532feea2bbe2d214c.

Not sure what that means really - is there some overload in terms of logs being generated or a spike causing this?
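
Reading the traps above: rs:main Q:Reg and in:imtcp are rsyslog's main queue worker and TCP input threads, and the faulting address sits inside ld-musl-x86_64.so.1, so rsyslog is segfaulting inside the image's musl libc. That lines up with the 139 exit code (128 + SIGSEGV). Whether log volume is the trigger isn't clear from these lines alone; a hedged way to watch for recurrences on the CoreOS host:

journalctl -k -f | grep --line-buffered -iE "general protection|segfault"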

@ghost

ghost commented Oct 11, 2016

Is there any way to get this looked at?

@indira-palerra

Folks, any chance that this will get fixed soon? We love using your product; please help get this fixed to improve the overall customer experience.

@Shwetajain148

Hi @mikerowan, can you please handle this issue? It's been open for a long time and is affecting many customers.

I'll wait for your positive response. Thanks!

@bnayalivne

bnayalivne commented Mar 22, 2020

For anyone else still having this issue: after a week of the pod crashing again and again, I found an updated fork of this Docker image that updates the rsyslog configuration (and Loggly's certificate):
https://github.com/Tilkal/loggly-docker

Not sure why pull requests #29 and #26 are not merged.
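
In case it saves someone a step, building the fork locally avoids guessing at a Docker Hub image name. This assumes the fork keeps a Dockerfile at the repository root; TOKEN, TAG and the port mapping are copied from the unit file earlier in the thread:

git clone https://github.com/Tilkal/loggly-docker
cd loggly-docker
docker build -t loggly-docker-fork .
docker run -d --name loggly -e TOKEN=<your-loggly-token> -e TAG=<your-tag> -p 11001:514 loggly-docker-fork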
