Essentially, I will just post the Dockerfiles and patches as necessary. The base structure is still derived from the Toronto ecosystem team's work, just updated for the new version and OpenJDK 9. The build does not use JNA at this time (investigation ongoing), which renders the system call filtering code without effect; as soon as JNA becomes active, seccomp will work. (Updated 12/13)
The Dockerfile for elasticsearch reads:
FROM openjdk:9-jdk
ENV LANG="en_US.UTF-8" JAVA_TOOL_OPTIONS="-Dfile.encoding=UTF8" _JAVA_OPTIONS="-Xmx10g -Dlog4j2.disable.jmx=true" SOURCE_DIR="/tmp/" ANT_HOME=/usr/share/ant/ PATH=$ANT_HOME/bin:$PATH
ENV JDK_JAVA_OPTIONS="--illegal-access=permit"
WORKDIR $SOURCE_DIR
COPY elasticsearch-s390x-seccomp.diff /tmp/
RUN apt-get update && apt-get install -y \
ant autoconf automake ca-certificates ca-certificates-java curl \
git libtool libx11-dev libxt-dev locales-all make maven patch \
pkg-config tar texinfo unzip wget \
&& wget https://services.gradle.org/distributions/gradle-4.3-bin.zip \
&& unzip gradle-4.3-bin.zip \
&& mv gradle-4.3/ /usr/share/gradle \
&& rm -rf gradle-4.3-bin.zip \
&& cd $SOURCE_DIR \
&& git clone https://github.com/elastic/elasticsearch \
&& cd elasticsearch \
&& git checkout v6.0.0 \
&& patch -p1 < /tmp/elasticsearch-s390x-seccomp.diff \
&& export PATH=$PATH:/usr/share/gradle/bin \
&& gradle -Dbuild.snapshot=false assemble -Djavax.net.ssl.trustStore=/usr/lib/jvm/java-9-openjdk-s390x/lib/security/cacerts -Djavax.net.ssl.trustStorePassword=changeit \
&& cd $SOURCE_DIR/elasticsearch/distribution/tar/build/distributions/ \
&& tar -C /usr/share/ -xf elasticsearch-6.0.0.tar.gz \
&& mv /usr/share/elasticsearch-6.0.0 /usr/share/elasticsearch \
&& mv /usr/share/elasticsearch/config/elasticsearch.yml /etc/ \
&& ln -s /etc/elasticsearch.yml /usr/share/elasticsearch/config/elasticsearch.yml \
&& apt-get remove -y ant autoconf automake git libtool libx11-dev libxt-dev \
maven patch pkg-config unzip wget \
&& apt-get autoremove -y \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/* /usr/share/gradle /root/.gradle/* /tmp/elasticsearch
EXPOSE 9200 9300
ENV PATH=/usr/share/elasticsearch/bin:$PATH
RUN useradd -u 3185 -m elasticsearch \
&& chown -R elasticsearch /usr/share/elasticsearch \
&& mkdir -p /data \
&& chown elasticsearch:elasticsearch /data
USER elasticsearch
CMD ["elasticsearch"]
A patch named elasticsearch-s390x-seccomp.diff enables seccomp system call filtering; the added line registers the s390x architecture alongside the existing amd64 and aarch64 entries. As long as the JNA used by Elasticsearch does not come with s390x support, the patch has no effect, but it is the right preparation for the moment that support arrives. Note that after the closing curly brace there is an empty (context) line. The file reads:
diff -uNr a/core/src/main/java/org/elasticsearch/bootstrap/SystemCallFilter.java b/core/src/main/java/org/elasticsearch/bootstrap/SystemCallFilter.java
--- a/core/src/main/java/org/elasticsearch/bootstrap/SystemCallFilter.java 2017-11-17 16:54:59.349097417 +0100
+++ b/core/src/main/java/org/elasticsearch/bootstrap/SystemCallFilter.java 2017-11-17 16:59:04.965539359 +0100
@@ -242,6 +242,7 @@
         Map<String,Arch> m = new HashMap<>();
         m.put("amd64", new Arch(0xC000003E, 0x3FFFFFFF, 57, 58, 59, 322, 317));
         m.put("aarch64", new Arch(0xC00000B7, 0xFFFFFFFF, 1079, 1071, 221, 281, 277));
+        m.put("s390x", new Arch(0x80000016, 0xFFFFFFFF, 1, 190, 11, 354, 348));
         ARCHITECTURES = Collections.unmodifiableMap(m);
     }
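Assuming the Dockerfile and elasticsearch-s390x-seccomp.diff live together in one directory (my layout, not dictated by anything above), the image used by the run commands at the end of this post can be built with:
docker build -t elasticsearch:6.0.0 .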
For Logstash, the Dockerfile reads:
FROM ibmjava:8-sdk
WORKDIR "/root"
ENV JAVA_HOME=/opt/ibm/java/jre
COPY logstash-tolerate-ibmjava-gc.diff /tmp/
RUN apt-get update && apt-get install -y \
ant gcc make patch tar unzip wget \
&& wget https://artifacts.elastic.co/downloads/logstash/logstash-6.0.0.zip \
&& unzip -u logstash-6.0.0.zip \
&& cd logstash-6.0.0 \
&& patch -p1 < /tmp/logstash-tolerate-ibmjava-gc.diff \
&& cd .. \
&& wget https://github.com/jnr/jffi/archive/master.zip \
&& unzip master.zip && cd jffi-master && ant && cd .. \
&& mkdir logstash-6.0.0/vendor/jruby/lib/jni/s390x-Linux \
&& cp jffi-master/build/jni/libjffi-1.2.so logstash-6.0.0/vendor/jruby/lib/jni/s390x-Linux/libjffi-1.2.so \
&& cp -r /root/jffi-master /usr/share \
&& cp -r /root/logstash-6.0.0 /usr/share/logstash \
&& apt-get remove -y ant make unzip wget \
&& apt-get autoremove -y && apt-get clean \
&& rm -rf /root/* \
&& rm -rf /var/lib/apt/lists/*
# Comment out the -XX:+DisableExplicitGC option in jvm.options
RUN sed -i 's/-XX:+DisableExplicitGC/# -XX:+DisableExplicitGC/g' /usr/share/logstash/config/jvm.options
VOLUME ["/data"]
EXPOSE 514 5000 8202/udp
ENV PATH=/usr/share/logstash/bin:$PATH
ENV LS_JAVA_OPTS="-Xms4g -Xmx10g"
CMD ["logstash","-f","/etc/logstash"]
The patch called logstash-tolerate-ibmjava-gc.diff removes a warning that comes from using IBM Java (as a result, no diagnostic garbage collection metrics will be shown). It reads:
diff -uNr a/logstash-core/lib/logstash/instrument/periodic_poller/jvm.rb b/logstash-core/lib/logstash/instrument/periodic_poller/jvm.rb
--- a/logstash-core/lib/logstash/instrument/periodic_poller/jvm.rb 2017-11-10 20:03:40.000000000 +0100
+++ b/logstash-core/lib/logstash/instrument/periodic_poller/jvm.rb 2017-11-17 17:33:07.034511906 +0100
@@ -65,9 +65,7 @@
       garbage_collectors.each do |collector|
         name = GarbageCollectorName.get(collector.getName())
-        if name.nil?
-          logger.error("Unknown garbage collector name", :name => name)
-        else
+        unless name.nil?
           metric.gauge([:jvm, :gc, :collectors, name], :collection_count, collector.getCollectionCount())
           metric.gauge([:jvm, :gc, :collectors, name], :collection_time_in_millis, collector.getCollectionTime())
         end
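The Logstash image builds the same way; the build context again needs to contain both the Dockerfile and logstash-tolerate-ibmjava-gc.diff:
docker build -t logstash:6.0.0 .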
Finally, Kibana's Dockerfile reads:
FROM ibmjava:8-sdk
WORKDIR "/root"
ENV PATH=/usr/share/node-v6.9.1/bin:/usr/share/kibana/bin:$PATH
RUN apt-get update && apt-get install -y \
apache2 g++ gcc git make nodejs python unzip wget tar \
&& wget https://nodejs.org/dist/v6.9.1/node-v6.9.1-linux-s390x.tar.gz \
&& tar xvzf node-v6.9.1-linux-s390x.tar.gz \
&& mv /root/node-v6.9.1-linux-s390x/ /usr/share/node-v6.9.1 \
&& cd /root/ \
&& wget https://artifacts.elastic.co/downloads/kibana/kibana-6.0.0-linux-x86_64.tar.gz \
&& tar xvf kibana-6.0.0-linux-x86_64.tar.gz \
&& mv /root/kibana-6.0.0-linux-x86_64 kibana-6.0.0 \
&& cd /root/kibana-6.0.0 \
&& mv node node_old \
&& ln -s /usr/share/node-v6.9.1/bin/node node \
&& mkdir /etc/kibana \
&& cp config/kibana.yml /etc/kibana \
&& mv /root/kibana-6.0.0/ /usr/share/kibana \
&& apt-get remove -y git make unzip wget \
&& apt-get autoremove -y && apt-get clean \
&& rm -rf /root/kibana-6.0.0-linux-x86_64.tar.gz /root/node-v6.9.1-linux-s390x.tar.gz \
&& rm -rf /var/lib/apt/lists/*
EXPOSE 5601 80
CMD ["kibana","-H","0.0.0.0"]
I typically use an elasticsearch.yml like this:
cluster.name: my-cluster
path.data: /data
http.host: 0.0.0.0
discovery.zen.minimum_master_nodes: 1
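Since path.data points at /data, which the run command below bind-mounts from ./elasticsearch-data on the host, that host directory must be writable by uid 3185 (the elasticsearch user created in the Dockerfile). Something along these lines prepares it; sudo and the exact path are assumptions on my side:
mkdir -p elasticsearch-data
sudo chown 3185 elasticsearch-data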
In my setup, kibana.yml reads:
elasticsearch.url: "http://elasticsearch:9200/"
and I start the ELK stack using:
docker network create elk
docker run --name elasticsearch --network=elk -v $PWD/elasticsearch-data:/data -v $PWD/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml -p 9200:9200 -p 9300:9300 -d elasticsearch:6.0.0
docker run --name logstash --network=elk -v $PWD/logstash-config:/etc/logstash -p 514:514 -p 5000:5000 -p 8202:8202/udp -d logstash:6.0.0
docker run --name kibana --network=elk -v $PWD/kibana.yml:/usr/share/kibana/config/kibana.yml -p 5601:5601 -d kibana:6.0.0
(Or use a docker-compose.yml roughly like the one shown here; a sketch along those lines follows below.) This setup assumes that elasticsearch-data belongs to a user with uid 3185 -- all log data will be stored there. Enjoy Elastic Stack 6.
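For reference, here is what such a docker-compose.yml could look like. This is only my sketch derived from the docker run lines above, not the original file, so double-check the mounts and tags before relying on it:
version: "2"
services:
  elasticsearch:
    image: elasticsearch:6.0.0
    ports:
      - "9200:9200"
      - "9300:9300"
    volumes:
      - ./elasticsearch-data:/data
      - ./elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml
  logstash:
    image: logstash:6.0.0
    ports:
      - "514:514"
      - "5000:5000"
      - "8202:8202/udp"
    volumes:
      - ./logstash-config:/etc/logstash
  kibana:
    image: kibana:6.0.0
    ports:
      - "5601:5601"
    volumes:
      - ./kibana.yml:/usr/share/kibana/config/kibana.yml
With either approach, Kibana should become reachable on port 5601 once the containers are up.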