Linux on IBM Z and Containers: Topics on container technology and its use in Linux on the mainframe. By Utz Bacher.<br />
<br />
<b>The Journey Continues</b> (2018-11-28)<br />
This blog has been silent for a while. The reason is that I moved into a different role at IBM, with more focus on cloud solutions and less focus on the mainframe. Therefore, the blog will remain silent. I will leave the blog active, as I still see quite a number of hits from search engines, so some articles seem to be a good reference.<br />
<br />
However, there is now a blog on the horizon that continues the exciting topic of how mainframe and cloud technologies complement each other: Alice Frosi blogs at <a href="https://containersonibmz.com/">https://containersonibmz.com/</a>. I strongly suggest taking a look there and following Alice's blog, as she is right at the cutting edge of this journey.<br />
<br />
Thank you for visiting this place, and for your questions and feedback. Enjoy how the future is shaped.<br />
<br />
<b>Run containers in separate virtual machines</b> (2018-03-13)<br />
The high end version of <a href="https://www.ibm.com/blockchain/platform/" target="_blank">IBM Blockchain Platform</a> uses a combination of virtualization and containerization to crank up the isolation attributes of business networks. We have now released the underlying technology into open source: <a href="https://github.com/gotoz/runq" target="_blank">runq on github</a>.<br />
<br />
runq allows you to start containers in a slightly different fashion in a Docker environment:<br />
As soon as the container is started, a KVM guest is spun up under the covers. Inside that guest, a minimal Linux environment boots and then runs the container workload. All this happens quite transparently, and container images can be reused without change (unless they do weird things).<br />
<br />
There have been similar approaches to this goal, most notably <a href="https://clearlinux.org/containers" target="_blank">Clear Containers</a> and <a href="https://github.com/hyperhq/runv" target="_blank">runv</a> (and the combination of the two, which puts them into an OpenStack context: <a href="https://katacontainers.io/" target="_blank">kata</a>). In contrast, runq focuses on minimalism: it consciously refrains from implementing features that would blow up the code, and aligns closely with <a href="https://github.com/opencontainers/runc" target="_blank">runc</a>, the original runtime of containerd/docker/kubernetes and friends. At its core, runq is a few lines of code (relatively speaking). Check out the <a href="https://github.com/gotoz/runq" target="_blank">project github page</a> for a short summary of its goals.<br />
<br />
This simplicity shows when installing and working with runq: use your IBM Z or x86 environment, follow a few simple steps on <a href="https://github.com/gotoz/runq">https://github.com/gotoz/runq</a>, and you are ready to start deploying containers in virtual machines. runq is built in containers (with <span style="font-family: "courier new" , "courier" , monospace;">make release release-install</span>), so no build prerequisites are necessary.<br />
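The few steps, sketched end to end from memory of the project page (verify the exact make targets and the runtime name there; git, make and a recent Docker are assumed, and the image name is just an example):<br />

```shell
# Build runq inside containers and install it, then start a container
# in its own KVM guest by selecting the runq runtime at docker run time;
# the image itself needs no change.
git clone https://github.com/gotoz/runq.git
cd runq
make release release-install

docker run -ti --rm --runtime runq alpine sh
```

The same image runs under runc and runq; only the runtime selection differs.<br />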
<br />
Note for RHEL users: your kernel must support KVM -- this is typically the case for recent distributions that carry a 4.x kernel. RHEL 7.4 is currently out of luck, but the public RHEL 7.5 beta documentation raises hopes.<br />
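Whether your running kernel exposes KVM is easy to check -- a generic Linux check, not specific to runq: once the kvm module is loaded, /dev/kvm exists.<br />

```shell
# Check for KVM support: the kvm module exposes /dev/kvm once it is loaded
# (try "modprobe kvm" as root first if the device node is missing).
if [ -e /dev/kvm ]; then
    echo "KVM available"
else
    echo "no /dev/kvm"
fi
```
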
<br />
Note for SLES users: if you have not installed SUSE's qemu package, you need to set <span style="font-family: "courier new" , "courier" , monospace;">sysctl vm.allocate_pgste=1</span> (e.g. write that setting into a file in /etc/sysctl.d/).<br />
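To make that setting survive reboots, a minimal sketch (the file name 99-pgste.conf is my own choice; any *.conf name under /etc/sysctl.d works):<br />

```shell
# Rehearse in a scratch directory first; on the real SLES host set
# SYSCTL_D=/etc/sysctl.d and run as root.
SYSCTL_D="${SYSCTL_D:-$(mktemp -d)}"
echo "vm.allocate_pgste = 1" > "$SYSCTL_D/99-pgste.conf"
cat "$SYSCTL_D/99-pgste.conf"
# Apply immediately without a reboot (as root):  sysctl vm.allocate_pgste=1
```
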
<br />
If none of that affects you, go have a try -- <a href="https://github.com/gotoz/runq" target="_blank">it's simple and quick</a>.<br />
<br />
<b>Handy Search Engine for Docker Hub Images</b> (2018-03-09)<br />
<a href="http://soaphub.org/imagehub/">http://soaphub.org/imagehub/</a> provides a nice way to search Docker Hub. You can search for strings in namespaces, and the result also displays which architectures back each image.<br />
<br />
<b>ClefOS is official</b> (2018-03-08)<br />
Or more precisely, <a href="https://hub.docker.com/_/clefos/" target="_blank">ClefOS is an official image</a>. "Official image" means it is part of the curated content on Docker Hub. It is also (AFAIK) the first official image that does not yet support other platforms. Admittedly, this may be a dubious distinction, since ClefOS is the equivalent of CentOS, which has no s390x backing. ClefOS cannot be named CentOS, since Neale Ferguson and company at Sine Nomine Associates are not part of the CentOS organization. However, as you can see, it follows CentOS very closely:<br />
<blockquote class="tr_bq">
<span style="font-family: "Courier New",Courier,monospace;"><span style="font-size: x-small;">$ docker run -ti clefos<br />Unable to find image 'clefos:latest' locally<br />latest: Pulling from library/clefos<br />26dbd8e1d5ff: Pull complete<br />Digest: sha256:8e89216b23e7a5716a7e31de352ed777769738d258a1e20cc9cff06e39316717<br />Status: Downloaded newer image for clefos:latest<br />bash-4.2# cat /etc/os-release<br />NAME="CentOS Linux"<br />VERSION="7 (Core)"<br />ID="centos"<br />ID_LIKE="rhel fedora"<br />VERSION_ID="7"<br />PRETTY_NAME="CentOS Linux 7 (Core)"<br />ANSI_COLOR="0;31"<br />CPE_NAME="cpe:/o:centos:centos:7"<br />HOME_URL="https://www.centos.org/"<br />BUG_REPORT_URL="https://bugs.centos.org/"<br /><br />CENTOS_MANTISBT_PROJECT="CentOS-7"<br />CENTOS_MANTISBT_PROJECT_VERSION="7"<br />REDHAT_SUPPORT_PRODUCT="centos"<br />REDHAT_SUPPORT_PRODUCT_VERSION="7"<br /><br />bash-4.2#</span></span></blockquote>
This gives you a great option for enabling CentOS (or RHEL) based images on IBM Z. The identification above matters, since some projects check the distribution and do not know ClefOS, but do know CentOS.<br />
<br />
<b>ELK Revisited, Version 6</b> (2017-11-17, updated 2017-12-13)<br />
A <a href="http://containerz.blogspot.com/2017/09/elastic-stack-on-z.html" target="_blank">previous post</a> showed how the Elastic Stack can be used on Linux on Z. It was based on version 5.5.2 at the time. If you are looking at using the latest version 6.0.0, read on...<br />
Essentially I will just post the Dockerfiles and patches as necessary. The base structure is still derived from the Toronto ecosystem team's work, just updated for the new version and OpenJDK 9. The build does not use JNA at this time (investigation ongoing), which renders the system call filtering code without effect -- but as soon as JNA becomes active, seccomp will work. (Updated 12/13)<br />
The <i>Dockerfile </i>for elasticsearch reads:<br />
<blockquote class="tr_bq">
<span style="font-family: "courier new" , "courier" , monospace;"><span style="font-size: x-small;"><span style="font-family: "courier new" , "courier" , monospace;">FROM openjdk:9-jdk<br />ENV LANG="en_US.UTF-8" JAVA_TOOL_OPTIONS="-Dfile.encoding=UTF8" _JAVA_OPTIONS="-Xmx10g -Dlog4j2.disable.jmx=true" SOURCE_DIR="/tmp/" ANT_HOME=/usr/share/ant/ PATH=$ANT_HOME/bin:$PATH<br />ENV JDK_JAVA_OPTIONS="--illegal-access=permit"<br />WORKDIR $SOURCE_DIR<br /><br />COPY elasticsearch-s390x-seccomp.diff /tmp/<br />RUN apt-get update && apt-get install -y \<br /> ant autoconf automake ca-certificates ca-certificates-java curl \<br /> git libtool libx11-dev libxt-dev locales-all make maven patch \<br /> pkg-config tar texinfo unzip wget \<br /> && wget https://services.gradle.org/distributions/gradle-4.3-bin.zip \<br /> && unzip gradle-4.3-bin.zip \<br /> && mv gradle-4.3/ /usr/share/gradle \<br /> && rm -rf gradle-4.3-bin.zip \<br /> && cd $SOURCE_DIR \<br /> && git clone https://github.com/elastic/elasticsearch \<br /> && cd elasticsearch \<br /> && git checkout v6.0.0 \<br /> && patch -p1 < /tmp/elasticsearch-s390x-seccomp.diff \<br /> && export PATH=$PATH:/usr/share/gradle/bin \<br /> && gradle -Dbuild.snapshot=false assemble -Djavax.net.ssl.trustStore=/usr/lib/jvm/java-9-openjdk-s390x/lib/security/cacerts -Djavax.net.ssl.trustStorePassword=changeit \<br /> && cd $SOURCE_DIR/elasticsearch/distribution/tar/build/distributions/ \<br /> && tar -C /usr/share/ -xf elasticsearch-6.0.0.tar.gz \<br /> && mv /usr/share/elasticsearch-6.0.0 /usr/share/elasticsearch \<br /> && mv /usr/share/elasticsearch/config/elasticsearch.yml /etc/ \<br /> && ln -s /etc/elasticsearch.yml /usr/share/elasticsearch/config/elasticsearch.yml \<br /> && apt-get remove -y ant autoconf automake git libtool libx11-dev libxt-dev \<br /> maven patch pkg-config unzip wget \<br /> && apt-get autoremove -y \<br /> && apt autoremove -y \<br /> && apt-get clean \<br /> && rm -rf /var/lib/apt/lists/* 
/usr/share/gradle /root/.gradle/* /tmp/elasticsearch<br /><br />EXPOSE 9200 9300<br /><br />ENV PATH=/usr/share/elasticsearch/bin:$PATH<br /><br />RUN useradd -u 3185 -m elasticsearch \<br /> && chown -R elasticsearch /usr/share/elasticsearch \<br /> && mkdir -p /data \<br /> && chown elasticsearch:elasticsearch /data<br /><br />USER elasticsearch<br />CMD ["elasticsearch"]</span></span></span></blockquote>
A patch named <i>elasticsearch-s390x-seccomp.diff</i> enables seccomp system call filtering. As long as the JNA used by elasticsearch does not come with s390x support, the patch will have no effect, but it is the right preparation for the moment that changes. The file reads:<br />
<blockquote class="tr_bq">
<span style="font-size: x-small;"><span style="font-family: "courier new" , "courier" , monospace;">diff -uNr a/core/src/main/java/org/elasticsearch/bootstrap/SystemCallFilter.java b/core/src/main/java/org/elasticsearch/bootstrap/SystemCallFilter.java</span></span><br />
<span style="font-size: x-small;"><span style="font-family: "courier new" , "courier" , monospace;">--- a/core/src/main/java/org/elasticsearch/bootstrap/SystemCallFilter.java 2017-11-17 16:54:59.349097417 +0100</span></span><br />
<span style="font-size: x-small;"><span style="font-family: "courier new" , "courier" , monospace;">+++ b/core/src/main/java/org/elasticsearch/bootstrap/SystemCallFilter.java 2017-11-17 16:59:04.965539359 +0100</span></span><br />
<span style="font-size: x-small;"><span style="font-family: "courier new" , "courier" , monospace;">@@ -242,6 +242,7 @@</span></span><br />
<span style="font-size: x-small;"><span style="font-family: "courier new" , "courier" , monospace;"> Map&lt;String,Arch&gt; m = new HashMap&lt;&gt;();</span></span><br />
<span style="font-size: x-small;"><span style="font-family: "courier new" , "courier" , monospace;"> m.put("amd64", new Arch(0xC000003E, 0x3FFFFFFF, 57, 58, 59, 322, 317));</span></span><br />
<span style="font-size: x-small;"><span style="font-family: "courier new" , "courier" , monospace;"> m.put("aarch64", new Arch(0xC00000B7, 0xFFFFFFFF, 1079, 1071, 221, 281, 277));</span></span><br />
<span style="font-size: x-small;"><span style="font-family: "courier new" , "courier" , monospace;">+ m.put("s390x", new Arch(0x80000016, 0xFFFFFFFF, 1, 190, 11, 354, 348));</span></span><br />
<span style="font-size: x-small;"><span style="font-family: "courier new" , "courier" , monospace;"> ARCHITECTURES = Collections.unmodifiableMap(m);</span></span><br />
<span style="font-size: x-small;"><span style="font-family: "courier new" , "courier" , monospace;"> }</span><span style="font-family: "courier new" , "courier" , monospace;"></span></span><br />
<span style="font-size: x-small;"><span style="font-family: "courier new" , "courier" , monospace;"></span></span></blockquote>
(Note that after the closing curly brace there is an empty line!)<br />
For Logstash, <i>Dockerfile </i>reads:<br />
<blockquote class="tr_bq">
<span style="font-size: x-small;"><span style="font-family: "courier new" , "courier" , monospace;"><span style="font-family: "courier new" , "courier" , monospace;">FROM ibmjava:8-sdk<br />WORKDIR "/root"<br /><br />ENV JAVA_HOME=/opt/ibm/java/jre<br />COPY logstash-tolerate-ibmjava-gc.diff /tmp/<br />RUN apt-get update && apt-get install -y \<br /> ant gcc make patch tar unzip wget \<br /> && wget https://artifacts.elastic.co/downloads/logstash/logstash-6.0.0.zip \<br /> && unzip -u logstash-6.0.0.zip \<br /> && cd logstash-6.0.0 \<br /> && patch -p1 < /tmp/logstash-tolerate-ibmjava-gc.diff \<br /> && cd .. \<br /> && wget https://github.com/jnr/jffi/archive/master.zip \<br /> && unzip master.zip && cd jffi-master && ant && cd .. \<br /> && mkdir logstash-6.0.0/vendor/jruby/lib/jni/s390x-Linux \<br /> && cp jffi-master/build/jni/libjffi-1.2.so logstash-6.0.0/vendor/jruby/lib/jni/s390x-Linux/libjffi-1.2.so \<br /> && cp -r /root/jffi-master /usr/share \<br /> && cp -r /root/logstash-6.0.0 /usr/share/logstash \<br /> && apt-get remove -y ant make unzip wget \<br /> && apt-get autoremove -y && apt-get clean \<br /> && rm -rf /root/* \<br /> && rm -rf /var/lib/apt/lists/*<br /><br /># Disable Java option DisableExplicitGC<br />RUN sed -i 's/-XX\:+DisableExplicitGC/\# \-XX\:+DisableExplicitGC/g' /usr/share/logstash/config/jvm.options<br /><br />VOLUME ["/data"]<br /><br />EXPOSE 514 5000 8202/udp<br /><br />ENV PATH=/usr/share/logstash/bin:$PATH<br />ENV LS_JAVA_OPTS="-Xms4g -Xmx10g"<br /><br />CMD ["logstash","-f","/etc/logstash"]</span></span></span></blockquote>
The patch called <i>logstash-tolerate-ibmjava-gc.diff</i> removes a warning that comes from using IBM Java (as a result, no diagnostic garbage collection metrics will be shown). It reads:<br />
<blockquote class="tr_bq">
<span style="font-size: x-small;"><span style="font-family: "courier new" , "courier" , monospace;">diff -uNr a/logstash-core/lib/logstash/instrument/periodic_poller/jvm.rb b/logstash-core/lib/logstash/instrument/periodic_poller/jvm.rb</span></span><br />
<span style="font-size: x-small;"><span style="font-family: "courier new" , "courier" , monospace;">--- a/logstash-core/lib/logstash/instrument/periodic_poller/jvm.rb 2017-11-10 20:03:40.000000000 +0100</span></span><br />
<span style="font-size: x-small;"><span style="font-family: "courier new" , "courier" , monospace;">+++ b/logstash-core/lib/logstash/instrument/periodic_poller/jvm.rb 2017-11-17 17:33:07.034511906 +0100</span></span><br />
<span style="font-size: x-small;"><span style="font-family: "courier new" , "courier" , monospace;">@@ -65,9 +65,7 @@</span></span><br />
<span style="font-size: x-small;"><span style="font-family: "courier new" , "courier" , monospace;"></span></span><br />
<span style="font-size: x-small;"><span style="font-family: "courier new" , "courier" , monospace;"> garbage_collectors.each do |collector|</span></span><br />
<span style="font-size: x-small;"><span style="font-family: "courier new" , "courier" , monospace;"> name = GarbageCollectorName.get(collector.getName())</span></span><br />
<span style="font-size: x-small;"><span style="font-family: "courier new" , "courier" , monospace;">- if name.nil?</span></span><br />
<span style="font-size: x-small;"><span style="font-family: "courier new" , "courier" , monospace;">- logger.error("Unknown garbage collector name", :name => name)</span></span><br />
<span style="font-size: x-small;"><span style="font-family: "courier new" , "courier" , monospace;">- else</span></span><br />
<span style="font-size: x-small;"><span style="font-family: "courier new" , "courier" , monospace;">+ unless name.nil?</span></span><br />
<span style="font-size: x-small;"><span style="font-family: "courier new" , "courier" , monospace;"> metric.gauge([:jvm, :gc, :collectors, name], :collection_count, collector.getCollectionCount())</span></span><br />
<span style="font-size: x-small;"><span style="font-family: "courier new" , "courier" , monospace;"> metric.gauge([:jvm, :gc, :collectors, name], :collection_time_in_millis, collector.getCollectionTime())</span></span><br />
<span style="font-size: x-small;"><span style="font-family: "courier new" , "courier" , monospace;"> end</span></span></blockquote>
Finally, Kibana's <i>Dockerfile</i> reads:<br />
<blockquote class="tr_bq">
<span style="font-size: x-small;"><span style="font-family: "courier new" , "courier" , monospace;">FROM ibmjava:8-sdk<br />WORKDIR "/root"<br />ENV PATH=/usr/share/node-v6.9.1/bin:/usr/share/kibana/bin:$PATH<br /><br />RUN apt-get update && apt-get install -y \<br /> apache2 g++ gcc git make nodejs python unzip wget tar \<br /> && wget https://nodejs.org/dist/v6.9.1/node-v6.9.1-linux-s390x.tar.gz \<br /> && tar xvzf node-v6.9.1-linux-s390x.tar.gz \<br /> && mv /root/node-v6.9.1-linux-s390x/ /usr/share/node-v6.9.1 \<br /> && cd /root/ \<br /> && wget https://artifacts.elastic.co/downloads/kibana/kibana-6.0.0-linux-x86_64.tar.gz \<br /> && tar xvf kibana-6.0.0-linux-x86_64.tar.gz \<br /> && mv /root/kibana-6.0.0-linux-x86_64 kibana-6.0.0 \<br /> && cd /root/kibana-6.0.0 \<br /> && mv node node_old \<br /> && ln -s /usr/share/node-v6.9.1/bin/node node \<br /> && mkdir /etc/kibana \<br /> && cp config/kibana.yml /etc/kibana \<br /> && mv /root/kibana-6.0.0/ /usr/share/kibana \<br /> && apt-get remove -y git make unzip wget \<br /> && apt-get autoremove -y && apt-get clean \<br /> && rm -rf /root/kibana-6.0.0-linux-x86_64.tar.gz /root/node-v6.9.1-linux-s390x.tar.gz \<br /> && rm -rf /var/lib/apt/lists/*<br /><br />EXPOSE 5601 80<br /><br />CMD ["kibana","-H","0.0.0.0"]</span></span></blockquote>
I typically use an <i>elasticsearch.yml</i> like this:<br />
<blockquote class="tr_bq">
<span style="font-size: x-small;"><span style="font-family: "courier new" , "courier" , monospace;">cluster.name: my-cluster</span></span><br />
<span style="font-size: x-small;"><span style="font-family: "courier new" , "courier" , monospace;">path.data: /data</span></span><br />
<span style="font-size: x-small;"><span style="font-family: "courier new" , "courier" , monospace;">http.host: 0.0.0.0</span></span><br />
<span style="font-size: x-small;"><span style="font-family: "courier new" , "courier" , monospace;">discovery.zen.minimum_master_nodes: 1</span></span></blockquote>
In my setup, <i>kibana.yml</i> reads:<br />
<blockquote class="tr_bq">
<span style="font-size: x-small;"><span style="font-family: "courier new" , "courier" , monospace;">elasticsearch.url: "http://elasticsearch:9200/"</span></span></blockquote>
and I start ELK using<br />
<blockquote class="tr_bq">
<span style="font-size: x-small;"><span style="font-family: "courier new" , "courier" , monospace;">docker network create elk<br />docker run --name elasticsearch --network=elk -v $PWD/elasticsearch-data:/data -v $PWD/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml -p 9200:9200 -p 9300:9300 -d elasticsearch:6.0.0<br />docker run --name logstash --network=elk -v $PWD/logstash-config:/etc/logstash -p 514:514 -p 5000:5000 -p 8202:8202/udp -d logstash:6.0.0<br />docker run --name kibana --network=elk -v $PWD/kibana.yml:/usr/share/kibana/config/kibana.yml -p 5601:5601 -d kibana:6.0.0</span></span></blockquote>
(or a <i>docker-compose.yml</i> similar to the one shown <a href="http://containerz.blogspot.com/2017/09/elastic-stack-on-z.html" target="_blank">here</a>). This setup assumes that <i>elasticsearch-data</i> belongs to a user with uid 3185 -- all log data will be stored there. Enjoy Elastic Stack 6.<br />
<br />
<b>Portainer, Supported</b> (2017-11-14)<br />
The <a href="http://containerz.blogspot.com/2017/11/portainer-revisited.html" target="_blank">previous post</a> showed that <a href="https://portainer.io/" target="_blank">portainer</a> is now available for s390x. Today, the project announced a <a href="https://portainer.io/support.html" target="_blank">support offering</a> which can also apply to IBM Z.<br />
That is 22 days from the first PR to the project releasing for s390x and announcing support! Thanks, portainer.io team, this is amazing.<br />
<br />
<b>Portainer -- Revisited</b> (2017-11-08)<br />
A <a href="http://containerz.blogspot.com/2017/10/portainer.html" target="_blank">previous post</a> described how to run portainer on z. Starting today, s390x support is part of their Docker Hub image, which makes the task a bit easier. Enter<br />
<blockquote class="tr_bq">
<span style="font-family: "Courier New",Courier,monospace;">docker run -d -p 9000:9000 -v /var/run/docker.sock:/var/run/docker.sock -v /opt/portainer:/data portainer/portainer</span></blockquote>
and point your browser to port 9000. Done.<br />
Kudos to the portainer.io team for integrating s390x support so quickly!<br />
<br />
<b>Next Step: Alpine is multi-arch</b> (2017-10-26)<br />
As of today, the multi-arch manifest of <i>alpine</i> points to several architectures, including s390x. This means you can now reuse all the <span style="font-family: "Courier New",Courier,monospace;">Dockerfile</span>s saying "<span style="font-family: "Courier New",Courier,monospace;">FROM alpine</span>" without changes (no "s390x/" prefix). Likewise, things like "<span style="font-family: "Courier New",Courier,monospace;">docker run -ti alpine sh</span>" work without an <span style="font-family: "Courier New",Courier,monospace;">s390x/</span> prefix.<br />
<br />
<b>Portainer</b> (2017-10-24, updated 2017-11-08)<br />
<a href="https://portainer.io/" target="_blank">Portainer</a> is one of the major open source tools for graphically managing Docker environments. You can run portainer on any machine and point it at a Docker API endpoint to manage an s390x Docker engine.<br />
<br />
<b>Update 2017/11/08: starting today, portainer/portainer comes with s390x support. Check out <a href="http://containerz.blogspot.com/2017/11/portainer-revisited.html">containerz.blogspot.com/2017/11/portainer-revisited.html</a> for details; the steps below are not required anymore to run portainer.</b><br />
<br />
If you prefer running it on s390x, a few steps are needed as long as portainer does not build it for s390x:<br />
Install all the prerequisites (golang and node; docker also needs to be at a <a href="http://containerz.blogspot.com/2017/09/docker-ce-1709-available.html" target="_blank">decent level >= 17.05</a> to support multi-stage Dockerfiles):<br />
<blockquote class="tr_bq">
<span style="font-family: "courier new" , "courier" , monospace;">apt-get install -y golang npm nodejs-legacy<br />npm install -g bower grunt<br />npm install autoprefixer cssnano load-grunt-tasks \<br /> grunt-config grunt-contrib-clean grunt-contrib-concat \<br /> grunt-contrib-copy grunt-contrib-jshint \<br /> grunt-contrib-uglify grunt-contrib-watch \<br /> grunt-filerev grunt-html2js grunt-karma \<br /> grunt-postcss grunt-replace grunt-shell grunt-usemin</span></blockquote>
(apt is Ubuntu-specific; use the appropriate tooling to install on other distributions).<br />
Next, the builder container and the portainer base container need to be built:<br />
<blockquote class="tr_bq">
<span style="font-family: "courier new" , "courier" , monospace;">git clone https://github.com/portainer/golang-builder.git<br />cd golang-builder/builder-cross<br />docker build -t portainer/golang-builder:cross-platform .<br />cd ../..</span><br />
<span style="font-family: "courier new" , "courier" , monospace;">git clone https://github.com/portainer/docker-images.git<br />cd docker-images/base-docker-binary</span></blockquote>
Update 10/26: alpine is multi-arch, so there is no longer a need to change the Dockerfile.<br />
<blockquote class="tr_bq">
<span style="font-family: "courier new" , "courier" , monospace;">docker build -t portainer/base .</span><br />
<span style="font-family: "courier new" , "courier" , monospace;">cd ../..</span></blockquote>
So let's build portainer:<br />
<blockquote class="tr_bq">
<span style="font-family: "courier new" , "courier" , monospace;">git clone https://github.com/portainer/portainer.git<br />cd portainer<br />git checkout 1.15.0</span><br />
<span style="font-family: "courier new" , "courier" , monospace;">bower install --allow-root<br />npm install</span></blockquote>
Now <span style="font-family: "courier new" , "courier" , monospace;">gruntfile.js</span> contains <span style="font-family: "courier new" , "courier" , monospace;">amd64</span> where it should have <span style="font-family: "courier new" , "courier" , monospace;">s390x</span> -- replace the first three occurrences (i.e. in <i>grunt.registerTask</i> and the <i>docker run</i> command). A PR for (a cleaner implementation of) that has been merged upstream in the meantime, so with 1.15.1 this will not be needed anymore.<br />
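If you prefer scripting the edit, here is a hypothetical sed sketch, demonstrated on a scratch file -- note that a blanket replace would also hit occurrences beyond the first three mentioned above, so review the result in the real gruntfile.js:<br />

```shell
# Scratch file standing in for gruntfile.js, with two amd64 occurrences.
printf "grunt.registerTask('build-amd64');\nshell: 'docker run --rm img-amd64'\n" > /tmp/gruntfile-demo.js
# Replace every amd64 with s390x (GNU sed in-place edit).
sed -i 's/amd64/s390x/g' /tmp/gruntfile-demo.js
grep -c s390x /tmp/gruntfile-demo.js   # prints 2 (both lines now mention s390x)
```
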
If you like, you can now use <span style="font-family: "courier new" , "courier" , monospace;">grunt build</span> and <span style="font-family: "courier new" , "courier" , monospace;">grunt run-dev</span> to run portainer locally without building a container image. To build a container image, edit build.sh: at the end of the file, the <i>build_all</i> statement declares the target platforms. Make this <span style="font-family: "courier new" , "courier" , monospace;">build_all 'linux-s390x'</span> (or just add <i>linux-s390x</i> to the list to build for all platforms). This last change is upstream already.<br />
<blockquote class="tr_bq">
<span style="font-family: "courier new" , "courier" , monospace;">./build.sh 1.15.0</span></blockquote>
To run portainer, create a data directory and start the container:<br />
<blockquote class="tr_bq">
<span style="font-family: "courier new" , "courier" , monospace;">docker run -d -p 9000:9000 -v /var/run/docker.sock:/var/run/docker.sock -v /opt/portainer:/data portainer/portainer:linux-s390x</span></blockquote>
You can now point your browser to port 9000 and use portainer:<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgmxTI1r-PpQtB2ng4Hn9dwGpxR8X1vzXhFZc7LOkVqCjahmz09ng61DIaewEn8W1OY5UNctywWDxvOZTCPOSCMZWn8c3hnnQqjnlk1zDZ4VV4HAw63QL4FNBv3YEtoKoIHyYw8Xs2TKo0/s1600/portainer-images.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="1008" data-original-width="1600" height="403" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgmxTI1r-PpQtB2ng4Hn9dwGpxR8X1vzXhFZc7LOkVqCjahmz09ng61DIaewEn8W1OY5UNctywWDxvOZTCPOSCMZWn8c3hnnQqjnlk1zDZ4VV4HAw63QL4FNBv3YEtoKoIHyYw8Xs2TKo0/s640/portainer-images.png" width="640" /></a></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj4M5hrNS5B2TOSUBkMRbi9XSiCWencQ9zNOtxHqqOdfrgsozar3qhtCHbA87lWjI2JPNMTT5LV7lU2uJYgjuB_UQPKbRMp01jErzC8fKx0PGF5qFbDQYcckIcujqg3Aa8Q4upehaKWVUw/s1600/portainer-containers.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="1008" data-original-width="1600" height="402" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj4M5hrNS5B2TOSUBkMRbi9XSiCWencQ9zNOtxHqqOdfrgsozar3qhtCHbA87lWjI2JPNMTT5LV7lU2uJYgjuB_UQPKbRMp01jErzC8fKx0PGF5qFbDQYcckIcujqg3Aa8Q4upehaKWVUw/s640/portainer-containers.png" width="640" /></a></div>
PS: someone asked about the node info... here are three screenshots:<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgU_MtcFTcXLsuHJdz7Co89cR_IgvI3bQj6Io4-Ip2EorCiceLQNz08YkXtYDaa0b5ZRrOJ9Qta2gfZbK3bUibGAwC_VvKdeUJILt60Q2yRVZgZoezOiLPuYid0xwEMmeGqeDBrNi1rXu8/s1600/portainer-gist-1.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="987" data-original-width="1600" height="394" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgU_MtcFTcXLsuHJdz7Co89cR_IgvI3bQj6Io4-Ip2EorCiceLQNz08YkXtYDaa0b5ZRrOJ9Qta2gfZbK3bUibGAwC_VvKdeUJILt60Q2yRVZgZoezOiLPuYid0xwEMmeGqeDBrNi1rXu8/s640/portainer-gist-1.png" width="640" /></a></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiHWPjjyEtsprUr4lLxJYhYhPpZUTcTM40_lv7Z1ez5aMhL6KWpkHzBvAb38FlE2tkymLmYITImMEhKJzp7QYzIzqerDF4hvv64Fu3H4B7dMlQBeHjrotNWWtalN6KvSSJncjx_GiN2wDs/s1600/portainer-gist-2.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="987" data-original-width="1600" height="394" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiHWPjjyEtsprUr4lLxJYhYhPpZUTcTM40_lv7Z1ez5aMhL6KWpkHzBvAb38FlE2tkymLmYITImMEhKJzp7QYzIzqerDF4hvv64Fu3H4B7dMlQBeHjrotNWWtalN6KvSSJncjx_GiN2wDs/s640/portainer-gist-2.png" width="640" /></a></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhomXiJgrR6Il29g-wQyxCIoTj8-GTiG10PglT9v2m8zpQyoqBbLdNW83NYDB_XOsb6TL1yx7i3rKVLMcfX6LxlqvGS2wIHfVUbDlpxYNSlQVE-K-JUik4wrmvSxg5Wsd6WxKCYP98n0Ao/s1600/portainer-gist-3.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="1140" data-original-width="1600" height="456" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhomXiJgrR6Il29g-wQyxCIoTj8-GTiG10PglT9v2m8zpQyoqBbLdNW83NYDB_XOsb6TL1yx7i3rKVLMcfX6LxlqvGS2wIHfVUbDlpxYNSlQVE-K-JUik4wrmvSxg5Wsd6WxKCYP98n0Ao/s640/portainer-gist-3.png" width="640" /></a></div>
<br />Utz Bacherhttp://www.blogger.com/profile/16434228631659450390noreply@blogger.com0tag:blogger.com,1999:blog-1252230304278490810.post-33464367934255688462017-10-23T23:34:00.000+02:002017-10-24T07:59:40.147+02:00Docker and Kubernetes. Kubernetes and Docker.Kubernetes has become <i>the</i> trending orchestration solution for containers. All big Cloud providers bet on it, and even in smaller companies, k8s (written that way as there are 8 letters between <i>k</i> and <i>s</i>) is what all the hip developers go for.<br />
Docker is still used to build all the images, and they have turned it into a commercial product used by many customers.<br />
Kubernetes has focused on being an extensible and scalable framework, and is still growing fast; it has a credible reputation for managing data center scale with best possible control. In fact, <i>control </i>is IMHO the word that describes its nature best.<br />
Docker has chosen to put user experience first: it provides rich functionality with sane defaults, but users don't have to think about it -- Docker simply and quickly does what you mean. <i>User experience</i> is (IMHO) Docker's core characteristic.<br />
<a name='more'></a><br />
Over time, Docker has picked up a lot of features that used to be specific to Kubernetes, and Kubernetes attempted to become easier to use (clusters can now quickly be spun up, but you still need a bunch of yaml files to get anything going).<br />
There are two paradigms out there: the "GIFEE" (Google Infrastructure for Everyone Else) crowd claims that if Google and friends are using it, it can't be wrong -- why would anyone want a less stable, less scalable, less flexible infrastructure? The opposite stance is "You are not Google", which has more truth to it than it appears at first glance.<br />
At last week's DockerCon EU, a very interesting move was announced: <a href="https://www.docker.com/kubernetes" target="_blank">Moby and Docker will support Kubernetes as an orchestration layer (optional, sitting next to Swarm)</a>, and the Docker and Kubernetes communities will move closer to each other. Docker intends to <a href="https://blog.docker.com/2017/10/kubernetes-docker-platform-and-moby-project/" target="_blank">add Kubernetes support to the upstream and open source projects</a> <a href="https://blog.docker.com/2017/10/Docker-enterprise-edition-kubernetes/" target="_blank">as well as their commercial products</a>. Kubernetes maintainers repeatedly welcomed the Docker community in theirs, though it felt they wanted or needed to emphasize their independence; there is also no blog entry of theirs on the topic yet. Remember: if you like Kubernetes as-is, the Docker integration probably does not add a lot of value.<br />
While many technical questions about what exactly this integration will look like remain unanswered, Docker seemed to put user experience first and chose the hardest way: full side-by-side operation at the CLI/API layer, as well as the ability to manage Kubernetes clusters in Docker Enterprise Edition (in Universal Control Plane, that is).<br />
After a true integration of Kubernetes orchestration in the docker stack, two orchestration layers are one too many. Until that point, we can expect two things: a lot of work happening, and the best is yet to come.Utz Bacherhttp://www.blogger.com/profile/16434228631659450390noreply@blogger.com0tag:blogger.com,1999:blog-1252230304278490810.post-68981302319468251862017-09-28T16:12:00.000+02:002017-09-28T16:12:00.200+02:00Docker CE 17.09 availableDocker, Inc. has released a new version of their Community Edition engine: 17.09. It includes IBM Z support and is <a href="https://store.docker.com/search?architecture=s390x&offering=community&type=edition" target="_blank">available on Docker Store</a> as builds for their preferred CE distributions (Ubuntu at this time), or as <a href="https://download.docker.com/linux/static/stable/s390x/" target="_blank">statically linked binary</a> for all distributions. The same considerations for installation as for 17.06 (<a href="http://containerz.blogspot.com/2017/06/first-ce-for-s390x-by-docker.html" target="_blank">Ubuntu</a>/<a href="http://containerz.blogspot.com/2017/06/docker-ce-for-all-distributions.html" target="_blank">static</a>) apply for 17.09.Utz Bacherhttp://www.blogger.com/profile/16434228631659450390noreply@blogger.com0tag:blogger.com,1999:blog-1252230304278490810.post-17898117122665521512017-09-20T18:07:00.000+02:002017-09-20T18:07:04.803+02:00IBM Z images on Docker StoreAs mentioned in a <a href="http://containerz.blogspot.com/2017/09/docker-official-images-go-multi-arch.html" target="_blank">previous post</a>, an increasing number of Docker "official images" include s390x binaries. This is now also marked accordingly on <a href="https://store.docker.com/" target="_blank">Docker Store</a>. If you search for containers, there is an "IBM Z" checkbox. 
Activating it filters for s390x images.Utz Bacherhttp://www.blogger.com/profile/16434228631659450390noreply@blogger.com0tag:blogger.com,1999:blog-1252230304278490810.post-84820968398085753722017-09-17T20:00:00.000+02:002017-09-20T15:31:50.379+02:00Elastic Stack on ZThe Elastic Stack, also known as ELK stack, is a popular choice to manage logs. ELK is an acronym for its three main components Elasticsearch, Logstash and Kibana; Elastic Stack is the more recent name for it. The stack is maintained by <a href="https://www.elastic.co/" target="_blank">Elastic</a>; Elasticsearch and Logstash run on a Java VM, while Kibana is a Node.js application. The three building blocks have a clear separation of duties:<br />
<ul>
<li>Elasticsearch is the data store: it indexes log entries and makes them searchable</li>
<li>Logstash ingests logs in various formats and can transform them for efficient processing with Elasticsearch</li>
<li>Kibana is a graphical, web-based front end to Elasticsearch</li>
</ul>
E, L and K can operate in a Linux on IBM Z environment. <a href="https://www.ibm.com/ms-en/marketplace/common-data-provider-for-z-systems" target="_blank">IBM's Common Data Provider</a> can even handle z/OS logs like SMF data. Here's how to run ELK on the mainframe -- of course in containers:<br />
<a name='more'></a>First, we need to create the E, L and K containers. A good starting point is <a href="https://github.com/linux-on-ibm-z/dockerfile-examples/">https://github.com/linux-on-ibm-z/dockerfile-examples/</a>. To optimize the Java VM used (and thus ELK performance), we can tweak these Dockerfiles a bit.<br />
<br />
Let's start with Elasticsearch: this application is known not to work with IBM Java. So for the mainframe, OpenJDK is the choice. With version 9 of OpenJDK (to be officially released in just a few days), s390x has got a Just-in-Time compiler (JIT) in the Java Virtual Machine. Obviously, that is a prerequisite for decent performance. A few tweaks are necessary when building Elasticsearch, since its code does not yet fully cater for OpenJDK 9.<br />
The official image of openjdk already provides a JITting Java Virtual Machine, so building Elasticsearch can be done with a <span style="font-family: "courier new" , "courier" , monospace;">Dockerfile</span> like this one:<br />
<blockquote class="tr_bq">
<span style="font-size: x-small;"><span style="font-family: "courier new" , "courier" , monospace;">FROM openjdk:9-jdk<br />ADD gradle.diff /tmp<br />ENV LANG="en_US.UTF-8" JAVA_TOOL_OPTIONS="-Dfile.encoding=UTF8" _JAVA_OPTIONS="-Xmx10g" SOURCE_DIR="/tmp/"<br />ENV JDK_JAVA_OPTIONS="--illegal-access=permit"<br />WORKDIR $SOURCE_DIR<br /><br />RUN apt-get update && apt-get install -y \<br /> ant autoconf automake ca-certificates ca-certificates-java curl \<br /> git libtool libx11-dev libxt-dev locales-all make maven patch \<br /> pkg-config tar texinfo unzip wget \<br /> && wget https://services.gradle.org/distributions/gradle-3.3-bin.zip \<br /><span style="font-family: "courier new" , "courier" , monospace;"> </span>&& unzip gradle-3.3-bin.zip \<br /> && mv gradle-3.3/ /usr/share/gradle \<br /> && rm -rf gradle-3.3-bin.zip \<br /># Download and build source code of elastic search<br /> && cd $SOURCE_DIR \<br /><span style="font-family: "courier new" , "courier" , monospace;"> </span>&& git clone https://github.com/elastic/elasticsearch \<br /><span style="font-family: "courier new" , "courier" , monospace;"> </span>&& cd elasticsearch \<br /><span style="font-family: "courier new" , "courier" , monospace;"> </span>&& git checkout v5.5.2 \<br /> && patch -p1 < /tmp/gradle.diff \<br /> && export PATH=$PATH:/usr/share/gradle/bin \<br /><span style="font-family: "courier new" , "courier" , monospace;"> </span>&& gradle -Dbuild.snapshot=false assemble -Djavax.net.ssl.trustStore=/usr/lib/jvm/java-9-openjdk-s390x/lib/security/cacerts -Djavax.net.ssl.trustStorePassword=changeit \<br /> && cd $SOURCE_DIR/elasticsearch/distribution/tar/build/distributions/ \<br /> && tar -C /usr/share/ -xf elasticsearch-5.5.2.tar.gz \<br /> && mv /usr/share/elasticsearch-5.5.2 /usr/share/elasticsearch \<br /> && mv /usr/share/elasticsearch/config/elasticsearch.yml /etc/ \<br /> && ln -s /etc/elasticsearch.yml /usr/share/elasticsearch/config/elasticsearch.yml \<br /># Clean up cache data 
and remove dependencies that are not required<br /> && apt-get remove -y ant autoconf automake git libtool libx11-dev libxt-dev \<br /><span style="font-family: "courier new" , "courier" , monospace;"> </span>maven patch pkg-config unzip wget \<br /> && apt-get autoremove -y \<br /> && apt autoremove -y \<br /> && apt-get clean \<br /><span style="font-family: "courier new" , "courier" , monospace;"> </span>&& rm -rf /var/lib/apt/lists/* /usr/share/gradle /root/.gradle/* /tmp/elasticsearch<br /><br />EXPOSE 9200 9300<br /><br />ENV PATH=/usr/share/elasticsearch/bin:$PATH<br /><br />CMD ["elasticsearch"]</span></span></blockquote>
In the build directory, <span style="font-family: "courier new" , "courier" , monospace;">gradle.diff</span> needs to be present -- that is required to address a glitch of gradle with openjdk 9:<br />
<blockquote class="tr_bq">
<span style="font-size: x-small;"><span style="font-family: "courier new" , "courier" , monospace;">diff -Bub a/build.gradle b/build.gradle</span><br /><span style="font-family: "courier new" , "courier" , monospace;">--- a/build.gradle 2017-09-11 21:16:10.455783363 +0000</span><br /><span style="font-family: "courier new" , "courier" , monospace;">+++ b/build.gradle 2017-09-11 21:18:47.995590949 +0000</span><br /><span style="font-family: "courier new" , "courier" , monospace;">@@ -158,7 +158,7 @@</span><br /><span style="font-family: "courier new" , "courier" , monospace;"> }</span><br /><span style="font-family: "courier new" , "courier" , monospace;"> }</span><br /><span style="font-family: "courier new" , "courier" , monospace;"> // ignore missing javadocs</span><br /><span style="font-family: "courier new" , "courier" , monospace;">- tasks.withType(Javadoc) { Javadoc javadoc -></span><br /><span style="font-family: "courier new" , "courier" , monospace;">+ tasks.withType(Javadoc) { enabled=false } /* Javadoc javadoc -></span><br /><span style="font-family: "courier new" , "courier" , monospace;"> // the -quiet here is because of a bug in gradle, in that adding a string option</span><br /><span style="font-family: "courier new" , "courier" , monospace;"> // by itself is not added to the options. By adding quiet, both this option and</span><br /><span style="font-family: "courier new" , "courier" , monospace;"> // the "value" -quiet is added, separated by a space. 
This is ok since the javadoc</span><br /><span style="font-family: "courier new" , "courier" , monospace;">@@ -166,15 +166,15 @@</span><br /><span style="font-family: "courier new" , "courier" , monospace;"> // see https://discuss.gradle.org/t/add-custom-javadoc-option-that-does-not-take-an-argument/5959</span><br /><span style="font-family: "courier new" , "courier" , monospace;"> javadoc.options.encoding='UTF8'</span><br /><span style="font-family: "courier new" , "courier" , monospace;"> javadoc.options.addStringOption('Xdoclint:all,-missing', '-quiet')</span><br /><span style="font-family: "courier new" , "courier" , monospace;">- /*</span><br /><span style="font-family: "courier new" , "courier" , monospace;">+ / *</span><br /><span style="font-family: "courier new" , "courier" , monospace;"> TODO: building javadocs with java 9 b118 is currently broken with weird errors, so</span><br /><span style="font-family: "courier new" , "courier" , monospace;"> for now this is commented out...try again with the next ea build...</span><br /><span style="font-family: "courier new" , "courier" , monospace;"> javadoc.executable = new File(project.javaHome, 'bin/javadoc')</span><br /><span style="font-family: "courier new" , "courier" , monospace;"> if (project.javaVersion == JavaVersion.VERSION_1_9) {</span><br /><span style="font-family: "courier new" , "courier" , monospace;"> // TODO: remove this hack! 
gradle should be passing this...</span><br /><span style="font-family: "courier new" , "courier" , monospace;"> javadoc.options.addStringOption('source', '8')</span><br /><span style="font-family: "courier new" , "courier" , monospace;">- }*/</span><br /><span style="font-family: "courier new" , "courier" , monospace;">- }</span><br /><span style="font-family: "courier new" , "courier" , monospace;">+ } * /</span><br /><span style="font-family: "courier new" , "courier" , monospace;">+ } */</span><br /><span style="font-family: "courier new" , "courier" , monospace;"> }</span><br /><span style="font-family: "courier new" , "courier" , monospace;"></span><br /><span style="font-family: "courier new" , "courier" , monospace;"> /* Sets up the dependencies that we build as part of this project but</span></span></blockquote>
(credits for the diff go to the Toronto porting team, who also created and maintains the initial Dockerfile at the link above). Put both files in a directory and build the container image for Elasticsearch.<br />
<br />
For Logstash, we can use IBM Java (also a Docker official image), since it won't build with Java 9 at this time. This Dockerfile does the trick:<br />
<blockquote class="tr_bq">
<span style="font-family: "courier new" , "courier" , monospace;"><span style="font-size: x-small;">FROM ibmjava:8-sdk<br />WORKDIR "/root"<br />ENV JAVA_HOME=/opt/ibm/java/jre<br />RUN apt-get update && apt-get install -y \<br /> ant gcc make tar unzip wget \<br /># Download the logstash source from github and build it<br /> && wget https://artifacts.elastic.co/downloads/logstash/logstash-5.5.2.zip \<br /> && unzip -u logstash-5.5.2.zip \<br /> && wget https://github.com/jnr/jffi/archive/master.zip \<br /> && unzip master.zip && cd jffi-master && ant && cd .. \<br /> && mkdir logstash-5.5.2/vendor/jruby/lib/jni/s390x-Linux \<br /> && cp jffi-master/build/jni/libjffi-1.2.so logstash-5.5.2/vendor/jruby/lib/jni/s390x-Linux/libjffi-1.2.so \<br /> && cp -r /root/jffi-master /usr/share \<br /> && cp -r /root/logstash-5.5.2 /usr/share/logstash \<br /># Cleanup cache data, unused packages and source files<br /> && apt-get remove -y ant make unzip wget \<br /> && apt-get autoremove -y && apt-get clean \<br /> && rm -rf /root/ \<br /> && rm -rf /var/lib/apt/lists/*<br /><br /># Define mountable directory<br />VOLUME ["/data"]<br /><br /># Expose ports<br />EXPOSE 514 5043 5000 8081 8202/udp 9292<br /><br />ENV PATH=/usr/share/logstash/bin:$PATH<br />ENV LS_JAVA_OPTS="-Xms1g -Xmx10g"<br /><br />CMD ["logstash","-f","/etc/logstash"]</span></span></blockquote>
Kibana can be built either way, with IBM Java or OpenJDK 9. Again, here is the Dockerfile:<br />
<blockquote class="tr_bq">
<span style="font-family: "courier new" , "courier" , monospace;"><span style="font-size: x-small;">FROM ibmjava:8-sdk<br />WORKDIR "/root"<br />ENV PATH=/usr/share/node-v6.9.1/bin:/usr/share/kibana/bin:$PATH<br /><br /># Install the dependencies and NodeJS<br />RUN apt-get update && apt-get install -y \<br /> apache2 g++ gcc git make nodejs python unzip wget tar \<br /> && wget https://nodejs.org/dist/v6.9.1/node-v6.9.1-linux-s390x.tar.gz \<br /> && tar xvzf node-v6.9.1-linux-s390x.tar.gz \<br /> && mv /root/node-v6.9.1-linux-s390x/ /usr/share/node-v6.9.1 \<br /># Download and setup Kibana<br /> && cd /root/ \<br /><span style="font-family: "courier new" , "courier" , monospace;"> </span>&& wget https://artifacts.elastic.co/downloads/kibana/kibana-5.5.2-linux-x86_64.tar.gz \<br /> && tar xvf kibana-5.5.2-linux-x86_64.tar.gz \<br /> && mv /root/kibana-5.5.2-linux-x86_64 kibana-5.5.2 \<br /> && cd /root/kibana-5.5.2 \<br /> && mv node node_old \<br /> && ln -s /usr/share/node-v6.9.1/bin/node node \<br /> && mkdir /etc/kibana \<br /> && cp config/kibana.yml /etc/kibana \<br /> && mv /root/kibana-5.5.2/ /usr/share/kibana \<br /># Cleanup cache data, unused packages and source files<br /> && apt-get remove -y git make unzip wget \<br /> && apt-get autoremove -y && apt-get clean \<br /> && rm -rf /root/kibana-5.5.2-linux-x86_64.tar.gz /root/node-v6.9.1-linux-s390x.tar.gz \<br /> && rm -rf /var/lib/apt/lists/*<br /><br /># Expose 5601 port used by Kibana<br /># Expose 80 port used by apache<br />EXPOSE 5601 80<br /><br /># Start Kibana service<br />CMD ["kibana","-H","0.0.0.0"]</span></span></blockquote>
To build these containers, put each <i>Dockerfile</i> into a separate directory (add the <i>gradle.diff</i> patch into the Elasticsearch directory) and start the builds using<br />
<blockquote class="tr_bq">
<span style="font-family: "courier new" , "courier" , monospace;">docker build -t <image-name> <directory-name></span></blockquote>
Create <span style="font-family: "courier new" , "courier" , monospace;">kibana.yml</span> containing:<br />
<blockquote class="tr_bq">
<span style="font-size: x-small;"><span style="font-family: "courier new" , "courier" , monospace;">elasticsearch.url=http://elasticsearch:9200/</span></span></blockquote>
And to convince Elasticsearch that running just a single instance is fine, create <span style="font-family: "courier new" , "courier" , monospace;">elasticsearch.yml</span>, containing:<br />
<blockquote class="tr_bq">
<span style="font-size: x-small;"><span style="font-family: "courier new" , "courier" , monospace;">cluster.name: my-cluster<br />path.data: /data<br />http.host: 0.0.0.0<br />discovery.zen.minimum_master_nodes: 1</span></span></blockquote>
Finally, starting each container with the right configuration is all you need to do. A quick hack is something like this:<br />
<blockquote class="tr_bq">
<span style="font-size: x-small;"><span style="font-family: "courier new" , "courier" , monospace;">docker run --name elasticsearch -v $PWD/elasticsearch-data:/data -v $PWD/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml -p 9200:9200 -p 9300:9300 -d elasticsearch:5.5.2</span></span><br />
<span style="font-size: x-small;"><span style="font-family: "courier new" , "courier" , monospace;">docker run --name logstash --link elasticsearch:elasticsearch -v $PWD/ELK:/etc/logstash -p 514:514 -p 5043:5043 -p 8081:8081 -p 8202:8202/udp -p 9292:9292 -d logstash:5.5.2</span></span><br />
<span style="font-size: x-small;"><span style="font-family: "courier new" , "courier" , monospace;">docker run --name kibana -v $PWD/kibana.yml:/usr/share/kibana/config/kibana.yml -p 5601:5601 -d kibana:5.5.2</span></span><br />
<span style="font-size: x-small;"><span style="font-family: "courier new" , "courier" , monospace;"></span></span></blockquote>
Make sure you replace the image names with the ones you used during the build.<br />
<br />
Alternatively, a compose-file is a good way to build and start things up (instead of <i>docker build</i> and <i>docker run</i>). Make sure you have the Dockerfiles (plus the diff for elasticsearch) in the directories E/, L/ and K/. Then create this docker-compose.yml file:<br />
<blockquote class="tr_bq">
<span style="font-family: "courier new" , "courier" , monospace; font-size: x-small;">version: '2'<br />services:<br /> elasticsearch:<br /> build: ./E<br /> volumes:<br /> - ./elasticsearch-data:/data<br /> - ./elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml<br /> ports:<br /> - "9200:9200"<br /> - "9300:9300"<br /> networks:<br /> - elk<br /><br /> logstash:<br /> build: ./L<br /> volumes:<br /> - ./ELK:/etc/logstash<br /> ports:<br /> - "514:514"<br /> - "5043:5043"<br /> - "8081:8081"<br /> - "8202:8202/udp"<br /> - "9292:9292"<br /> networks:<br /> - elk<br /> depends_on:<br /> - elasticsearch<br /><br /> kibana:<br /> build: ./K<br /> volumes:<br /> - ./kibana.yml:/usr/share/kibana/config/kibana.yml<br /> ports:<br /> - "5601:5601"<br /> networks:<br /> - elk<br /> depends_on:<br /> - elasticsearch<br /><br />networks:<br /> elk:<br /> driver: bridge</span></blockquote>
A docker-compose up will build the images, if necessary, and start them (<i>this has been updated 2017/09/19</i>).<br />
<br />
The ELK directory referenced during the start of Logstash contains the Logstash configuration and is mapped into that container. All files of this directory are simply concatenated and used as configuration by Logstash. This allows specifying input and output parameters of Logstash, as well as log entry parsing on the way. For instance, put a file in that directory with this content:<br />
<blockquote class="tr_bq">
<span style="font-family: "courier new" , "courier" , monospace;"><span style="font-size: x-small;">input {</span></span><br />
<span style="font-family: "courier new" , "courier" , monospace;"><span style="font-size: x-small;"> syslog {</span></span><br />
<span style="font-family: "courier new" , "courier" , monospace;"><span style="font-size: x-small;"> port => 514</span></span><br />
<span style="font-family: "courier new" , "courier" , monospace;"><span style="font-size: x-small;"> type => "docker"</span></span><br />
<span style="font-family: "courier new" , "courier" , monospace;"><span style="font-size: x-small;"> }</span></span><br />
<span style="font-family: "courier new" , "courier" , monospace;"><span style="font-size: x-small;">}</span></span><br />
<span style="font-family: "courier new" , "courier" , monospace;"></span><br />
<span style="font-family: "courier new" , "courier" , monospace;"><span style="font-size: x-small;">filter {</span></span><br />
<span style="font-family: "courier new" , "courier" , monospace;"><span style="font-size: x-small;">}</span></span><br />
<span style="font-family: "courier new" , "courier" , monospace;"></span><br />
<span style="font-family: "courier new" , "courier" , monospace;"><span style="font-size: x-small;">output {</span></span><br />
<span style="font-family: "courier new" , "courier" , monospace;"><span style="font-size: x-small;"> elasticsearch {</span></span><br />
<span style="font-family: "courier new" , "courier" , monospace;"><span style="font-size: x-small;"> hosts => "elasticsearch:9200"</span></span><br />
<span style="font-family: "courier new" , "courier" , monospace;"><span style="font-size: x-small;"> }</span></span><br />
<span style="font-family: "courier new" , "courier" , monospace;"><span style="font-size: x-small;">}</span></span></blockquote>
and you will be able to receive logs (assuming port 514 is exposed when starting the container). To use this logging infrastructure, just add "<span style="font-family: "courier new" , "courier" , monospace;">--log-driver=syslog --log-opt syslog-address=tcp://logstash-hostname:514</span>" to the "<span style="font-family: "courier new" , "courier" , monospace;">docker run</span>" parameters when starting containers (on any host). Alternatively, it can be set up permanently for the docker daemon. This will put all log messages into the Elastic stack for further processing, using the syslog protocol and docker log driver.<br />
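For the daemon-wide setup just mentioned, here is a sketch of the configuration. The log-driver and log-opts keys are standard daemon.json settings; the hostname logstash-hostname is a placeholder, and the file is written to the current directory here rather than straight into /etc/docker.

```shell
# Make the syslog driver the default for all containers on this host.
# Copy the result to /etc/docker/daemon.json and restart the docker daemon.
cat > daemon.json <<'EOF'
{
  "log-driver": "syslog",
  "log-opts": {
    "syslog-address": "tcp://logstash-hostname:514"
  }
}
EOF
# e.g.: sudo cp daemon.json /etc/docker/daemon.json && sudo systemctl restart docker
```

With this in place, plain docker run commands log to Logstash without any per-container --log-driver parameters.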
<br />
An alternative is the gelf format ("<span class="st">Graylog Extended Log Format"). This approach attaches more metadata to log messages, which Logstash understands. A Logstash configuration could look like this:</span><br />
<blockquote class="tr_bq">
<span style="font-family: "courier new" , "courier" , monospace;"><span style="font-size: x-small;">input {</span></span><br />
<span style="font-family: "courier new" , "courier" , monospace;"><span style="font-size: x-small;"> gelf {</span></span><br />
<span style="font-family: "courier new" , "courier" , monospace;"><span style="font-size: x-small;"> port => 8202</span></span><br />
<span style="font-family: "courier new" , "courier" , monospace;"><span style="font-size: x-small;"> }</span></span><br />
<span style="font-family: "courier new" , "courier" , monospace;"><span style="font-size: x-small;">}</span></span><br />
<span style="font-family: "courier new" , "courier" , monospace;"></span><br />
<span style="font-family: "courier new" , "courier" , monospace;"><span style="font-size: x-small;">filter {</span></span><br />
<span style="font-family: "courier new" , "courier" , monospace;"><span style="font-size: x-small;">}</span></span><br />
<span style="font-family: "courier new" , "courier" , monospace;"></span><br />
<span style="font-family: "courier new" , "courier" , monospace;"><span style="font-size: x-small;">output {</span></span><br />
<span style="font-family: "courier new" , "courier" , monospace;"><span style="font-size: x-small;"> elasticsearch {</span></span><br />
<span style="font-family: "courier new" , "courier" , monospace;"><span style="font-size: x-small;"> hosts => "elasticsearch:9200"</span></span><br />
<span style="font-family: "courier new" , "courier" , monospace;"><span style="font-size: x-small;"> }</span></span><br />
<span style="font-family: "courier new" , "courier" , monospace;"><span style="font-size: x-small;">}</span></span></blockquote>
Again, starting containers will render their output in ELK, e.g. in "<span style="font-family: "courier new" , "courier" , monospace;">docker run -tid --log-driver=gelf --log-opt gelf-address=udp://logstash-hostname:8202 ubuntu bash</span>".<br />
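Since Logstash concatenates every file in the mapped directory, the syslog and gelf inputs above do not have to live in one file. A quick sketch of such an ELK/ directory (the file names are arbitrary; the snippets mirror the configurations shown above):

```shell
mkdir -p ELK
# One file per concern; Logstash reads them all and merges them.
cat > ELK/10-syslog-input.conf <<'EOF'
input { syslog { port => 514 type => "docker" } }
EOF
cat > ELK/20-gelf-input.conf <<'EOF'
input { gelf { port => 8202 } }
EOF
cat > ELK/90-output.conf <<'EOF'
output { elasticsearch { hosts => "elasticsearch:9200" } }
EOF
# Effectively what Logstash sees as its configuration at startup:
cat ELK/*.conf
```

Splitting the configuration this way keeps each input self-contained, and both log drivers feed the same Elasticsearch output.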
<br />
Once the three E/L/K containers are started, point your browser to port 5601 of the (Kibana) host to work with log entries and create your individual visualizations and dashboards:<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiHN-WadsfEJQpESMMWyuGCQiyxuHYx4ukPVDKvfVZ5a4EGRF_gQvAlbxj5NweGhtsIyO_aw_-45gX5IhY7qez4ezzTRD1UrHA_pXlGEFm4yqFR6_oJ3bumRhZx5bwRvtv0kKke3L4HmAY/s1600/elk-discover.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="868" data-original-width="1600" height="346" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiHN-WadsfEJQpESMMWyuGCQiyxuHYx4ukPVDKvfVZ5a4EGRF_gQvAlbxj5NweGhtsIyO_aw_-45gX5IhY7qez4ezzTRD1UrHA_pXlGEFm4yqFR6_oJ3bumRhZx5bwRvtv0kKke3L4HmAY/s640/elk-discover.png" width="640" /></a></div>
<div class="separator" style="clear: both; text-align: center;">
</div>
<div class="separator" style="clear: both; text-align: center;">
</div>
<div class="separator" style="clear: both; text-align: center;">
</div>
<div class="separator" style="clear: both; text-align: center;">
</div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEizS2NLkucdNXQQ64w9O9Qltl2S27ekkSP36OzrJwuc_g85hhNTIuKratMf2xEH_GSP1i0LPSvr5e2tnZDh0Kap_5xiuNPLsFinfR-qRn3Qd9ZDRR0prrtKbzEIZZqSFAsOBSm6klbgh58/s1600/elk-dashboard.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="868" data-original-width="1600" height="345" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEizS2NLkucdNXQQ64w9O9Qltl2S27ekkSP36OzrJwuc_g85hhNTIuKratMf2xEH_GSP1i0LPSvr5e2tnZDh0Kap_5xiuNPLsFinfR-qRn3Qd9ZDRR0prrtKbzEIZZqSFAsOBSm6klbgh58/s640/elk-dashboard.png" width="640" /></a></div>
Utz Bacherhttp://www.blogger.com/profile/16434228631659450390noreply@blogger.com0tag:blogger.com,1999:blog-1252230304278490810.post-27680232456539903612017-09-13T22:12:00.001+02:002017-09-13T22:12:07.882+02:00Docker Official Images Go Multi-ArchStarting today, all Docker <a href="https://docs.docker.com/docker-hub/official_repos/" target="_blank">official images</a> (<a href="https://hub.docker.com/explore/" target="_blank">on Docker Hub</a> and soon easily identifiable <a href="https://store.docker.com/" target="_blank">on Docker Store</a>) are multi-arch images. Official images are credibly curated images that are maintained by Docker, the Docker community or the projects behind individual images.<br />
<br />
That does not mean that all these images are available for s390x yet, but the infrastructure is in place to integrate several architectures into official images, i.e. all official images are manifest lists. An <a href="http://containerz.blogspot.com/2016/07/multi-arch-registry.html" target="_blank">earlier post</a> explained multi-arch images and how to use them.<br />
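A manifest list is just a JSON document that points at one image per platform; the registry serves the entry matching the pulling host's architecture. The abbreviated example below is fabricated (digests elided) to show the shape; with a sufficiently recent CLI, docker manifest inspect IMAGE displays the real thing.

```shell
# Fabricated, abbreviated manifest list for a two-platform official image:
cat > manifest-list.json <<'EOF'
{
  "schemaVersion": 2,
  "mediaType": "application/vnd.docker.distribution.manifest.list.v2+json",
  "manifests": [
    { "platform": { "architecture": "amd64", "os": "linux" }, "digest": "sha256:..." },
    { "platform": { "architecture": "s390x", "os": "linux" }, "digest": "sha256:..." }
  ]
}
EOF
# An s390x host pulling this image resolves to the second entry:
grep '"architecture"' manifest-list.json
```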
<br />
At this time, quite a few official images are not just multi-arch enabled, but also carry s390x binaries.<br />
<br />
<a name='more'></a><br />
This is obviously a huge step forward for usability on s390x: you can take an image like e.g. postgres and just use it, regardless of whether you are on x86, s390x, or ppc64le. Sticking with that example, everything from the image's anchor page <a href="https://hub.docker.com/_/postgres/">https://hub.docker.com/_/postgres/</a> simply works. Also, having Z as part of the official images does not require you to change <span style="font-family: "Courier New",Courier,monospace;">Dockerfile</span>s on Z.<br />
<br />
The way these official images are built is via <a href="https://github.com/docker-library/official-images">https://github.com/docker-library/official-images</a>. So cloning this repo and grepping for s390x gives a first overview of which images are also provided for s390x. I expect this soon to be very visible on Docker Store -- they have got an architecture checkbox for IBM Z already.<br />
<br />
At the time of writing, this is the list of official images that are enabled for s390x:<br />
<ul>
<li>buildpack-deps</li>
<li>busybox</li>
<li>clojure</li>
<li>debian</li>
<li>drupal</li>
<li>erlang</li>
<li>gcc</li>
<li>ghost </li>
<li>golang</li>
<li>haproxy</li>
<li>hello-seattle</li>
<li>hello-world</li>
<li>hola-mundo</li>
<li>httpd</li>
<li>hylang</li>
<li>ibmjava</li>
<li>irssi</li>
<li>memcached</li>
<li>nextcloud</li>
<li>nginx</li>
<li>node</li>
<li>openjdk</li>
<li>owncloud</li>
<li>php</li>
<li>postgres</li>
<li>python</li>
<li>rabbitmq</li>
<li>redis</li>
<li>redmine</li>
<li>ruby</li>
<li>spiped</li>
<li>tomcat</li>
<li>tomee</li>
<li>ubuntu</li>
<li>websphere-liberty</li>
<li>wordpress</li>
</ul>
<br />
Kudos to Tianon Gravi and friends like yosifkit at Infosiftr for enabling the entire official image structure, the Docker team for the Hub/Store and general infrastructure work, and Phil Estes (IBM) and the team/community working on multi-arch support.Utz Bacherhttp://www.blogger.com/profile/16434228631659450390noreply@blogger.com0tag:blogger.com,1999:blog-1252230304278490810.post-83418617365084472202017-08-16T20:47:00.000+02:002017-08-16T20:47:38.781+02:00Book about Docker on ZIBM has published a very nice document about Docker on the mainframe.<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEinsg3YUy5hX4GTP9V5JInnxrg4PI4YoIVbbXzIsLw-i-EOBq4glf7rja13opluseosfc-THOgm2aK2GK1tUvf2uX7SVnEe45PK71PZeFjD6XWYTf1ObB2NvGyjdz38L6HbrFVOQRsNf3I/s1600/docker-book-title-page.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="1073" data-original-width="837" height="320" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEinsg3YUy5hX4GTP9V5JInnxrg4PI4YoIVbbXzIsLw-i-EOBq4glf7rja13opluseosfc-THOgm2aK2GK1tUvf2uX7SVnEe45PK71PZeFjD6XWYTf1ObB2NvGyjdz38L6HbrFVOQRsNf3I/s320/docker-book-title-page.png" width="249" /></a></div>
It is available as <a href="http://public.dhe.ibm.com/software/dw/linux390/docu/l177vd00.pdf" target="_blank">PDF download</a> as well as via <a href="https://www.ibm.com/support/knowledgecenter/linuxonibm/com.ibm.linux.z.ldvd/ldvd_c_welcome.html" target="_blank">Knowledge Center</a>. You can read one of the best container/Docker introductions I've seen on the Internet, as well as a good coverage of basic concepts around containers and Docker in the context of Z.<br />
Chapter overview:<br />
<br />
<a name='more'></a><br />
1. Docker basics<br />
2. Components in a Docker environment<br />
3. Planning for Docker<br />
4. Managing images<br />
5. Security<br />
6. Avoiding common pitfalls<br />
<br />
Here is the table of contents:<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgbBFhAF8amY25J1uz8d2pKfwkbB5hw5KIDKWaR2zROMRGJ9LUIIkhwhXfJFshq7NjNg2jEUvBHCOBVTJ9XeftAB35qKae3Jfp52uZOVmsdH7Ipkb2oZtn8kgv49UAYpRdt6oWePIW1u9U/s1600/docker-book-toc.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="1100" data-original-width="1296" height="540" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgbBFhAF8amY25J1uz8d2pKfwkbB5hw5KIDKWaR2zROMRGJ9LUIIkhwhXfJFshq7NjNg2jEUvBHCOBVTJ9XeftAB35qKae3Jfp52uZOVmsdH7Ipkb2oZtn8kgv49UAYpRdt6oWePIW1u9U/s640/docker-book-toc.png" width="640" /></a></div>
<br />
<a href="https://www.ibm.com/support/knowledgecenter/" target="_blank">IBM Knowledge Center</a> (which carries most of IBM's documentation) keeps all this searchable and always provides the most recent information.Utz Bacherhttp://www.blogger.com/profile/16434228631659450390noreply@blogger.com0tag:blogger.com,1999:blog-1252230304278490810.post-19722735300407907562017-08-16T15:57:00.004+02:002017-08-17T22:28:06.984+02:00Docker Enterprise Edition available for IBM ZIt has been signaled <a href="http://containerz.blogspot.com/2017/06/docker-in-datacenter.html" target="_blank">earlier this year</a>, now <a href="https://blog.docker.com/2017/08/docker-enterprise-edition-17-06/" target="_blank">it has been announced: Docker Enterprise Edition is available for Z</a>. Version 17.06.1 has now been made available, and not only is it a new version for x86, but it also supports Z as a "managed-to" platform. A lot of work went into this, and this is what the journey starts with:<br />
<ul>
<li>the engine of Docker Enterprise Edition will run on Linux on z Systems</li>
<li>Docker Trusted Registry (DTR) can handle s390x images (while DTR runs on x86)</li>
<li>Docker Universal Control Plane (UCP) can manage s390x nodes (while the UCP UI runs on x86)</li>
</ul>
So what exactly do these components provide? Here are more details (and some screenshots that may show the function set much better):<br />
<a name='more'></a>The <u><i><b>engine of Docker Enterprise Edition</b></i></u> is commercially supported by Docker, Inc. for the enterprise distros of the Linux distribution partners we work closely with. <a href="http://containerz.blogspot.com/2017/06/new-naming-scheme-for-docker-releases.html" target="_blank">A previous blog post</a> has discussed support timelines of CE and EE. This adds Docker, Inc. as another support provider for running a container environment on IBM Z.<br />
After installation ("install as you would on x86"), it's a plain Docker engine that runs on Z.<br />
<br />
<i><u><b>Docker Trusted Registry</b></u></i> provides a commercially supported registry environment. It offers features like a web user interface to inspect and manage repositories in a registry server. Access can be controlled and organized in teams. Note that security scanning does not know about s390x vulnerabilities at this time. Here are a few screenshots showing functionality:<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj0tKSADr8BaB6LVpoOG_6zQcyrdrZoS0gWgLd1o-h9jITYkgmMsGh2gKWjbMu0cjkTfTKv9pGFW3R2DVg1_FtK7gq1xkMJJ5G-9G2j_r54ct7OwLV3lGeU-n8GU-XOuRHnGZZJQvxHIZw/s1600/dtr22-repo.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="917" data-original-width="1600" height="366" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj0tKSADr8BaB6LVpoOG_6zQcyrdrZoS0gWgLd1o-h9jITYkgmMsGh2gKWjbMu0cjkTfTKv9pGFW3R2DVg1_FtK7gq1xkMJJ5G-9G2j_r54ct7OwLV3lGeU-n8GU-XOuRHnGZZJQvxHIZw/s640/dtr22-repo.png" width="640" /></a></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjN1piZf5uF2GTrtmythZW85BQyh5BZXvYWhid-hVxzvK4GVJfDAhyThodTldjnb9K9L8gaWzgpKHnc9QzxJYaHZkBuzW_1TlwJ5eZU8BiJDi_wzp9OuO99ddevdiR8nvE3wM9HW-I0Rvs/s1600/dtr22-webhook.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="917" data-original-width="1600" height="366" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjN1piZf5uF2GTrtmythZW85BQyh5BZXvYWhid-hVxzvK4GVJfDAhyThodTldjnb9K9L8gaWzgpKHnc9QzxJYaHZkBuzW_1TlwJ5eZU8BiJDi_wzp9OuO99ddevdiR8nvE3wM9HW-I0Rvs/s640/dtr22-webhook.png" width="640" /></a></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgtLxybdOb4LPUKsn7Z3wRPVvqlW-2MuXGQqPGg48akTAhEc1yjzLw3UYYMsPt8TGAiy54fXdXKU-p1IscqiuOlEn5e551MDwX3zyP_KaBpXywqRMqPdXmp2YRW019L0yfWt-D-cAL30Vo/s1600/dtr22-users.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="917" data-original-width="1600" height="366" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgtLxybdOb4LPUKsn7Z3wRPVvqlW-2MuXGQqPGg48akTAhEc1yjzLw3UYYMsPt8TGAiy54fXdXKU-p1IscqiuOlEn5e551MDwX3zyP_KaBpXywqRMqPdXmp2YRW019L0yfWt-D-cAL30Vo/s640/dtr22-users.png" width="640" /></a></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiijODcgazR268bSmceg04y2WbJBzE79sIeHYnxMo3KsxpF73rgADnU_kF5FHSekS0xvrWf8Ay8P0wmpM62flpSHXlgL3ce7ZJMJScarrYpedhsXNF5-8Xm1KblxbwVE7dZE1wVPy4o9Ug/s1600/dtr22-gc.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="917" data-original-width="1600" height="366" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiijODcgazR268bSmceg04y2WbJBzE79sIeHYnxMo3KsxpF73rgADnU_kF5FHSekS0xvrWf8Ay8P0wmpM62flpSHXlgL3ce7ZJMJScarrYpedhsXNF5-8Xm1KblxbwVE7dZE1wVPy4o9Ug/s640/dtr22-gc.png" width="640" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
</div>
<div class="separator" style="clear: both; text-align: center;">
</div>
<div class="separator" style="clear: both; text-align: center;">
</div>
Installation is not special -- simply push s390x images to use this with Z.<br />
<br />
<i><u><b>Docker Universal Control Plane</b></u></i> is a management tool for working in a Docker swarm cluster. It handles s390x nodes in a swarm, and operates the swarm: it lets you manage the life cycle of containers and services in the cluster and can deal with associated resources such as networks, volumes and secrets. Here are a few screenshots showing the look and feel:<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjohyOeCF6xLwlloV5FiyEBeKa8bZBeRPN8XVq7so1oQE3qqHAbM-FhRgHeW1Aa5lmuW4U6SVjaB5_JpBObdfXKsaBg7glmov-dHmcsmCyXtRFhyAX9dryoidb1R8Nv4m-tZ4dnaCcxFqE/s1600/ucp23-nodes.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="917" data-original-width="1600" height="366" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjohyOeCF6xLwlloV5FiyEBeKa8bZBeRPN8XVq7so1oQE3qqHAbM-FhRgHeW1Aa5lmuW4U6SVjaB5_JpBObdfXKsaBg7glmov-dHmcsmCyXtRFhyAX9dryoidb1R8Nv4m-tZ4dnaCcxFqE/s640/ucp23-nodes.png" width="640" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj_YGTQ5b8yNaqobO1zP4LYhvrIO4mq67lsS1qI-FqLao0UVOfESg_V98Uymvjceb6Eh6a1uQmln3otiKuLklfwnGhBqCqKdk0MffcK759bGJqbpHpdWHuE-L0W9-tipUeR1T3O4mDwraE/s1600/ucp23-services.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="917" data-original-width="1600" height="366" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj_YGTQ5b8yNaqobO1zP4LYhvrIO4mq67lsS1qI-FqLao0UVOfESg_V98Uymvjceb6Eh6a1uQmln3otiKuLklfwnGhBqCqKdk0MffcK759bGJqbpHpdWHuE-L0W9-tipUeR1T3O4mDwraE/s640/ucp23-services.png" width="640" /></a></div>
<div class="separator" style="clear: both; text-align: center;">
</div>
<div class="separator" style="clear: both; text-align: center;">
</div>
<div class="separator" style="clear: both; text-align: center;">
</div>
<div class="separator" style="clear: both; text-align: center;">
</div>
<div class="separator" style="clear: both; text-align: center;">
</div>
<div class="separator" style="clear: both; text-align: center;">
</div>
<div class="separator" style="clear: both; text-align: center;">
</div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiH96A6HNhKjBxJJT4FxzlBHigpYupMEb-EzQZSSBdvAXTUgUegK-4gDXkEvuadyCAuXInA26qR4UfjDqtWmLfTPRhfGUucUbDhUELt1wxO4-Ojr9cr9y_ZFdcphM9Zg5uFq2qApgTVwTg/s1600/ucp23-containers.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="917" data-original-width="1600" height="366" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiH96A6HNhKjBxJJT4FxzlBHigpYupMEb-EzQZSSBdvAXTUgUegK-4gDXkEvuadyCAuXInA26qR4UfjDqtWmLfTPRhfGUucUbDhUELt1wxO4-Ojr9cr9y_ZFdcphM9Zg5uFq2qApgTVwTg/s640/ucp23-containers.png" width="640" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjxjfURu-LYoB31aXdV5QNEaK9G3fhFobW2wXiaoojkPEi3cyRMRbcwCME9TjCSpzyuxf-W-hjW2eJ7mN57g3wdynkevkINb2z4IFv4sGeDjeTsdNSIiBQx0_VcfoSg-eR9W5XLpNzUwBc/s1600/ucp23-users.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="917" data-original-width="1600" height="366" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjxjfURu-LYoB31aXdV5QNEaK9G3fhFobW2wXiaoojkPEi3cyRMRbcwCME9TjCSpzyuxf-W-hjW2eJ7mN57g3wdynkevkINb2z4IFv4sGeDjeTsdNSIiBQx0_VcfoSg-eR9W5XLpNzUwBc/s640/ucp23-users.png" width="640" /></a></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEihYBNmOwwO79PYFGg07js_NLCcvJfbBb493Fy_LAeWpNITQtVShMV5EVe4DHDk8emPuKi0QAE_u21W06DbIXSC1yPAZ4xc30wXJmRG5-mvJCBPghMrV9yT6xl3JAJK5VlAg9sG0fhpuyY/s1600/ucp23-admin.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="917" data-original-width="1600" height="366" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEihYBNmOwwO79PYFGg07js_NLCcvJfbBb493Fy_LAeWpNITQtVShMV5EVe4DHDk8emPuKi0QAE_u21W06DbIXSC1yPAZ4xc30wXJmRG5-mvJCBPghMrV9yT6xl3JAJK5VlAg9sG0fhpuyY/s640/ucp23-admin.png" width="640" /></a></div>
<br />
To add s390x nodes, simply add them in the "Nodes" mask -- it is a simple <span style="font-family: "courier new" , "courier" , monospace;">docker swarm join</span> command on the Z nodes, run against the swarm manager running on x86.<br />
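Spelled out, this is the standard swarm worker join flow; in this sketch, the token and manager address are placeholders that the manager prints for you:

```shell
# on the x86 manager: print the join command, including the worker token
docker swarm join-token worker

# on the s390x node: run the printed command (placeholder values shown)
docker swarm join --token <worker-token> <manager-host>:2377
```

Once joined, the node shows up in the UCP "Nodes" view like any other member of the swarm.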
<br />
<div class="separator" style="clear: both; text-align: center;">
</div>
Disclaimer: beta versions were used for the screenshots; the GA versions could differ slightly.Utz Bacherhttp://www.blogger.com/profile/16434228631659450390noreply@blogger.com0tag:blogger.com,1999:blog-1252230304278490810.post-14851837773728225932017-07-24T23:55:00.004+02:002017-07-24T23:55:57.756+02:00Registry Option: SUSE PortusAn Open Source alternative to Docker Trusted Registry is <a href="http://port.us.org/" target="_blank">Portus</a> from SUSE. This is a front end to a private Open Source registry that allows for fine-grained control of registry access and content: it can manage users, teams, and namespaces (no, not the kernel ones). It can integrate with LDAP for authentication, offers an audit trail, and can be extended for security scanning.<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEij1CC5PCboIoswQupqHm0_9SUH-8EjyL4-KmlaHAgbx7Hlq2A112M5knnMpQ0hn9QfoEpD0C0qRwZuBVVAwGf19SVs57pEsAupYyddUBltY6eNA0wvhvvm_Fzw4wctVCTmO3v6ELni7qU/s1600/portus-dashboard.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" data-original-height="874" data-original-width="1442" height="387" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEij1CC5PCboIoswQupqHm0_9SUH-8EjyL4-KmlaHAgbx7Hlq2A112M5knnMpQ0hn9QfoEpD0C0qRwZuBVVAwGf19SVs57pEsAupYyddUBltY6eNA0wvhvvm_Fzw4wctVCTmO3v6ELni7qU/s640/portus-dashboard.png" width="640" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Portus dashboard with activity log</td></tr>
</tbody></table>
To play with Portus, we need:<br />
<ol>
<li>docker-compose.</li>
<li>a private registry</li>
<li>Portus</li>
</ol>
<br />
<a name='more'></a><br />
<h4>
docker-compose </h4>
There are various ways to get <span style="font-family: "courier new" , "courier" , monospace;">docker-compose</span>. If it is available in your distribution, simply install the package. If not, it can be installed through <i>python</i>'s <i>pip</i>. pip may be part of your distribution (search for <span style="font-family: "courier new" , "courier" , monospace;">python-pip</span>); if not, it is easily installed (make sure you have python installed) through:<br />
<blockquote class="tr_bq">
<span style="font-family: "courier new" , "courier" , monospace;">wget https://bootstrap.pypa.io/get-pip.py<br />python get-pip.py</span></blockquote>
Then docker-compose can be installed with<br />
<blockquote class="tr_bq">
<span style="font-family: "courier new" , "courier" , monospace;">pip install docker-compose</span></blockquote>
<h4>
A private registry</h4>
Check out <a href="http://containerz.blogspot.com/2016/07/a-private-registry-building-and-using.html">http://containerz.blogspot.com/2016/07/a-private-registry-building-and-using.html</a> to build and run a private registry.<br />
Note: Portus' latest release v2.3 uses version 2.3.1 of the open source registry. To build this specific version, simply check out version 2.3.1 instead of 2.4.1 as shown in the example.<br />
To quickly start playing with Portus, it may be sufficient to not use certificates as shown in the registry post. In that case, make sure you add<br />
<blockquote class="tr_bq">
<span style="font-family: "courier new" , "courier" , monospace;">"insecure-registries":["your-host-name:5000"]</span></blockquote>
to <i>/etc/docker/daemon.json</i>. However, always use certificates in any more serious environment, let alone production!<br />
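Note that the snippet has to sit inside the top-level JSON object, so a minimal complete <i>/etc/docker/daemon.json</i> (the hostname is a placeholder) would look like this:

```json
{
  "insecure-registries": ["your-host-name:5000"]
}
```

Restart the Docker daemon afterwards for the setting to take effect.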
<h4>
Portus</h4>
Run<br />
<blockquote class="tr_bq">
<span style="font-family: "courier new" , "courier" , monospace;">git clone https://github.com/SUSE/Portus.git<br />cd Portus/<br />git checkout v2.3</span></blockquote>
As long as the set of official images does not cover s390x, we'll have to make a few small changes to get it running smoothly, as shown in this patch:<br />
<blockquote class="tr_bq">
<span style="font-family: "courier new" , "courier" , monospace;">diff --git a/Dockerfile b/Dockerfile<br />index 4460ff1..160d580 100644<br />--- a/Dockerfile<br />+++ b/Dockerfile<br />@@ -1,4 +1,4 @@<br /><span style="color: red;">-FROM library/ruby:2.3.1</span><br /><span style="color: lime;">+FROM s390x/ruby:2.3</span><br /> MAINTAINER Flavio Castelli <fcastelli@suse.com><br /><br /> ENV COMPOSE=1<br />@@ -7,7 +7,8 @@ EXPOSE 3000<br /> WORKDIR /portus<br /> COPY Gemfile* ./<br /> RUN bundle install --retry=3 && bundle binstubs phantomjs<br /><span style="color: red;">-RUN apt-get update && \</span><br /><span style="color: lime;">+RUN echo deb http://ftp.de.debian.org/debian stretch main >> /etc/apt/sources.list && \<br />+ apt-get update && \</span><br /> apt-get install -y --no-install-recommends nodejs<br /><br /> ADD . .<br />diff --git a/docker-compose.yml b/docker-compose.yml<br />index 872d117..a4af7e4 100644<br />--- a/docker-compose.yml<br />+++ b/docker-compose.yml<br />@@ -28,12 +28,12 @@ services:<br /> - db<br /><br /> db:<br /><span style="color: red;">- image: library/mariadb:10.0.23</span><br /><span style="color: lime;">+ image: sinenomine/mariadb-s390x</span><br /> environment:<br /> MYSQL_ROOT_PASSWORD: portus<br /><br /> registry:<br /><span style="color: red;">- image: library/registry:2.3.1</span><br /><span style="color: lime;">+ image: distribution:2.3.1</span><br /> environment:<br /> - REGISTRY_AUTH_TOKEN_REALM=http://${EXTERNAL_IP}:3000/v2/token<br /> - REGISTRY_AUTH_TOKEN_SERVICE=${EXTERNAL_IP}:${REGISTRY_PORT}</span></blockquote>
Then simply run<br />
<blockquote class="tr_bq">
<span style="font-family: "courier new" , "courier" , monospace;">./compose-setup.sh -e your-host-name</span></blockquote>
which will start all the components up. You will then be able to browse to <i>http://your-host-name:3000/</i> and create the administrator login. The admin can then create additional users, assign them to teams, and define namespaces (prefix of repositories) that belong to these teams.<br />
<div class="separator" style="clear: both; text-align: center;">
</div>
<div class="separator" style="clear: both; text-align: center;">
</div>
<div class="separator" style="clear: both; text-align: center;">
</div>
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgEIB7QyONZkq3hAvo7jtb7W8PmY1tY3JapD9c06gVLuEPjSR6xfDsq29XFcT8wl9wZvgteP0N4Pvc06fT7BxmiqaGI6HU9TLjPmJ1oD4rmUCdFrjCJj7ZGSQYGX64j_YTbhmE2viMq1d4/s1600/portus-team.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" data-original-height="874" data-original-width="1442" height="386" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgEIB7QyONZkq3hAvo7jtb7W8PmY1tY3JapD9c06gVLuEPjSR6xfDsq29XFcT8wl9wZvgteP0N4Pvc06fT7BxmiqaGI6HU9TLjPmJ1oD4rmUCdFrjCJj7ZGSQYGX64j_YTbhmE2viMq1d4/s640/portus-team.png" width="640" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Defining a team</td></tr>
</tbody></table>
Eventually images can be uploaded by authorized users into these namespaces.<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiCv81BDB3XGfGt5KCOtAP5Z6LOIy4eHzesR_qp6B-9vHhw-vbso-ORRgH8hSfzEQwFAZkMlsm0veBczOqDY09oQ8ebWnPecNnjKBwzQPOoXlgbSRKGC29fIxF-9Zlbz8xFhJ1bA36554M/s1600/portus-namespace.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" data-original-height="874" data-original-width="1442" height="386" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiCv81BDB3XGfGt5KCOtAP5Z6LOIy4eHzesR_qp6B-9vHhw-vbso-ORRgH8hSfzEQwFAZkMlsm0veBczOqDY09oQ8ebWnPecNnjKBwzQPOoXlgbSRKGC29fIxF-9Zlbz8xFhJ1bA36554M/s640/portus-namespace.png" width="640" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Showing namespace details</td></tr>
</tbody></table>
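Uploading follows the usual tag-then-push pattern; in this sketch, the registry host, namespace and image names are placeholders:

```shell
# log in as a user that is a member of the team owning the namespace
docker login your-host-name:5000

# tag an existing image into the team's namespace, then push it
docker tag busybox your-host-name:5000/myteam/busybox
docker push your-host-name:5000/myteam/busybox
```

Portus then records the push in its activity log and enforces the namespace's access rules.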
To explore all the capabilities and advanced setup possibilities, explore <a href="http://port.us.org/documentation.html">http://port.us.org/documentation.html</a>.<br />
<br />
PS: If you want to play with the latest version of Portus, the tweaks for s390x are (at the time of writing) slightly different:<br />
create <i>yarn/Dockerfile</i> containing:<br />
<blockquote class="tr_bq">
<span style="font-size: xx-small;"><span style="font-family: "courier new" , "courier" , monospace;">FROM s390x/debian:sid</span><br /><span style="font-family: "courier new" , "courier" , monospace;">RUN apt-get update && apt-get install -y curl apt-transport-https \</span><br /><span style="font-family: "courier new" , "courier" , monospace;"> nodejs npm nodejs-legacy && \</span><br /><span style="font-family: "courier new" , "courier" , monospace;"> npm install --global yarn</span><br /><span style="font-family: "courier new" , "courier" , monospace;">WORKDIR /workspace</span></span></blockquote>
Then build with "<span style="font-family: "courier new" , "courier" , monospace;">docker build -t yarn yarn</span>".<br />
The changes in the Portus tree are (note <i>.env</i> can contain <i>your-host-name</i>):<br />
<blockquote class="tr_bq">
<span style="font-size: xx-small;"><span style="font-family: "courier new" , "courier" , monospace;">diff --git a/.env b/.env</span><br /><span style="font-family: "courier new" , "courier" , monospace;">index e18af26..a5667ef 100644</span><br /><span style="font-family: "courier new" , "courier" , monospace;">--- a/.env</span><br /><span style="font-family: "courier new" , "courier" , monospace;">+++ b/.env</span><br /><span style="font-family: "courier new" , "courier" , monospace;">@@ -1,2 +1,2 @@</span><br /><span style="font-family: "courier new" , "courier" , monospace;">-MACHINE_FQDN=172.17.0.1</span></span><span style="font-size: xx-small;"><span style="font-family: "courier new" , "courier" , monospace;"><br />+MACHINE_FQDN=s38lp23.boeblingen.de.ibm.com<br /> REGISTRY_PORT=5000<br />diff --git a/Dockerfile b/Dockerfile<br />index a64344e..7f8efe0 100644<br />--- a/Dockerfile<br />+++ b/Dockerfile<br />@@ -1,4 +1,4 @@<br />-FROM library/ruby:2.3.1<br />+FROM s390x/ruby:2.3<br /> MAINTAINER Flavio Castelli <fcastelli@suse.com><br /><br /> ENV COMPOSE=1<br />@@ -7,7 +7,8 @@ EXPOSE 3000<br /> WORKDIR /srv/Portus<br /> COPY Gemfile* ./<br /> RUN bundle install --retry=3<br />-RUN apt-get update && \<br />+RUN echo deb http://ftp.de.debian.org/debian stretch main >> /etc/apt/sources.list && \<br />+ apt-get update && \<br /> apt-get install -y --no-install-recommends nodejs<br /><br /> ADD . 
.<br />diff --git a/docker-compose.yml b/docker-compose.yml<br />index f34dfd6..fa470c6 100644<br />--- a/docker-compose.yml<br />+++ b/docker-compose.yml<br />@@ -3,7 +3,7 @@ version: '2'<br /> services:<br /> portus:<br /> build: .<br />- image: opensuse/portus:development<br />+ image: portus<br /> command: bash /srv/Portus/examples/development/compose/init<br /> environment:<br /> - PORTUS_MACHINE_FQDN_VALUE=${MACHINE_FQDN}<br />@@ -21,7 +21,7 @@ services:<br /> - .:/srv/Portus<br /><br /> crono:<br />- image: opensuse/portus:development<br />+ image: portus<br /> command: ./bin/crono<br /> depends_on:<br /> - portus<br />@@ -36,19 +36,19 @@ services:<br /> - db<br /><br /> webpack:<br />- image: kkarczmarczyk/node-yarn:6.9-slim<br />+ image: yarn<br /> command: bash /srv/Portus/examples/development/compose/bootstrap-webpack<br /> working_dir: /srv/Portus<br /> volumes:<br /> - .:/srv/Portus<br /><br /> db:<br />- image: library/mariadb:10.0.23<br />+ image: sinenomine/mariadb-s390x<br /> environment:<br /> MYSQL_DATABASE: portus_production<br /><br />@@ -62,7 +62,7 @@ services:<br /> - /var/lib/portus/mariadb:/var/lib/mysql<br /><br /> registry:<br />- image: library/registry:2.6<br />+ image: distribution:2.4.1<br /> environment:<br /> # Authentication<br /> REGISTRY_AUTH_TOKEN_REALM: http://${MACHINE_FQDN}:3000/v2/token</span></span></blockquote>
<blockquote>
</blockquote>
Build with "<span style="font-family: "courier new" , "courier" , monospace;">docker build -t portus Portus</span>" and start with "<span style="font-family: "courier new" , "courier" , monospace;">docker-compose up</span>".Utz Bacherhttp://www.blogger.com/profile/16434228631659450390noreply@blogger.com0tag:blogger.com,1999:blog-1252230304278490810.post-40101708749473344112017-07-24T21:16:00.000+02:002017-07-24T21:16:23.515+02:00New Docker Engine in SLES 12 Containers ModuleA couple of days ago, SUSE published a major update to the Docker engine. The <i>Containers Module</i> now offers version <i>17.04-CE</i> of the engine (<span style="font-family: "Courier New",Courier,monospace;">docker-17.04.0_ce-98.2.s390x.rpm</span>).Utz Bacherhttp://www.blogger.com/profile/16434228631659450390noreply@blogger.com0tag:blogger.com,1999:blog-1252230304278490810.post-14342737191859263682017-07-07T17:11:00.000+02:002018-05-17T13:41:00.741+02:00An Overview on s390x Base ImagesFor s390x, quite a few options for base images are available on <a href="https://hub.docker.com/" target="_blank">Docker Hub</a>/<a href="https://store.docker.com/" target="_blank">Docker Store</a> these days. They range from enterprise environments through community distributions to minimal images. This post gives an overview of what is provided by various sources.<br />
<a name='more'></a><br />
This list puts enterprise distribution options next to their community flavours, and is in alphabetical order:<br />
<ul>
<li><a href="https://alpinelinux.org/" target="_blank">Alpine Linux</a>, a minimal Linux distro (base image is 5MB in size): <a href="https://hub.docker.com/_/alpine/" target="_blank">alpine</a></li>
<li><a href="http://www.sinenomine.net/products/linux/clefos" target="_blank">ClefOS</a>, a build of <a href="https://www.centos.org/" target="_blank">CentOS</a> for s390x -- community version of Red Hat's RHEL: <a href="https://hub.docker.com/_/clefos/" target="_blank">clefos</a> -- comparable and compatible to RHEL 7</li>
<li><a href="http://www.debian.org/" target="_blank">Debian</a> is one of the major free Linux distributions: <a href="https://hub.docker.com/_/debian/" target="_blank">debian</a></li>
<li><a href="https://getfedora.org/">Fedora</a>: a community distribution and incubator for features towards Red Hat Enterprise Linux: <a href="https://hub.docker.com/_/fedora/">fedora</a></li>
<li><a href="https://www.opensuse.org/" target="_blank">openSUSE</a>: <a href="https://hub.docker.com/_/opensuse/" target="_blank">opensuse</a>. For s390x, use the tumbleweed tag ("opensuse:tumbleweed") </li>
<li><a href="https://www.redhat.com/en/technologies/linux-platforms/enterprise-linux" target="_blank">Red Hat Enterprise Linux</a> is not provided on Docker Hub/Store, so RHEL users need to create their own images at this time, e.g. as <a href="http://containerz.blogspot.com/2015/03/creating-base-images.html" target="_blank">described here</a></li>
<li><a href="https://www.suse.com/products/systemz/" target="_blank">SUSE Linux Enterprise Server</a> images are also not available on Docker Hub/Store. SLES users have two options:</li>
<ul>
<li>SUSE's containers module (which also provides the Docker engine) offers RPMs to create SLES 11 and SLES 12 base images: use <i>sle2docker </i>and <i>sles*-docker-image</i> RPMs which are provided in the containers module; see <a href="https://www.suse.com/documentation/sles-12/singlehtml/book_sles_docker/book_sles_docker.html" target="_blank">SUSE's documentation for more information</a>.</li>
<li>a <a href="http://containerz.blogspot.de/2015/03/creating-base-images.html" target="_blank">script like this</a> creates a base image based on the SLES host image</li>
</ul>
<li>Canonical's <a href="https://www.ubuntu.com/" target="_blank">Ubuntu</a> is based on Debian: <a href="https://hub.docker.com/_/ubuntu/" target="_blank">ubuntu</a></li>
</ul>
All the official images (the ones without a namespace prefix) follow the upstream work, so they will always be maintained and up to date.<br />
<br />
Note: in this list, the images' hyperlinks point to Docker Hub, which I currently find more convenient than Docker Store -- although Docker Store will probably be the future.<br />
<br />
Update 2018/3/29: replaced all the links with multi-arch images that work on s390x as well as on most other platforms.<br />
Update 2018/5/17: fedora:latest is now backed by s390x contentUtz Bacherhttp://www.blogger.com/profile/16434228631659450390noreply@blogger.com3tag:blogger.com,1999:blog-1252230304278490810.post-31237844070295976522017-06-29T22:40:00.002+02:002017-10-12T08:41:00.933+02:00Docker CE for all distributionsThe previous post mentioned that <a href="http://containerz.blogspot.com/2017/06/first-ce-for-s390x-by-docker.html" target="_blank">Docker provides CE packages for Ubuntu</a>. For users of Debian, SLES, RHEL, Fedora, ClefOS, openSUSE, and Alpine, there is still an option to get the latest Docker CE version into their environments:<br />
<a name='more'></a><a href="https://download.docker.com/linux/static/stable/s390x/">https://download.docker.com/linux/static/stable/s390x/</a> contains static builds of docker. Downloading the tarball, extracting the binaries into /usr/lib and starting them manually is sufficient to get up and running quickly. Docker <a href="https://docs.docker.com/engine/installation/linux/docker-ce/binaries/#install-static-binaries" target="_blank">documents these steps on their webpages</a>.<br />
<br />
Statically built binaries do not depend on any libraries on the host system. That makes it possible to build executables that will work on essentially any Linux flavour with a decent kernel -- which is the case for the current versions of the major distributions.<br />
<br />
To fully integrate the binaries into the Linux environment, an integration with systemd (or whatever init system your distribution of choice uses) makes sense. <a href="https://docs.docker.com/engine/admin/systemd/#manually-creating-the-systemd-unit-files">https://docs.docker.com/engine/admin/systemd/#manually-creating-the-systemd-unit-files</a> points to the corresponding systemd unit files.<br />
<br />
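As a sketch, the integration boils down to dropping the two standard unit files into the usual systemd location and enabling the service; paths and unit file contents are assumptions, so adjust them to your distribution's conventions (the privileged commands are commented out):

```shell
# Sketch only: wire the manually installed binaries into systemd.
# The unit file names are Docker's standard docker.service/docker.socket pair.
UNITS="docker.service docker.socket"
for f in $UNITS; do
  echo "would install $f to /etc/systemd/system/$f"
  # sudo cp "$f" /etc/systemd/system/      # copy the unit file in place
done
# sudo systemctl daemon-reload             # make systemd pick up the new units
# sudo systemctl enable docker.service     # start Docker on boot
# sudo systemctl start docker.service      # start it now
```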
<i><b>Update 2017/10/12</b></i>: the systemd-related page on docs.docker.com (linked above) has changed. To integrate with systemd, copy the files <span style="font-family: "Courier New",Courier,monospace;">docker.socket</span> and <span style="font-family: "Courier New",Courier,monospace;">docker.service</span> from <a href="https://github.com/moby/moby/tree/master/contrib/init/systemd">https://github.com/moby/moby/tree/master/contrib/init/systemd</a> to <span style="font-family: "Courier New",Courier,monospace;">/etc/systemd/system</span>.Utz Bacherhttp://www.blogger.com/profile/16434228631659450390noreply@blogger.com0tag:blogger.com,1999:blog-1252230304278490810.post-30684237975753170692017-06-28T23:54:00.004+02:002017-06-29T22:43:17.941+02:00First CE for s390x by DockerToday, Docker CE v17.06 has been released. It offers a series of enhancements as laid out in <a href="https://blog.docker.com/2017/06/announcing-docker-17-06-community-edition-ce/" target="_blank">their announcement blog post</a>. However, there is one more thing: the release comes with s390x binaries. Of the three major enterprise distributions supported on the mainframe, Docker offers CE only for Ubuntu; notably, Docker CE is not provided for SLES or RHEL (on any platform). Accordingly, binaries are available for Ubuntu on IBM z Systems.<br />
(Note: See the <a href="http://containerz.blogspot.com/2017/06/docker-ce-for-all-distributions.html" target="_blank">next post</a> if your distribution of choice happens to be something else than Ubuntu.)<br />
<a name='more'></a>These packages are just like on other platforms. That includes deployment, so the anchor page is <a href="https://store.docker.com/editions/community/docker-ce-server-ubuntu">https://store.docker.com/editions/community/docker-ce-server-ubuntu</a>. Detailed installation instructions are provided at <a href="https://docs.docker.com/engine/installation/linux/docker-ce/ubuntu/">https://docs.docker.com/engine/installation/linux/docker-ce/ubuntu/</a>.<br />
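In essence, the documented install comes down to adding Docker's apt repository with the s390x architecture and installing the docker-ce package. A sketch, with the Ubuntu codename hard-coded as an example (use <span style="font-family: "Courier New",Courier,monospace;">$(lsb_release -cs)</span> on a real system) and the privileged commands commented out:

```shell
# Sketch of the documented apt-based install on Ubuntu for s390x.
# The codename is an example; derive it with $(lsb_release -cs) on the target.
CODENAME="xenial"
REPO="deb [arch=s390x] https://download.docker.com/linux/ubuntu ${CODENAME} stable"
echo "$REPO"
# curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
# echo "$REPO" | sudo tee /etc/apt/sources.list.d/docker.list
# sudo apt-get update && sudo apt-get install docker-ce
```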
BTW, for the first time, <a href="https://store.docker.com/search?architecture=s390x&offering=community&q=&type=edition" target="_blank">Docker Store shows a Z logo for an s390x container</a>.Utz Bacherhttp://www.blogger.com/profile/16434228631659450390noreply@blogger.com0tag:blogger.com,1999:blog-1252230304278490810.post-51396681246072426562017-06-26T21:01:00.005+02:002017-06-30T01:45:00.581+02:00Another Base Image Option: Alpine for s390x<a href="https://alpinelinux.org/" target="_blank">Alpine Linux</a> has <a href="https://alpinelinux.org/posts/Alpine-3.6.0-released.html" target="_blank">announced their new 3.6 release</a>, which includes s390x support. Alpine is a minimal Linux distribution. Unlike most other distributions, it uses <a href="https://www.musl-libc.org/" target="_blank">musl</a> (rather than glibc) as its C runtime library, which helps keep images small.<br />
<br />
A Docker image named <a href="https://hub.docker.com/r/s390x/alpine/tags/" target="_blank">s390x/alpine</a> is available on Docker Hub and, while being a fully functional base image, is just 5 MB (!) in size:<br />
<a name='more'></a><br />
<blockquote class="tr_bq">
<span style="font-family: "courier new" , "courier" , monospace;">root@s8345001:~# docker run -ti s390x/alpine:3.6<br />Unable to find image 's390x/alpine:3.6' locally<br />3.6: Pulling from s390x/alpine<br />Digest: sha256:3297a9b30b666eb3e2a926fbbe49dbdbc60a457deb1487a0df7838a9beb02916<br />Status: Downloaded newer image for s390x/alpine:3.6<br />/ # apk update && apk add curl<br />fetch http://dl-cdn.alpinelinux.org/alpine/v3.6/main/s390x/APKINDEX.tar.gz<br />fetch http://dl-cdn.alpinelinux.org/alpine/v3.6/community/s390x/APKINDEX.tar.gz<br />v3.6.1-62-g658c65ba33 [http://dl-cdn.alpinelinux.org/alpine/v3.6/main]<br />v3.6.1-61-gc32140e9a2 [http://dl-cdn.alpinelinux.org/alpine/v3.6/community]<br />OK: 8175 distinct packages available<br />(1/4) Installing ca-certificates (20161130-r2)<br />(2/4) Installing libssh2 (1.8.0-r1)<br />(3/4) Installing libcurl (7.54.0-r0)<br />(4/4) Installing curl (7.54.0-r0)<br />Executing busybox-1.26.2-r5.trigger<br />Executing ca-certificates-20161130-r2.trigger<br />OK: 6 MiB in 15 packages<br />/ # curl containerz.blogspot.com<br /><HTML><br />[...]</span></blockquote>
The minimal size makes the entire user experience very snappy -- it's fun to work with it.<br />
A few platform-specific tools are still missing at this time, so a bootable Alpine is not yet available: for now, Docker is the best way to test drive Alpine.<br />
Credits for this image go to Bobby Bingham (musl port to s390x), <a href="http://twitter.com/tmh1999" target="_blank">Tuan Hoang</a> (port of Alpine to s390x, starting from scratch), as well as the core Alpine/Docker folks who provide Alpine itself and put it on Docker Hub/Store.Utz Bacherhttp://www.blogger.com/profile/16434228631659450390noreply@blogger.com0tag:blogger.com,1999:blog-1252230304278490810.post-8580856400312700292017-06-23T16:37:00.002+02:002017-06-30T23:11:27.970+02:00New Docker Engine Release SchemeIt's been a while since Docker changed their release naming scheme. In the past, the version looked like 1.<i>xx</i>, with <i>xx</i> counting up. After 1.13, Docker moved to a pattern that is easy to grasp and accommodates the various life cycles of free and commercial releases.<br />
<a name='more'></a>In short:<br />
<ul>
<li>Docker CE (Community Edition) is the free edition built from the Open Source code; Docker EE (Enterprise Edition) is their commercially sold engine.</li>
<li>the releases are v<i>YY</i>.<i>MM</i>, where <i>YY</i> designates the year (e.g. 17), and <i>MM</i> stands for the release month. So v17.03 means March 2017.</li>
<li>Docker EE is planned to be released every 3 months (v17.03, v17.06, v17.09, etc).</li>
<li>Docker CE has two streams: an "edge" version is released every month (v17.03, v17.04, v17.05, etc.); each version is superseded by the next month's release. The "stable" version is published every 3 months (v17.03, v17.06, etc.) and is maintained for four months, giving users a one-month overlap to upgrade.</li>
</ul>
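The quarterly cadence above can be illustrated with a small toy helper (purely for illustration, not an official tool) that computes the next stable/EE release from a given version string:

```shell
# Toy helper illustrating the vYY.MM scheme: the next quarterly release
# is simply three months later, wrapping into the next year when needed.
next_stable() {
  yy=${1%%.*}
  mm=${1#*.}; mm=${mm#0}       # strip a leading zero to avoid octal parsing
  mm=$((mm + 3))
  if [ "$mm" -gt 12 ]; then mm=$((mm - 12)); yy=$((yy + 1)); fi
  printf '%02d.%02d\n' "$yy" "$mm"
}
next_stable 17.03   # prints 17.06
next_stable 17.12   # prints 18.03
```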
Their original explanation is at <a href="https://blog.docker.com/2017/03/docker-enterprise-edition/">https://blog.docker.com/2017/03/docker-enterprise-edition/</a>.Utz Bacherhttp://www.blogger.com/profile/16434228631659450390noreply@blogger.com0tag:blogger.com,1999:blog-1252230304278490810.post-28110658778313126442017-06-22T13:18:00.000+02:002017-06-23T23:00:13.767+02:00Docker in the DatacenterAt this year's DockerCon back in April 2017, then-Docker CEO Ben Golub announced that Docker will support the mainframe (and Power).<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhDg4w9nmfjYgoL6zDPStpckY3v9B2MRkJLe9IObJKLNbJqdai0A5huiC5l4fjYCWSe_RJzjF-WluFw1UDkcWSFp5w_sZ3aNwMqUtsS0Fz_W2t1iMYQV7SrvZt8sMN5hmlV_Id4BYO9cT4/s1600/dockercon-zippy-announcement.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="900" data-original-width="1600" height="360" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhDg4w9nmfjYgoL6zDPStpckY3v9B2MRkJLe9IObJKLNbJqdai0A5huiC5l4fjYCWSe_RJzjF-WluFw1UDkcWSFp5w_sZ3aNwMqUtsS0Fz_W2t1iMYQV7SrvZt8sMN5hmlV_Id4BYO9cT4/s640/dockercon-zippy-announcement.jpg" width="640" /></a></div>
<br />
While technically this is not news, it means Docker, Inc. will support s390x -- good news, and another option besides the distributions' offerings for those looking to run Docker in production on the mainframe. More details to follow as they are announced and available.Utz Bacherhttp://www.blogger.com/profile/16434228631659450390noreply@blogger.com0tag:blogger.com,1999:blog-1252230304278490810.post-36100840066545079962017-06-21T22:53:00.000+02:002017-06-21T22:53:26.540+02:00Relaunch of Containerz BlogAfter quite some inactivity, I have decided to revive this blog. Expect to read regularly about news, how-tos, and other information related to container technology on the mainframe. A few posts are in the pipeline already.Utz Bacherhttp://www.blogger.com/profile/16434228631659450390noreply@blogger.com0