Jekyll2018-07-29T11:20:39+00:00/Blogging DevOpsPutting day to day DevOps experiences into words. Basic Lessons, Troubleshooting tips and a lot more.
basic setup of kubernetes with kops2018-07-29T03:14:49+00:002018-07-29T03:14:49+00:00/devops/kubernetes/kops/kubectl/terraform/docker/2018/07/29/basic-kops-setup<h1 id="setting-up-a-kubernetes-cluster-with-kops">Setting up a Kubernetes cluster with kops</h1>
<h2 id="preparation">Preparation</h2>
<p>In order to start you will need the following:</p>
<ul>
<li>AWS Account with an IAM user that has Admin privileges</li>
<li>Route53 Zone (e.g. dev.mysite.com)</li>
<li>S3 Bucket (e.g. s3://my-terraform-state)</li>
<li>SSH Keypair (e.g. mykey.pub)</li>
<li>Available Subnet CIDR (e.g. 10.60.0.0/16)</li>
<li>Linux client with the following installed on it:
<ul>
<li><code class="highlighter-rouge">awscli</code></li>
<li><code class="highlighter-rouge">terraform</code></li>
<li><code class="highlighter-rouge">kops</code></li>
<li><code class="highlighter-rouge">kubectl</code></li>
</ul>
</li>
</ul>
<h2 id="setting-up-the-cluster-with-kops">Setting up the Cluster with <code class="highlighter-rouge">kops</code></h2>
<p>Run <code class="highlighter-rouge">kops</code> to generate the <code class="highlighter-rouge">terraform</code> code needed to set up the cluster</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>kops create cluster --name=k8s.dev.mysite.com --state=s3://my-terraform-state --dns-zone=dev.mysite.com --out=. --target=terraform --zones="us-east-1a,us-east-1b,us-east-1c" --ssh-public-key="../mykey.pub" --topology private --networking calico
</code></pre></div></div>
<p>Expected output:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>kops has set your kubectl context to k8s.dev.mysite.com
Terraform output has been placed into .
Run these commands to apply the configuration:
cd .
terraform plan
terraform apply
</code></pre></div></div>
<p>Run the following command:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>terraform apply
</code></pre></div></div>
<p>To add a <code class="highlighter-rouge">bastion</code> server, run:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>kops create instancegroup bastions --role Bastion --subnet utility-us-east-1c --name k8s.dev.mysite.com --state=s3://my-terraform-state
kops update cluster k8s.dev.mysite.com --state=s3://my-terraform-state --yes
</code></pre></div></div>
<p>To figure out the name of the AWS ELB in front of the bastion server:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>aws elb --region=us-east-1 --output=table describe-load-balancers|grep DNSName.\*bastion|awk '{print $4}'
</code></pre></div></div>
<p>To access the bastion server:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>ssh -A admin@`aws elb --region=us-east-1 --output=table describe-load-balancers|grep DNSName.\*bastion|awk '{print $4}'`
</code></pre></div></div>
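<p>The <code class="highlighter-rouge">grep</code>/<code class="highlighter-rouge">awk</code> pipeline above can be tried out against a captured line of the table output. A sketch, using a made-up DNS name rather than a real ELB:</p>

```shell
# Hypothetical sample line from `aws elb describe-load-balancers --output=table`
sample='|  DNSName  |  bastion-k8s-dev-123456.us-east-1.elb.amazonaws.com  |'
# Whitespace-split fields: $1='|' $2='DNSName' $3='|' $4=<the DNS name>
echo "$sample" | grep 'DNSName.*bastion' | awk '{print $4}'
```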
<h2 id="setting-up-a-kubernetes-service-and-deployment">Setting up a Kubernetes Service and Deployment</h2>
<p>Let’s say you want to set up a website service running <code class="highlighter-rouge">nginx</code></p>
<p>First create a <code class="highlighter-rouge">namespace</code> so that in the future you can run multiple environments on the same cluster</p>
<p>Create a file <code class="highlighter-rouge">namespace.yaml</code>:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>apiVersion: v1
kind: Namespace
metadata:
  name: mysite-dev
</code></pre></div></div>
<p>Run the following command:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>kubectl apply -f ./namespace.yaml
</code></pre></div></div>
<p>Now let’s create a Kubernetes Service - i.e. an external Load Balancer in front of the containers running <code class="highlighter-rouge">nginx</code> that will be set up momentarily</p>
<p>Create a file: <code class="highlighter-rouge">service.yaml</code></p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>apiVersion: v1
kind: Service
metadata:
  name: mywebsite
  namespace: mysite-dev
spec:
  selector:
    app: mywebsite
  ports:
  - protocol: TCP
    port: 80
  type: LoadBalancer
</code></pre></div></div>
<p>Run:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>kubectl -n mysite-dev apply -f ./service.yaml
</code></pre></div></div>
<p>Check your AWS account for a new Load Balancer with two instances attached - currently unhealthy</p>
<p>Create a <code class="highlighter-rouge">deployment.yaml</code> file:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: mywebsite
  namespace: mysite-dev
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: mywebsite
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
</code></pre></div></div>
<p>Run:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>kubectl -n mysite-dev apply -f deployment.yaml
</code></pre></div></div>
<h2 id="verification">Verification</h2>
<ol>
<li>Check the <code class="highlighter-rouge">nginx</code> pods have come up by running this command:</li>
</ol>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>kubectl -n mysite-dev get pods
</code></pre></div></div>
<p>At first the pods will appear in status <code class="highlighter-rouge">ContainerCreating</code> and eventually <code class="highlighter-rouge">Running</code></p>
<p>E.g.</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>NAME READY STATUS RESTARTS AGE
mywebsite-5fbc6664-b26nf 1/1 Running 0 4m
mywebsite-5fbc6664-n25kl 1/1 Running 0 4m
</code></pre></div></div>
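<p>A quick scripted check of that output - counting rows whose STATUS column is not <code class="highlighter-rouge">Running</code> - might look like the sketch below, fed here with the sample output from above rather than a live cluster (in practice you would pipe in <code class="highlighter-rouge">kubectl -n mysite-dev get pods --no-headers</code>):</p>

```shell
# Sample `kubectl get pods` rows (header stripped); stand-in for live output
sample='mywebsite-5fbc6664-b26nf 1/1 Running 0 4m
mywebsite-5fbc6664-n25kl 1/1 Running 0 4m'
# Column 3 is STATUS; count rows that are not yet Running
not_running=$(echo "$sample" | awk '$3 != "Running"' | wc -l)
echo "pods not Running: $not_running"   # 0 once every pod is up
```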
<ol>
<li>
<p>Check the AWS Service Load Balancer has two instances <code class="highlighter-rouge">InService</code></p>
</li>
<li>
<p>Open the external URL of the ELB on port 80 - you should get the default nginx page</p>
</li>
</ol>Setting up a Kubernetes cluster with kopsbasic setup of kubeadm on Ubuntu servers2018-05-02T03:14:49+00:002018-05-02T03:14:49+00:00/devops/kubernetes/kubeadm/kubectl/ubuntu/docker/2018/05/02/basic-kubeadm-setup<h1 id="kubeadm-basic-setup"><code class="highlighter-rouge">kubeadm</code> basic setup</h1>
<h2 id="hardware">‘Hardware’</h2>
<ul>
<li>Master - PC with Ubuntu 18.04 LTS</li>
<li>Node - VirtualBox with Ubuntu 16.04 LTS</li>
</ul>
<h2 id="kubeadm-installation"><code class="highlighter-rouge">kubeadm</code> installation</h2>
<p>On both Master and Node:</p>
<p>As <code class="highlighter-rouge">root</code></p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>apt-get update
apt-get update && apt-get install -y apt-transport-https curl
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
</code></pre></div></div>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb http://apt.kubernetes.io/ kubernetes-xenial main
EOF
</code></pre></div></div>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>apt-get update
apt-get install -y kubelet kubeadm kubectl
</code></pre></div></div>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>swapoff -a
</code></pre></div></div>
<p>On the Master only (run these after <code class="highlighter-rouge">kubeadm init</code>, below, has created <code class="highlighter-rouge">/etc/kubernetes/admin.conf</code>):</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
</code></pre></div></div>
<h1 id="kubeadm-initialization"><code class="highlighter-rouge">kubeadm</code> initialization</h1>
<h2 id="reset">reset</h2>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>sudo kubeadm reset
</code></pre></div></div>
<h2 id="setting-up-kubeadm-network-with-flannel">setting up kubeadm network with flannel</h2>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>sudo kubeadm init --pod-network-cidr=10.244.0.0/16
</code></pre></div></div>
<p>Should result in something like</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>Your Kubernetes master has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
You can now join any number of machines by running the following on each node
as root:
kubeadm join 10.77.64.37:6443 --token yeg7ui.63ypi932kvkwwp11 --discovery-token-ca-cert-hash sha256:d65f12db10f8dcb4ce8339b1771e409b40e74309eda57eb0bf3a201f19ff5af6
</code></pre></div></div>
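<p>The <code class="highlighter-rouge">kubeadm join</code> line at the bottom of that log is needed later on each node. One way to capture it is to save the init output and grep it back out - a sketch, using values from the log above:</p>

```shell
# $init_log stands in for saved `kubeadm init` output
init_log='You can now join any number of machines by running the following on each node
as root:
kubeadm join 10.77.64.37:6443 --token yeg7ui.63ypi932kvkwwp11 --discovery-token-ca-cert-hash sha256:d65f12db10f8dcb4ce8339b1771e409b40e74309eda57eb0bf3a201f19ff5af6'
# Lines starting with "kubeadm join" hold the command to run on each node
printf '%s\n' "$init_log" | grep '^kubeadm join'
```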
<p>Run the commands needed to run the cluster:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code> mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
</code></pre></div></div>
<h2 id="add-node">Add Node</h2>
<p>On the Node server (VirtualBox with Ubuntu 16.04 LTS)</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>kubeadm join 10.77.64.37:6443 --token yeg7ui.63ypi932kvkwwp11 --discovery-token-ca-cert-hash sha256:d65f12db10f8dcb4ce8339b1771e409b40e74309eda57eb0bf3a201f19ff5af6
</code></pre></div></div>
<h1 id="network-setup---flannel">Network setup - flannel</h1>
<h2 id="apply-network---kube-flannel">Apply network - kube flannel</h2>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/v0.9.1/Documentation/kube-flannel.yml
</code></pre></div></div>
<h2 id="checking-the-cluster-is-up">Checking the cluster is up</h2>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>➜ sudo kubectl get nodes
</code></pre></div></div>
<p>Should result in something like</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>NAME STATUS ROLES AGE VERSION
gilad-thinkpad-t470 Ready master 15m v1.10.2
gilad-virtualbox Ready <none> 12m v1.10.2
</code></pre></div></div>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>➜ sudo kubectl get pods --all-namespaces
</code></pre></div></div>
<p>Should result in something like</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system etcd-gilad-thinkpad-t470 1/1 Running 0 14m
kube-system kube-apiserver-gilad-thinkpad-t470 1/1 Running 0 14m
kube-system kube-controller-manager-gilad-thinkpad-t470 1/1 Running 0 14m
kube-system kube-dns-86f4d74b45-tncrk 3/3 Running 0 15m
kube-system kube-flannel-ds-49w8w 1/1 Running 0 4m
kube-system kube-flannel-ds-6trvl 1/1 Running 0 4m
kube-system kube-proxy-fl8cv 1/1 Running 0 12m
kube-system kube-proxy-n4rsv 1/1 Running 0 15m
kube-system kube-scheduler-gilad-thinkpad-t470 1/1 Running 0 14m
</code></pre></div></div>
<h1 id="sanity-test">Sanity test</h1>
<h2 id="create-simple-pod">Create simple pod</h2>
<p>Create <code class="highlighter-rouge">yaml</code> file - e.g. <code class="highlighter-rouge">just-nginx.yaml</code></p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>apiVersion: v1
kind: Pod
metadata:
  name: my-nginx-pod
spec:
  containers:
  - name: nginx-container
    image: nginx
</code></pre></div></div>
<p>Run the container</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>sudo kubectl create -f just-nginx.yaml
</code></pre></div></div>
<p>Then describe it</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>➜ sudo kubectl describe pod my-nginx-pod
</code></pre></div></div>
<p>You should get something like this:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>Name: my-nginx-pod
Namespace: default
Node: gilad-virtualbox/10.77.64.158
Start Time: Wed, 02 May 2018 12:08:13 +0300
Labels: <none>
Annotations: <none>
Status: Running
IP: 10.244.1.28
Containers:
nginx-container:
Container ID: docker://a75f1acc8984a32a7c09b4ee8c2b037bece4cc446037c26fba0e38d3219c9d45
Image: nginx
Image ID: docker-pullable://nginx@sha256:0fb320e2a1b1620b4905facb3447e3d84ad36da0b2c8aa8fe3a5a81d1187b884
Port: <none>
Host Port: <none>
State: Running
Started: Wed, 02 May 2018 12:08:25 +0300
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-rt7c6 (ro)
Conditions:
Type Status
Initialized True
Ready True
PodScheduled True
Volumes:
default-token-rt7c6:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-rt7c6
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 3m default-scheduler Successfully assigned my-nginx-pod to gilad-virtualbox
Normal SuccessfulMountVolume 3m kubelet, gilad-virtualbox MountVolume.SetUp succeeded for volume "default-token-rt7c6"
Normal Pulling 3m kubelet, gilad-virtualbox pulling image "nginx"
Normal Pulled 2m kubelet, gilad-virtualbox Successfully pulled image "nginx"
Normal Created 2m kubelet, gilad-virtualbox Created container
Normal Started 2m kubelet, gilad-virtualbox Started container
</code></pre></div></div>
<p>Note the <strong>IP</strong></p>
<p>Go to the web server <code class="highlighter-rouge">http://<IP></code></p>
<p>E.g. <a href="http://10.244.1.28">http://10.244.1.28</a></p>kubeadm basic setupAnsible - Role for toggling Pagerduty Service Maintenance window2017-12-03T03:14:49+00:002017-12-03T03:14:49+00:00/devops/ansible/monitoring/pagerduty/alerts/2017/12/03/ansible-role-pagerduty<h1 id="bring-your-service-in-pagerduty-into-maintenance-mode-and-out-of-it---in-an-ansible-role">Bring your Service in Pagerduty into maintenance mode, and out of it - in an <code class="highlighter-rouge">ansible</code> role</h1>
<p>See Ansible Galaxy <a href="https://galaxy.ansible.com/giladju/pagerduty_maintenance/">here</a></p>
<p>Or go directly to the git <a href="https://github.com/giladju/pagerduty_maintenance">repo</a></p>Bring your Service in Pagerduty into maintenance mode, and out of it - in an ansible roleAnsible - Task looping over dictionary tip2017-10-19T03:14:49+00:002017-10-19T03:14:49+00:00/devops/ansible/2017/10/19/ansible-looping-over-dictionary-and-other-tips<h2 id="ansible-tip---looping-over-dictionaries-example-with-aws-launch-configurations">Ansible tip - Looping over dictionaries, example with AWS Launch Configurations</h2>
<h3 id="prerequisites">Prerequisites</h3>
<p>The tip in this post assumes you are working with Ansible AWS auto-discovery
See the documentation <a href="http://docs.ansible.com/ansible/latest/intro_dynamic_inventory.html">here</a></p>
<h3 id="scenario">Scenario</h3>
<p>In this example we have a system that works solely with AWS Auto Scale Groups, in effect the list of Launch Configurations (<code class="highlighter-rouge">lcs</code>) holds the list of services:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>lcs:
- role: frontend
- role: management
- role: backend
- role: algo
- role: zookeeper
</code></pre></div></div>
<p>The <code class="highlighter-rouge">frontend</code> and the <code class="highlighter-rouge">management</code> services are customer facing and will run behind an ELB, while the <code class="highlighter-rouge">backend</code> and <code class="highlighter-rouge">algo</code> services will run internally and with an Application Load Balancer.
The <code class="highlighter-rouge">zookeeper</code> service will run internally, but not behind any load balancer</p>
<p>So the detailed <code class="highlighter-rouge">lcs</code> dictionary looks like this:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>lcs:
- role: frontend
  subnet: private
  type: t2.micro
  volumes:
  - device_name: /dev/sda1
    volume_size: 8
  groups: [allow_vpc, allow_ssh]
  elb_location: "public-{{ envname }}"
  elb: True
- role: management
  subnet: private
  type: t2.micro
  volumes:
  - device_name: /dev/sda1
    volume_size: 8
  groups: [allow_vpc, allow_ssh]
  elb_location: "public-{{ envname }}"
  elb: True
- role: backend
  priority: "1"
  subnet: private
  type: t2.micro
  volumes:
  - device_name: /dev/sda1
    volume_size: 8
  groups: [allow_vpc, allow_ssh]
  alb_location: "private-{{ envname }}"
  alb: True
- role: algo
  priority: "1"
  subnet: private
  type: t2.micro
  volumes:
  - device_name: /dev/sda1
    volume_size: 8
  groups: [allow_vpc, allow_ssh]
  alb_location: "private-{{ envname }}"
  alb: True
- role: zookeeper
  subnet: private
  type: t2.micro
  volumes:
  - device_name: /dev/sda1
    volume_size: 8
  zk: True
</code></pre></div></div>
<h3 id="ansible-role-tasks-looping-through-the-above-dictionary">Ansible Role Tasks looping through the above dictionary</h3>
<p>In order to create the corresponding Launch Configurations, the following three loops need to be created.
Note the filtering of the dictionary using <code class="highlighter-rouge">|</code> and <code class="highlighter-rouge">selectattr</code></p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>- name: create customer facing services launchconfigurations
  with_items: "{{ lcs|selectattr('elb', 'defined')|selectattr('elb')|list }}"
  ec2_lc:
    name: "LC-{{ envname }}-{{ item.role }}"
    .
    .
- name: create internal services launchconfigurations
  with_items: "{{ lcs|selectattr('alb', 'defined')|selectattr('alb')|list }}"
  ec2_lc:
    name: "LC-{{ envname }}-{{ item.role }}"
    .
    .
- name: create zookeeper launchconfiguration
  with_items: "{{ lcs|selectattr('zk', 'defined')|selectattr('zk')|list }}"
  ec2_lc:
    name: "LC-{{ envname }}-{{ item.role }}"
    .
    .
</code></pre></div></div>
<p>So assuming our Environment name is <code class="highlighter-rouge">staging</code> we will get 5 Launch Configurations:</p>
<ul>
<li><code class="highlighter-rouge">LC-staging-frontend</code></li>
<li><code class="highlighter-rouge">LC-staging-management</code></li>
<li><code class="highlighter-rouge">LC-staging-backend</code></li>
<li><code class="highlighter-rouge">LC-staging-algo</code></li>
<li><code class="highlighter-rouge">LC-staging-zookeeper</code></li>
</ul>Ansible tip - Looping over dictionaries, example with AWS Launch ConfigurationsAnsible - Finding the server size from Dynamic Inventory variables2017-08-30T03:14:49+00:002017-08-30T03:14:49+00:00/devops/ansible/2017/08/30/finding-the-server-size-from-dynamic-inventory-variables<h2 id="ansible-tip---finding-the-server-size-from-dynamic-inventory-variables">Ansible tip - finding the server size from Dynamic Inventory variables</h2>
<h3 id="prerequisites">Prerequisites</h3>
<p>The tip in this post assumes you are working with Ansible AWS auto-discovery
See the documentation <a href="http://docs.ansible.com/ansible/latest/intro_dynamic_inventory.html">here</a></p>
<h3 id="scenario">Scenario</h3>
<p>In case you would like to “discover” the size of the server that ansible is working on in the playbook, and set a variable accordingly, you should do the following:</p>
<p>Add to your tasks in the role a <code class="highlighter-rouge">set_fact</code></p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>- name: set_fact ec2 server type
  set_fact:
    ec2_server_type: "{{ vars.hostvars[ansible_host].ec2_instance_type | replace('.','_') }}"
</code></pre></div></div>
<p>The above finds the current <code class="highlighter-rouge">ansible_host</code>’s <code class="highlighter-rouge">ec2_instance_type</code></p>
<p>Note the <code class="highlighter-rouge">replace</code> at the end</p>
<p>This comes to replace the <em>dot</em> with an <em>underscore</em> for better searching of dictionary variables - see next post</p>Ansible tip - finding the server size from Dynamic Inventory variablesAnsible - creating a list of IPs and ports2017-05-30T03:14:49+00:002017-05-30T03:14:49+00:00/devops/ansible/2017/05/30/ansible-list-of-IPs-ports<h2 id="ansible-tip---creating-a-list-of-ipsports-with-jinja2-loops">Ansible tip - creating a list of IPs:Ports with Jinja2 loops</h2>
<p>While working on setting up an <code class="highlighter-rouge">ansible-playbook</code> for a Kafka cluster, a few challenges came up. One of them: setting the <code class="highlighter-rouge">zookeeper.connect</code> value in the Kafka server configuration file: <code class="highlighter-rouge">"/server.properties"</code></p>
<p>The line should look something like this:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>zookeeper.connect=kafkaserver1:2181,kafkaserver2:2181,kafkaserver3:2181
</code></pre></div></div>
<h2 id="jinja2-loop">jinja2 loop</h2>
<p>The simple loop will look something like this:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>{% for host in groups['kafkaservers'] %}
{{ host }}:2181,
{% endfor %}
</code></pre></div></div>
<p>The issue here is that you will get a redundant comma at the end of the line, like so:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>zookeeper.connect=kafkaserver1:2181,kafkaserver2:2181,kafkaserver3:2181,
</code></pre></div></div>
<p>In order to resolve this we will use <code class="highlighter-rouge">if not loop.last</code></p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>{% for host in groups['kafkaservers'] %}
{{ host }}:2181
{% if not loop.last %}
,
{% endif %}
{% endfor %}
</code></pre></div></div>
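<p>The same no-trailing-comma logic can be sketched in plain shell for comparison, joining with a separator only once the list is non-empty (host names are the examples from above):</p>

```shell
hosts="kafkaserver1 kafkaserver2 kafkaserver3"
list=""
for h in $hosts; do
  # ${list:+,} expands to "," only when $list is already non-empty,
  # so no comma is emitted before the first element - the shell
  # equivalent of Jinja2's `if not loop.last` guard
  list="${list}${list:+,}${h}:2181"
done
echo "zookeeper.connect=${list}"
# -> zookeeper.connect=kafkaserver1:2181,kafkaserver2:2181,kafkaserver3:2181
```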
<h2 id="ansible-example-task">Ansible example task</h2>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>- name: Set zookeeper.connect in server.properties
  lineinfile:
    path: "{{ kafka_server_config_file_path }}"
    regexp: "^zookeeper.connect"
    line: "zookeeper.connect={% for host in groups['kafkaservers'] %}{{ host }}:2181{% if not loop.last %},{% endif %}{% endfor %}"
  become: yes
</code></pre></div></div>Ansible tip - creating a list of IPs:Ports with Jinja2 loopsContinuous Integration with AWS tools only2017-04-10T03:14:49+00:002017-04-10T03:14:49+00:00/devops/%22continuous/integration%22/%22aws/codepipeline%22/2017/04/10/ci-with-aws-codepipeline<h2 id="the-goal">The Goal</h2>
<p>Working in a professional service organization, one might be called to the helm to help out with a task for the common good, in this case setting up a build process for a straightforward <code class="highlighter-rouge">node.js</code> project.</p>
<p>Being familiar with setting up a build and deploy process with Jenkins, this would be the quickest way to go. But the goal is to learn and bring in to the company some knowhow. So together with the goal of saving a few $ we opted to go with AWS tools.</p>
<p>Presumably it would be straightforward to set up a build and deploy process from git to running server(s) with CodePipeline.</p>
<p>So here are my experiences so far</p>
<h2 id="assumptions">Assumptions</h2>
<p>As the saying goes, to <em>assume</em> is making an <em>ass</em> of <em>u</em> and <em>me</em> …</p>
<p>So I assumed that the process of getting code from <a href="https://github.com">GitHub</a> to AWS is pretty similar to the way Jenkins accesses the code, i.e. <code class="highlighter-rouge">ssh</code></p>
<p>I also assumed, correctly it seems, that there is a build process and a deployment process. But I’m jumping ahead of myself</p>
<h2 id="initial-setup---aws-and-github">Initial setup - AWS and Github</h2>
<p>It was great to see that AWS can pull code from Github.</p>
<p><img src="/assets/aws-github.png" alt="" /></p>
<p>But when I pulled down the list of git repos I can access, I only saw my private repos, not the company’s. Not until I was made Admin of the repo <code class="highlighter-rouge">organization/repo</code> was I able to see it on the list.</p>
<p>This will do for now, but truthfully it is a show-stopper. Needing a private GitHub account in order to access a git repo is bad enough, but requiring the user to be an <code class="highlighter-rouge">admin</code> opens the door to error - probably a deterrent for a lot of Build Managers, and certainly their leads.</p>
<h2 id="moving-on---building">Moving on - Building</h2>
<p>Considering there is not much of a build in this case, i.e. the only build step we need is:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>tar cvfz build-`date +%Y%m%d-%H%M`.tgz .
</code></pre></div></div>
<p>And the resulting artifact would be: <code class="highlighter-rouge">build-*.tgz</code></p>
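<p>A fuller sketch of that packaging step, run against a throwaway project tree (the directory name and file contents here are placeholders, not part of the real project):</p>

```shell
# Throwaway project tree to package (placeholder content)
mkdir -p demo-app && echo 'console.log("hello")' > demo-app/index.js
# Timestamped artifact name, as in the build step above
artifact="build-$(date +%Y%m%d-%H%M).tgz"
tar czf "$artifact" demo-app
# List the archive contents to verify the artifact
tar tzf "$artifact"
# Clean up the scratch files
rm -rf demo-app "$artifact"
```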
<p>I thought this step would go smoothly.</p>
<p>It didn’t</p>
<p>It seems we need an <em>IAM</em> role, specifically a <em>Service Role</em>. My AWS user does not have permissions to create (or list) IAM roles, so I need an Admin to create a role for the build process and then grant me Listing permission on IAM roles.</p>
<p>There is a lot to be said regarding limiting access in AWS, but it would be nice to have a pre-set IAM Role/Policy that allows a non-admin user to actually complete a CodeBuild - CodeDeploy/Elastic-Beanstalk build/deploy process</p>
<p>So this is where I am stuck for now, waiting for my friendly Admin to wake up …</p>
<h2 id="building---after-getting-permissions">Building - after getting permissions</h2>
<p>Well, only after getting AWS Admin permissions could I continue.</p>
<p>So now the <code class="highlighter-rouge">AWS CodePipeline</code> has the first two of three Continuous Deployment steps in place:</p>
<ul>
<li>Source</li>
<li>Build</li>
</ul>
<p>Took a bit of documentation reading, but in the end, a basic “build” step was put in place that does the basics of packing all the <code class="highlighter-rouge">node.js</code> code in a zip file</p>
<h2 id="deployment">Deployment</h2>
<p>The Deployment step requires you setup an <code class="highlighter-rouge">AWS ElasticBeanstalk</code> Application and Environment. Pretty trivial for someone familiar with AWS EB, but might require a bit of tinkering for someone new to the module.</p>
<h2 id="conclusion">Conclusion</h2>
<p><img src="/assets/aws-pipeline.png" alt="" /></p>
<p>First of all, one has to admit, it works - we now have a Continuous Integration Pipeline up and running, without redundant servers (i.e. Jenkins).</p>
<p>I.e. completely based on AWS modules, and minimum cost, presumably only paying extra for the Build time (above the cost of the environment servers)</p>
<h2 id="open-issues">Open issues</h2>
<ol>
<li>The integration with GitHub required my user be an Admin on the specific git repo. Considering this is my private github account that is part of a company Continuous Integration process - this is a big no no</li>
<li>The build process is pretty basic and has no cleanup process - i.e. old artifacts will remain in S3 until someone goes in and cleans them up</li>
<li>The “deployment” process in Elastic Beanstalk has no mechanism of “pausing” the environment so as not to waste money in EC2 uptime</li>
</ol>The GoalDevops insights from getting your cable tv to work2017-02-20T09:14:49+00:002017-02-20T09:14:49+00:00/devops/learn/tips/troubleshooting/2017/02/20/insights-from-getting-my-cable-tv-to-work<h2 id="background">Background</h2>
<blockquote>
<p>I will not fix your computer</p>
</blockquote>
<p>Says the <a href="http://www.thinkgeek.com/product/388b/">ThinkGeek t-shirt</a></p>
<p>Us devops dudes are required to fix all the computer issues for the extended family, and quite rightly, as we are most qualified to figure out what has gone wrong on the PC, home network, or excel sheet. Well, not the excel sheet, but you get the idea.</p>
<p>Sarcasm aside, I recently encountered issues with our home Internet-based cable TV provider, and resolving these issues brought me to write this post.</p>
<p>Recently one of the Israeli cellular providers, namely Cellcom, has branched into Home TV providing, they have an interesting solution that is based on a combination of DVB-T and Internet connectivity.</p>
<p>My network setup at home is based on an ADSL connection, a router with Wifi for the ground floor, and a connection to the top floor through power line adapters.</p>
<p>On the bottom floor the Cellcom box was connected directly to the ADSL router and on the top floor the second Cellcom box is connected to a router that is connected to the power line adapter.</p>
<p>One more important piece of information is that the telephone line over which the ADSL is carried on the bottom floor was added after we moved into the house, and goes through a few junction boxes from the main telephony switch at the house entrance.</p>
<p>At first both Cellcom boxes worked well. The VOD feature worked fine on the box downstairs near the ADSL router and on the one upstairs.</p>
<p>At some stage, the top floor cellcom box stopped working well over the Internet, the DVB-T reception remained, and the Internet connection test, on the box, showed Good connectivity.</p>
<h2 id="getting-support">Getting support</h2>
<p>I called Cellcom support, to their credit they answer pretty quickly and are very obliging. The tech-support guy claimed my Internet connection was under par. Cellcom requires we have a 40Mbps connection at least - we have a 100Mbps connection. After a bit of negotiation I was convinced to take the problematic Cellcom box downstairs and switch it with the one that was working.</p>
<p>At this stage, the tech support escalated the issue and called in his supervisor.</p>
<p>On my side I realized two things.</p>
<ul>
<li>The box originally from the top floor was not operating well, i.e. no VOD reception, while the other had VOD working on it, with no problems</li>
<li>The Cellcom tech observation that our Internet connection was under par was correct, running a speed test on a computer connected directly to the ADSL router was showing 16Mbps, when I was expecting something closer to the 100Mbps we are paying for.</li>
</ul>
<h2 id="solutions">Solutions</h2>
<p>After reconnecting with Cellcom support, this time with a more senior tech, we managed to reset the connection of the 2nd Cellcom box. To be frank, I’m not sure what was done, if it was a reset on the server side, or a remote reset on the client side, but either way, the problem was resolved. Meaning the top-floor box was now showing VOD content as well as the bottom-floor box.
First problem solved.</p>
<p>I was still getting poor Internet connectivity, and as there are more end users at home - as you might imagine, mainly my boys playing endlessly on Minecraft - that needed to be resolved as well.</p>
<p>If you recall, I mentioned above that the actual phone line on the ground floor was a relatively recent addition, so I thought of switching the ADSL router and second router. The top floor had a telephony socket that seemed better connected, so I moved the ADSL router there, and through the same power line adapters connected it downstairs to the ground-floor router.</p>
<p>Running speed-tests on computers connected to both routers yielded much better speeds, and the whole house now has much better Internet connectivity.</p>
<h2 id="insights">Insights</h2>
<p>So what does all of this have to do with Devops?</p>
<p>Troubleshooting takes a major role in getting the job done in our field. Figuring out what is the problem can be tricky, and doing it right leads to a swifter solution.</p>
<p><strong>Get support</strong></p>
<p>There is a lot to know in our field, and asking for help is part of the game. For recent tasks I was asked to resolve, I approached the tech support of AWS, Atlassian and many more. Their answers were always helpful and got me to solve the issues swiftly.</p>
<p><strong>Don’t Assume</strong></p>
<p>There is the famous saying:</p>
<blockquote>
<p>When you assume you make an <em>Ass</em> of <em>U</em> and <em>Me</em></p>
</blockquote>
<p>Base your conclusions on your findings, not your gut feelings.
Many years back, a senior engineer told me that 90% of Linux issues are issues of Permissions and Ownership. The same goes for network connectivity issues in AWS: you’d best start looking for solutions in the Security Groups. These insights are all well and good, and come from a lot of hard-earned experience, but still, keep your mind open.</p>
<p>In this case, there was one symptom, i.e. the VOD feature not working. Since I knew the Internet connection was poor, I could have assumed that was the cause, but the underlying circumstances were not identical, which brings me to the second insight:</p>
<p><strong>A/B tests</strong></p>
<p>Only after bringing the box to the same position in the home network was I able to say to the Cellcom rep:</p>
<blockquote>
<p>Look, box one working, box two not working</p>
</blockquote>
<p>A works
B doesn’t</p>
<p>When you are troubleshooting an issue, find out when the issue arose, check if you can run the system <em>without</em> the issue - and see what changed.</p>
<p><strong>Follow through</strong></p>
<p>Any tennis coach will tell you that you need to follow through on your swing. The same applies here: so you started off with one symptom (no VOD) and discovered another underlying issue (poor Internet connectivity)?
Resolve them both, don’t leave an open issue dangling.</p>BackgroundSetting up SSH Jump server2017-02-11T09:14:49+00:002017-02-11T09:14:49+00:00/devops/learn/tips/ssh/2017/02/11/setting-up-jump-server<h2 id="introduction">Introduction</h2>
<p>Working with remote servers frequently, you will need to gain access to them quickly and easily. But what do you do when security constraints prevent access to the server directly from your workstation?</p>
<p>In many cases the Security department will not allow you to keep private keys on your servers in the cloud, so the user <code class="highlighter-rouge">gilad</code> on the jump server must not have any private keys:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>[gilad@jump ~]$ ls .ssh
authorized_keys known_hosts
</code></pre></div></div>
<p>The solution is SSH agent forwarding through the Jump server in your data center / cloud network.</p>
<p><img src="/assets/ssh-jump-host.png" alt="" /></p>
<h2 id="adding-keys-to-the-ssh-agent">Adding keys to the ssh-agent</h2>
<p>On your PC, do the following:</p>
<p>In order for the <code class="highlighter-rouge">ssh-agent</code> to handle the forwarding of your keys, you will need to <code class="highlighter-rouge">add</code> them.</p>
<p>Add the following to your <code class="highlighter-rouge">~/.bashrc</code> (or the corresponding init script of your user)</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>ssh-add ~/.ssh/server.pem
ssh-add ~/.ssh/id_rsa
ssh-add ~/.ssh/jumphost-key.pem
ssh-add ~/.ssh/test-key.pem
</code></pre></div></div>
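<p>You can verify that the agent actually holds your keys with <code class="highlighter-rouge">ssh-add -l</code>. A minimal sketch, using a throwaway key generated on the spot purely for illustration (in real use you would <code class="highlighter-rouge">ssh-add</code> your own keys as above):</p>

```shell
# Make sure an agent is available in this shell
eval "$(ssh-agent -s)" > /dev/null

# Generate a throwaway key, add it, then list the agent's identities;
# every key you ssh-add should appear in the listing
ssh-keygen -t rsa -b 2048 -N '' -f /tmp/demo-key -q
ssh-add /tmp/demo-key 2> /dev/null
ssh-add -l
```

<p>If the listing shows “The agent has no identities.”, the <code class="highlighter-rouge">ssh-add</code> calls in your init script did not run in this session.</p>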
<h2 id="configure-your-ssh-client-to-forward-keys-when-accessing-the-jump-host">Configure your ssh client to Forward keys when accessing the jump host</h2>
<p>Still on your PC:</p>
<p>Add the following to the <code class="highlighter-rouge">~/.ssh/config</code></p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>Host jump
HostName jump.domain.com
# jump_host_user
User gilad
ForwardAgent yes
</code></pre></div></div>
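<p>As a side note, OpenSSH 7.3 and later also support <code class="highlighter-rouge">ProxyJump</code>, which tunnels through the jump host in a single command and keeps all private keys on your workstation. A sketch using the same hypothetical hosts:</p>

```
Host inner
    HostName 192.168.0.144
    User ubuntu
    ProxyJump jump
```

<p>With this in place, <code class="highlighter-rouge">ssh inner</code> (or the one-off form <code class="highlighter-rouge">ssh -J jump ubuntu@192.168.0.144</code>) reaches the internal server directly.</p>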
<h2 id="make-sure-your-public-key-is-placed-correctly-on-the-jump-server">Make sure your public key is placed correctly on the Jump server</h2>
<p>To generate a public key from your private key:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>ssh-keygen -y -f ~/.ssh/jumphost-key.pem
</code></pre></div></div>
<p>You should get a public key:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCQwxmLonT5JrZUklPCm9N2PycyhJiGioNwfMLgsA2OYqI9ndoMj7eNK4yH3r32M4cBFgG8Y3Nw9hLhAXIA2GfuKSiSfdGepAn6Un/zm1j4LwKZGA/1wdekhIL8pmkNdLZU/N4iAdAvZJ3WPFqaLmFlz7t9AuoPodCF7dPFStBPBcxys17GruxhqnCeoXxjs59P1MsOmucu2dU85yfbKDEinVxuHI5mfH+AEm0zB2GZdBnUUs1gFmm7VT743ELINjVGF36zrtQZUj90ZxirQtfhdJrGjW83hrvlY+6ACuGZcuAGiOm0BhT6LTaUHUU4l0AziWTWgbPzEITQyGQ16hmR
</code></pre></div></div>
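<p>To be confident the derived public key really belongs to the private key, compare fingerprints: <code class="highlighter-rouge">ssh-keygen -l</code> prints the same fingerprint for a private key and for its matching public key. A sketch with a throwaway key (substitute your real key paths):</p>

```shell
# Throwaway keypair purely for illustration
ssh-keygen -t rsa -b 2048 -N '' -f /tmp/jumphost-key -q

# Fingerprint derived from the private key, and fingerprint of the
# public key file; the two must match
priv=$(ssh-keygen -lf /tmp/jumphost-key | awk '{print $2}')
pub=$(ssh-keygen -lf /tmp/jumphost-key.pub | awk '{print $2}')
[ "$priv" = "$pub" ] && echo "fingerprints match"
```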
<p>On the Jump server:</p>
<p>Append the public key to the <code class="highlighter-rouge">authorized_keys</code> file in the <code class="highlighter-rouge">.ssh</code> folder of the user’s home directory.</p>
<p>Place the public keys on the rest of the servers as defined in this table:</p>
<table>
<thead>
<tr>
<th>user</th>
<th>server</th>
<th>public key of</th>
</tr>
</thead>
<tbody>
<tr>
<td><code class="highlighter-rouge">gilad</code></td>
<td>jump.domain.com</td>
<td><em>jumphost-key</em></td>
</tr>
<tr>
<td><code class="highlighter-rouge">ubuntu</code></td>
<td>192.168.0.144</td>
<td><em>test-key</em></td>
</tr>
<tr>
<td><code class="highlighter-rouge">ubuntu</code></td>
<td>172.16.1.123</td>
<td><em>server</em></td>
</tr>
</tbody>
</table>
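<p>Keep in mind that <code class="highlighter-rouge">sshd</code> is strict about permissions: it will ignore an <code class="highlighter-rouge">authorized_keys</code> file whose directory or file permissions are too loose. A sketch of appending a key safely on each server (the key text here is a placeholder):</p>

```shell
# Run as the target user on each server; both the directory and the
# file need strict permissions or sshd will refuse to use them
mkdir -p ~/.ssh
chmod 700 ~/.ssh

# Append the public key (replace the placeholder with your real key)
echo 'ssh-rsa AAAA...placeholder... gilad@workstation' >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys
```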
<h2 id="ssh-to-the-jump-server">ssh to the Jump server</h2>
<p><code class="highlighter-rouge">ssh jump</code></p>
<p>You should get a prompt on the server:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>[gilad@jump ~]$
</code></pre></div></div>
<h2 id="ssh-onwards-to-desired-server">ssh onwards to desired server</h2>
<p>E.g.</p>
<p><code class="highlighter-rouge">ssh ubuntu@192.168.0.144</code></p>
<p>This will grant you access based on the <code class="highlighter-rouge">test-key</code>, which does <strong>not</strong> exist on the jump server.</p>IntroductionLearning new stuff with Docker, neo4j export/import2017-02-06T09:14:49+00:002017-02-06T09:14:49+00:00/devops/learn/tips/2017/02/06/Learning-new-stuff-with-Docker<h2 id="introduction">Introduction</h2>
<p>Being in a consulting position, I often find myself asked to perform tasks in technical areas I know close to nothing about.
For example, I was asked to set up a Neo4j server in Amazon, and figure out how to export and import data from a developer’s laptop running a local setup of the database.</p>
<p>I could have spent days learning how Neo4j works, but instead I read up a bit on the official site, https://neo4j.com, and proceeded to run <code class="highlighter-rouge">neo4j</code> <a href="https://neo4j.com/developer/docker">in a Docker container</a>.</p>
<p>If you are new to <code class="highlighter-rouge">docker</code>, take the time to get familiar with <a href="https://blog.docker.com/2016/05/docker-101-getting-to-know-docker/">Docker images and containers</a>.</p>
<p>In this post I will detail the following steps:</p>
<ul>
<li>Preparing to run <code class="highlighter-rouge">neo4j</code> docker containers</li>
<li>Running a clean <code class="highlighter-rouge">neo4j</code> docker container</li>
<li>Setting up a demo database in <code class="highlighter-rouge">neo4j</code></li>
<li>Exporting the demo database</li>
<li>Running a second clean <code class="highlighter-rouge">neo4j</code> docker container</li>
<li>Importing the dumped database</li>
</ul>
<h2 id="running-the-neo4j-docker">Running the <code class="highlighter-rouge">neo4j</code> docker</h2>
<h3 id="install-docker-on-ubuntu">Install Docker on Ubuntu</h3>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>sudo apt-get -y install docker-engine
sudo usermod -aG docker $USER
sudo reboot
</code></pre></div></div>
<h3 id="prepare-your-file-system-for-two-docker-containers">Prepare your file system for two docker containers</h3>
<p>Prepare data and logs directories for both docker containers</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>mkdir -p ~/neo4j/data ~/neo4j/logs
mkdir -p ~/neo4j/data2 ~/neo4j/logs2
</code></pre></div></div>
<h3 id="docker-pull-neo4j">Docker pull <code class="highlighter-rouge">neo4j</code></h3>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>docker pull neo4j
</code></pre></div></div>
<h3 id="make-sure-all-docker-containers-are-removed">Make sure all docker containers are removed</h3>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>docker rm `docker ps -a -q`
</code></pre></div></div>
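<p>One caveat with the backtick form: when there are no containers at all, <code class="highlighter-rouge">docker rm</code> is called with no arguments and prints an error. GNU <code class="highlighter-rouge">xargs -r</code> avoids this by skipping the command entirely when its input is empty. A small demonstration of the behavior, with the docker variant shown as a comment (assumes docker is installed):</p>

```shell
# -r (--no-run-if-empty) makes xargs do nothing when stdin is empty,
# so no command is run and no error is printed:
printf '' | xargs -r echo "this never prints"

# The same pattern applied to container cleanup:
# docker ps -a -q | xargs -r docker rm
```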
<h3 id="run-vanialla-neo4j-docker">Run vanilla <code class="highlighter-rouge">neo4j</code> docker</h3>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>docker run \
--publish=7474:7474 --publish=7687:7687 \
--volume=$HOME/neo4j/data:/data \
--volume=$HOME/neo4j/logs:/logs \
neo4j
</code></pre></div></div>
<p>You should get:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>Starting Neo4j.
2017-02-06 07:59:34.533+0000 INFO No SSL certificate found, generating a self-signed certificate..
2017-02-06 07:59:35.686+0000 INFO Starting...
2017-02-06 07:59:36.736+0000 INFO Bolt enabled on 0.0.0.0:7687.
2017-02-06 07:59:41.491+0000 INFO Started.
2017-02-06 07:59:43.586+0000 INFO Remote interface available at http://localhost:7474/
</code></pre></div></div>
<p>Open <a href="http://localhost:7474/">http://localhost:7474</a> in your browser</p>
<p>(Ignore the temporary red error that might appear; you should be redirected to http://localhost:7474/browser)</p>
<p><img src="/assets/neo4j-initial-login.png" alt="neo4j-initial-login" /></p>
<!---
{:class="img-responsive"})
-->
<p>Log in with the default password <code class="highlighter-rouge">neo4j</code> and set a new password</p>
<h3 id="creating-a-demo-database">Creating a demo database</h3>
<p>At this stage there is no data on the server. In order to create something, we will use the demo Movie/Actor database.</p>
<p>In the Editor type: <code class="highlighter-rouge">:play movie graph</code> like so:</p>
<p><img src="/assets/neo4j-editor-play-movie.png" alt="neo4j-editor-play-movie" /></p>
<p>And press Enter</p>
<p>Press the <strong>Next</strong> arrow on the right</p>
<p>Follow the instructions:</p>
<ol>
<li>Click on the code block</li>
<li>Notice it gets copied to the editor above</li>
<li>Click the editor’s play button to execute</li>
<li>Wait for the query to finish</li>
</ol>
<p>The database is created and presented</p>
<p><img src="/assets/neo4j-demo-db-created.png" alt="neo4j-demo-db-created" /></p>
<h3 id="export-the-demo-database">Export the demo database</h3>
<p>In order to export the database, you will need to stop the <code class="highlighter-rouge">neo4j</code> server, but stopping the <code class="highlighter-rouge">neo4j</code> server in a running docker container will essentially stop the container completely.</p>
<p>So we will do the following:</p>
<ol>
<li>Stop the docker container - in the terminal from which you ran the container press <code class="highlighter-rouge">CTRL c</code></li>
<li>Run a container with the same data, but this time run <code class="highlighter-rouge">/bin/bash</code> on it - allowing for interactive access to the server:</li>
</ol>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>docker run \
--publish=7474:7474 --publish=7687:7687 \
--volume=$HOME/neo4j/data:/data \
--volume=$HOME/neo4j/logs:/logs \
-i -t neo4j /bin/bash
</code></pre></div></div>
<p>You should get a <code class="highlighter-rouge">bash</code> prompt:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>bash-4.3#
</code></pre></div></div>
<p>In it type:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>./bin/neo4j-admin dump --database=graph.db --to=/data/2017-02-05.dump
</code></pre></div></div>
<p>Exit the docker</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>bash-4.3# exit
</code></pre></div></div>
<p>The docker container will stop</p>
<h2 id="run-a-second-clean-neo4j-docker-container">Run a second clean <code class="highlighter-rouge">neo4j</code> docker container</h2>
<p>Let’s make sure there are no containers, running or stopped</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>docker rm -f `docker ps -a -q`
</code></pre></div></div>
<p>And let’s double-check that we actually load a clean <code class="highlighter-rouge">neo4j</code> server</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>sudo rm -rf ~/neo4j/data2/*
</code></pre></div></div>
<p>Now, run the second container</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>docker run \
--publish=7474:7474 --publish=7687:7687 \
--volume=$HOME/neo4j/data2:/data \
--volume=$HOME/neo4j/logs2:/logs \
neo4j
</code></pre></div></div>
<p>Again - you should get</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>Starting Neo4j.
2017-02-06 10:30:18.970+0000 INFO No SSL certificate found, generating a self-signed certificate..
2017-02-06 10:30:20.100+0000 INFO Starting...
2017-02-06 10:30:20.917+0000 INFO Bolt enabled on 0.0.0.0:7687.
2017-02-06 10:30:24.738+0000 INFO Started.
2017-02-06 10:30:26.448+0000 INFO Remote interface available at http://localhost:7474/
</code></pre></div></div>
<p>After logging in, you will note there is no data defined</p>
<p><img src="/assets/neo4j-no-data.png" alt="neo4j-no-data" /></p>
<p>Stop the docker container - in the terminal from which you ran the container press <code class="highlighter-rouge">CTRL c</code></p>
<p>On your PC, copy the <code class="highlighter-rouge">dump</code> file to the “volume” of the new docker container:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>cp ~/neo4j/data/2017-02-05.dump ~/neo4j/data2/
</code></pre></div></div>
<p>Now run a container with a <code class="highlighter-rouge">bash</code> prompt:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>docker run \
--publish=7474:7474 --publish=7687:7687 \
--volume=$HOME/neo4j/data2:/data \
--volume=$HOME/neo4j/logs2:/logs \
-i -t neo4j /bin/bash
</code></pre></div></div>
<p>In the container prompt:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>bash-4.3# bin/neo4j-admin load --from=/data/2017-02-05.dump --database=graph.db --force
</code></pre></div></div>
<p>Exit the docker</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>bash-4.3# exit
</code></pre></div></div>
<p>And start a container with <code class="highlighter-rouge">neo4j</code> running</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>docker run \
--publish=7474:7474 --publish=7687:7687 \
--volume=$HOME/neo4j/data2:/data \
--volume=$HOME/neo4j/logs2:/logs \
neo4j
</code></pre></div></div>
<p>Go to <a href="http://localhost:7474/browser/">http://localhost:7474/browser/</a></p>
<p>And note that there is a database in place:</p>
<p><img src="/assets/neo4j-with-data.png" alt="neo4j-with-data" /></p>
<p>Click the star (*) to the right of <code class="highlighter-rouge">Movie</code> &amp; <code class="highlighter-rouge">Person</code></p>
<p><img src="/assets/neo4j-click-star.png" alt="neo4j-click-star" /></p>
<p>The graphic representation of the database should appear</p>
<p><img src="/assets/neo4j-graph-database.png" alt="neo4j-graph-database" /></p>
<h1 id="summary">Summary</h1>
<p>Learning something new can be a lot of hard work. With the help of <code class="highlighter-rouge">docker</code>, setting up a service is pretty quick, and in this case the specific task of learning how to export/import is now understood without the need to learn a whole lot of installation and configuration material.</p>Introduction