My Adventures in Coding

April 27, 2018

Redis – Setup a Test Cluster on Ubuntu

Filed under: Linux,Redis,Ubuntu — Brian @ 4:49 pm

I have been working with Redis lately and in my local development environment I was just testing against a single node setup of Redis. However, this did not give me the functionality I wanted to test against since our production environment uses a Redis cluster of six nodes.

Connecting to a Redis cluster requires a different configuration from a single Redis instance and is also functionally different. For example, a standard six node cluster of Redis is actually three pairs of Master-Replica where each pair stores a third of the data. So for those reasons I really wanted to test my application setup against a Redis cluster in my local development environment, which was Ubuntu.
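To make the "each pair stores a third of the data" point concrete: per the Redis Cluster specification, every key is assigned to one of 16384 hash slots by computing CRC16(key) mod 16384, and each master owns a range of slots. The sketch below shows how a key maps to a slot (the class name is my own, and I have omitted Redis's hash-tag handling, where only the substring between { and } is hashed):

```java
import java.nio.charset.StandardCharsets;

public class RedisSlotSketch {

    // CRC16-CCITT (XMODEM variant): polynomial 0x1021, initial value 0x0000.
    // This is the CRC variant the Redis Cluster specification defines.
    static int crc16(byte[] bytes) {
        int crc = 0;
        for (byte b : bytes) {
            crc ^= (b & 0xFF) << 8;
            for (int i = 0; i < 8; i++) {
                crc = ((crc & 0x8000) != 0) ? ((crc << 1) ^ 0x1021) : (crc << 1);
                crc &= 0xFFFF; // keep the register at 16 bits
            }
        }
        return crc;
    }

    // HASH_SLOT = CRC16(key) mod 16384
    static int slot(String key) {
        return crc16(key.getBytes(StandardCharsets.UTF_8)) % 16384;
    }

    public static void main(String[] args) {
        for (String key : new String[] {"user:1", "user:2", "user:3"}) {
            System.out.println(key + " -> slot " + slot(key));
        }
    }
}
```

In a three-master cluster like the one described above, the slot ranges 0-5460, 5461-10922, and 10923-16383 each belong to one master, which is why each master-replica pair holds roughly a third of the data.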

At the time of this post I was using Ubuntu 16.04 and Redis 4.0.9.

So here is how to spin up a six node test cluster of Redis on a single Ubuntu instance.

Download and Build Redis

wget http://download.redis.io/releases/redis-4.0.9.tar.gz
tar -zxvf redis-4.0.9.tar.gz
cd redis-4.0.9
make

Create Six Redis Nodes

First, edit the create-cluster startup file:

cd utils/create-cluster
vi create-cluster

In my setup, I changed the following:

  • Port – set to 7000 (This will be the starting port)
  • Timeout – set to 10000 (I found a longer timeout handy in my dev environment)
  • Nodes – set to 6 (We use a 6 node cluster in production, so I wanted the same)
  • Replicas – set to 1 (This means each master will have one replica, so 6 nodes = 3 masters and 3 replicas)


Now we can start all six nodes:

./create-cluster start

You will see the nodes have been started.

If you do an “ls” in the folder you will notice that some files have been created:

  • Each node has a log file (For example: 7001.log)
  • Each node has a cluster configuration file (For example: nodes-7001.conf)


You can also confirm the nodes are running by connecting to a node using the “redis-cli” command line tool by specifying the port of a node:

cd redis-4.0.9/src
./redis-cli -p 7001


Join the Nodes as a Cluster

Even though at this stage we have six Redis nodes, they know nothing about each other. So the next step is to join these nodes as a cluster.

Prerequisites: Ruby and Redis Gem

The script that connects the nodes as a cluster is written in Ruby and requires the Redis Gem, which requires Ruby 2.2.2 or higher.

Update Ruby

sudo apt-get install ruby-full

Install Redis Gem

sudo gem install redis

To create a cluster that you only need to connect to from the local machine, you can use:

cd redis-4.0.9/src
./redis-trib.rb create --replicas 1 127.0.0.1:7001 127.0.0.1:7002 127.0.0.1:7003 127.0.0.1:7004 127.0.0.1:7005 127.0.0.1:7006

NOTE: If you plan to connect to your Redis cluster remotely, you will need to use the public IP of the machine instead of 127.0.0.1.

When you run the join cluster script, it will show you the generated configuration and then ask you to confirm the setup.

After you answer “yes” and press enter, the nodes will be joined in a cluster. If the cluster creation was successful, the script will finish with a message confirming that all 16384 slots are covered.

That is all! You now have a functional Redis Cluster that you can connect to from your application!

For more information on Redis Clusters I recommend reading the Redis Cluster Tutorial.


April 1, 2018

Java – Using SQLServerBulkCopy in Java with an InputStream

Filed under: Java,SQL Server — Brian @ 9:59 pm

Up until recently, the only constructor available in SQLServerBulkCSVFileRecord (the class used to feed CSV data to SQLServerBulkCopy) took in a file path. This was awkward: if you wanted to bulk copy data you already had in memory, you either had to write it to a file only to read it back, or extend the class and write a constructor that could take in an InputStream. Luckily, in version 6.3 (I think), Microsoft finally added a new constructor with support for InputStream.

The following is a simple example in Java showing how to use SQLServerBulkCopy to write to a table in SQL Server.

Create a table in your SQL Server database:

CREATE TABLE [dbo].[SqlBulkInsertExample](
	[Id] [int] NOT NULL,
	[Name] [varchar](100) NOT NULL 
) ON [PRIMARY]
GO

Create a Java project. If using Maven, you can add the following dependency to your pom.xml file:

<dependency>
    <groupId>com.microsoft.sqlserver</groupId>
    <artifactId>mssql-jdbc</artifactId>
    <version>6.4.0.jre8</version>
</dependency>

Here is a simple Java example application showing how to take some data, convert it to a CSV, and save it to the database using bulk insert:

import com.microsoft.sqlserver.jdbc.SQLServerBulkCSVFileRecord;
import com.microsoft.sqlserver.jdbc.SQLServerBulkCopy;
import com.microsoft.sqlserver.jdbc.SQLServerBulkCopyOptions;

import java.io.ByteArrayInputStream;
import java.io.InputStream;
import java.nio.charset.StandardCharsets;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Types;
import java.util.HashMap;
import java.util.Map;

public class SqlBulkCopyExample {

  private static final String JDBC_URL = "jdbc:sqlserver://localhost;DatabaseName=Playground;InstanceName=SQL2016";
  private static final String USERNAME = "client";
  private static final String PASSWORD = "client";
  private static final String TABLE_NAME = "dbo.SqlBulkInsertExample";

  public static void main(String[] args) {
    try {
      // Create some data to insert into our database table
      Map<Integer, String> data = new HashMap<>();
      data.put(1, "John Smith");
      data.put(2, "Steve Smith");
      data.put(3, "Molly Smith");

      // We are going to build a CSV document to use for the bulk insert
      StringBuilder stringBuilder = new StringBuilder();

      // Add table column names to CSV
      stringBuilder.append("id, name\n");

      // Copy data from map and append to CSV
      for (Map.Entry<Integer, String> entry : data.entrySet()) {
        stringBuilder.append(
                String.format("%s,%s\n", entry.getKey(), entry.getValue()));
      }

      byte[] bytes = stringBuilder.toString().getBytes(StandardCharsets.UTF_8);
      try (InputStream inputStream = new ByteArrayInputStream(bytes)) {

        // Pass in input stream and set column information
        SQLServerBulkCSVFileRecord fileRecord = new SQLServerBulkCSVFileRecord(
                inputStream, StandardCharsets.UTF_8.name(), ",", true);

        fileRecord.addColumnMetadata(1, "id", Types.INTEGER, 0, 0);
        fileRecord.addColumnMetadata(2, "name", Types.VARCHAR, 0, 0);

        try (Connection connection = DriverManager.getConnection(
                JDBC_URL, USERNAME, PASSWORD)) {

          // Set bulk insert options, for example here I am setting a batch size
          SQLServerBulkCopyOptions copyOptions = new SQLServerBulkCopyOptions();
          copyOptions.setBatchSize(10000);

          // Write the CSV document to the database table
          try (SQLServerBulkCopy bulkCopy = new SQLServerBulkCopy(connection)) {
            bulkCopy.setBulkCopyOptions(copyOptions);
            bulkCopy.setDestinationTableName(TABLE_NAME);
            bulkCopy.writeToServer(fileRecord);
          }
        }
      }
    } catch (Exception e) {
      e.printStackTrace();
    }
  }
}

Now if you log in to your database and run the following select:

SELECT * FROM dbo.SqlBulkInsertExample

You should see that you have data!

I hope that helps!

January 27, 2018

Cassandra – Getting Started with Java

Filed under: Cassandra — Brian @ 9:15 pm

Cassandra is a great tool for storing time series data and I happen to be using it on my current project for that exact purpose.

There are several ways to use Cassandra from Java and many ways to improve performance, but here I just want to provide a simple “Getting Started” example. So here it is!

First, download the current version of Cassandra V3 from the Apache Cassandra downloads page.

Extract the tar.gz file:

 tar -zxvf apache-cassandra-3.11.1-bin.tar.gz
 

Change directory into the bin folder:

 cd apache-cassandra-3.11.1/bin
 

Start Cassandra:

 ./cassandra -f
 

Create a Java project. If using Maven, you can add the following dependency to your pom.xml file:

<dependency>
    <groupId>com.datastax.cassandra</groupId>
    <artifactId>cassandra-driver-core</artifactId>
    <version>3.3.0</version>
</dependency>

Here is a simple Java example showing how to connect to Cassandra, create a keyspace, create a table, insert a row, and select a row:

import com.datastax.driver.core.*;

import java.time.Instant;
import java.time.ZoneId;
import java.util.Date;
import java.util.UUID;

public class CassandraV3Tutorial {

    private final static String KEYSPACE_NAME = "example_keyspace";
    private final static String REPLICATION_STRATEGY = "SimpleStrategy";
    private final static int REPLICATION_FACTOR = 1;
    private final static String TABLE_NAME = "example_table";

    public static void main(String[] args) {

        // Setup a cluster to your local instance of Cassandra
        Cluster cluster = Cluster.builder()
                .addContactPoint("localhost")
                .withPort(9042)
                .build();

        // Create a session to communicate with Cassandra
        Session session = cluster.connect();

        // Create a new Keyspace (database) in Cassandra
        String createKeyspace = String.format(
                "CREATE KEYSPACE IF NOT EXISTS %s WITH replication = " +
                        "{'class':'%s','replication_factor':%s};",
                KEYSPACE_NAME,
                REPLICATION_STRATEGY,
                REPLICATION_FACTOR
        );
        session.execute(createKeyspace);

        // Create a new table in our Keyspace
        String createTable = String.format(
                "CREATE TABLE IF NOT EXISTS %s.%s " + "" +
                        "(id uuid, timestamp timestamp, value double, " +
                        "PRIMARY KEY (id, timestamp)) " +
                        "WITH CLUSTERING ORDER BY (timestamp ASC);",
                KEYSPACE_NAME,
                TABLE_NAME
        );
        session.execute(createTable);

        // Create an insert statement to add a new item to our table
        PreparedStatement insertPrepared = session.prepare(String.format(
                "INSERT INTO %s.%s (id, timestamp, value) values (?, ?, ?)",
                KEYSPACE_NAME,
                TABLE_NAME
        ));

        // Some example data to insert
        UUID id = UUID.fromString("1e4d26ed-922a-4bd2-85cb-6357b202eda8");
        Date timestamp = Date.from(Instant.parse("2018-01-01T01:01:01.000Z"));
        double value = 123.45;

        // Bind the data to the insert statement and execute it
        BoundStatement insertBound = insertPrepared.bind(id, timestamp, value);
        session.execute(insertBound);

        // Create a select statement to retrieve the item we just inserted
        PreparedStatement selectPrepared = session.prepare(String.format(
                "SELECT id, timestamp, value FROM %s.%s WHERE id = ?",
                KEYSPACE_NAME,
                TABLE_NAME));

        // Bind the id to the select statement and execute it
        BoundStatement selectBound = selectPrepared.bind(id);
        ResultSet resultSet = session.execute(selectBound);

        // Print the retrieved data
        resultSet.forEach(row -> System.out.println(
                String.format("Id: %s, Timestamp: %s, Value: %s",
                row.getUUID("id"),
                row.getTimestamp("timestamp").toInstant().atZone(ZoneId.of("UTC")),
                row.getDouble("value"))));

        // Close session and disconnect from cluster
        session.close();
        cluster.close();
    }
}

If you would like to look at the data in your local Cassandra database, you can use the CQLSH command line tool.

So from the bin folder type:

 ./cqlsh

This will take you to a “cqlsh>” prompt.

To view all available Keyspaces:

 DESCRIBE KEYSPACES;

You will now see our “example_keyspace” database:

 system_schema system system_traces
 system_auth system_distributed example_keyspace

To switch to that Keyspace:

 USE example_keyspace;

To show all tables in the keyspace:

 DESCRIBE TABLES;

You will be shown all tables which includes “example_table”.

Now from the command line you can view the data in the table by using a select statement:

 select * from example_table;

Which will show the following information:

 id                                   | timestamp                       | value
--------------------------------------+---------------------------------+--------
 1e4d26ed-922a-4bd2-85cb-6357b202eda8 | 2018-01-01 01:01:01.000000+0000 | 123.45

I hope that helps!

Note: The documentation on the DataStax website is very good.

November 21, 2016

VirtualBox – Resize a CentOS Virtual Drive

Filed under: CentOS,Linux,VirtualBox — Brian @ 9:51 pm

Recently I created a CentOS VM using VirtualBox for some development work I was doing, but foolishly I set the virtual disk to the default size of 8 gigs, figuring it would be easy to resize it later.

Well, after piecing together bits of information from several places I was finally able to get my virtual disk resized as well as my CentOS partition that was on it. Here are the steps that worked for me!

Expand VirtualBox Drive

The first thing you need to do is expand your virtual drive, which I did using the VBoxManage command line tool provided by VirtualBox.

Note: Ensure the VirtualBox install folder is on your path (e.g., C:\Program Files\Oracle\VirtualBox).

Resize the virtual disk with the VBoxManage command, specifying the desired size in megabytes (e.g., here I am resizing the drive to 20 gigs):

VBoxManage modifyhd MyCentosDisk.vdi --resize 20000

Resize Partition with GParted

Download the gparted live cd.

Attach the gparted live cd as a drive to your virtual machine and make it the “Primary Master”.

Note: Ensure the checkbox “Live CD” is selected when you add the CD drive.

Now start your VM.

The gparted UI will load. Now resize the partition by expanding it to use the available free space on the virtual disk.

Shutdown your VM, remove the gparted live cd drive and set your virtual disk to once again be the “Primary Master” drive.

However, after you restart your VM you will notice that CentOS is still not aware of the additional space:

df -h

Expand CentOS Partition

Now you will need to expand your Linux partition to use the free space on the drive.

First, check the sizes of the current logical volumes and get the name of the one to expand:

vgdisplay -v

On my system, the name of the logical volume to expand was “/dev/centos/root”.

Extend the volume with the available free space:

lvextend -l +100%FREE /dev/centos/root

Now you are ready to perform the online resize:

xfs_growfs /dev/centos/root

NOTE: resize2fs works on the ext filesystems used by many other Linux distributions, but CentOS 7 uses XFS by default, so you must use xfs_growfs.

I hope that helps!

April 13, 2016

SQL Server – Get Sizes of All Tables and Indexes in a Database

Filed under: SQL,SQL Server — Brian @ 9:51 pm

Even though there are great tools in SQL Server Management Studio that give you a wealth of information about all of your tables and indexes, I still find it handy to have a script I can use to quickly check the current state of all tables and indexes in a database.

Size of each Table (Including Indexes)

This query gives you a total size for each table in KB, including all of the indexes on that table. SQL Server allocates storage in 8 KB pages, so multiplying the page counts by 8 gives sizes in KB. The query shows the table name, row count, total space reserved, and space actually used by the table and its indexes.

SELECT
    t.[Name] AS TableName,
    p.[rows] AS [RowCount],
    SUM(a.total_pages) * 8 AS TotalSpaceKB,
    SUM(a.used_pages) * 8 AS UsedSpaceKB
FROM sys.tables t
INNER JOIN sys.indexes i ON t.OBJECT_ID = i.object_id
INNER JOIN sys.partitions p ON i.object_id = p.OBJECT_ID AND i.index_id = p.index_id
INNER JOIN sys.allocation_units a ON p.partition_id = a.container_id
WHERE t.is_ms_shipped = 0 AND i.OBJECT_ID > 255
GROUP BY t.[Name], p.[Rows]
ORDER BY t.[Name]

Size of each Index

This query shows the total size in KB of each index in the database. The query shows the name of each index, which table the index is on, and the total size of the index.

SELECT 
    i.[name] AS IndexName,
    t.[name] AS TableName,
    SUM(s.[used_page_count]) * 8 AS IndexSizeKB
FROM sys.dm_db_partition_stats AS s
INNER JOIN sys.indexes AS i ON s.[object_id] = i.[object_id] 
	AND s.[index_id] = i.[index_id]
INNER JOIN sys.tables t ON t.OBJECT_ID = i.object_id
GROUP BY i.[name], t.[name]
ORDER BY i.[name], t.[name]

I hope you find these queries useful!

April 3, 2016

Jetty – Setup Jetty on CentOS 7

Filed under: Jetty,Linux — Brian @ 9:50 pm

I recently set up Jetty on a CentOS 7 Linux server. Previously we had been running Jetty in a Windows environment, but now we are moving our servers over to Linux. Here are my setup notes; I hope they help!

Install JDK

Note: We are using the Oracle JDK (not OpenJDK). Check for the latest version of JDK 1.8.

Download and install the JDK:

wget --no-cookies --no-check-certificate --header "Cookie: oraclelicense=accept-securebackup-cookie" "http://download.oracle.com/otn-pub/java/jdk/8u111-b14/jdk-8u111-linux-x64.rpm" -O /opt/jdk-8-linux-x64.rpm

yum install /opt/jdk-8-linux-x64.rpm

Set JAVA_HOME and add it to your PATH:

Create a file in /etc/profile.d to store java settings:

sudo touch /etc/profile.d/java.sh

vi /etc/profile.d/java.sh

Add the lines:

export JAVA_HOME=/usr/java/latest
export PATH=$JAVA_HOME/bin:$PATH

Install Jetty Web Server

Note: Check for the latest version of Jetty.

Download and install Jetty:

wget http://central.maven.org/maven2/org/eclipse/jetty/jetty-distribution/9.3.14.v20161028/jetty-distribution-9.3.14.v20161028.tar.gz

tar -zxvf jetty-distribution-9.3.14.v20161028.tar.gz

mv jetty-distribution-9.3.14.v20161028 /opt/jetty-9.3.14

ln -s /opt/jetty-9.3.14 /opt/jetty

Note: creating a soft link for your /opt/jetty install will be very useful later when you want to upgrade your version of Jetty. All you will need to do to upgrade is just download the tar.gz of the newest version, unpack it, and point the soft link to the new folder.

Create a temp folder for Jetty to unpack war files:

mkdir /opt/temp

Create webappbase folder where your apps will run:

mkdir /opt/webappbase

Move start.ini and webapps into the webappbase folder:

mv /opt/jetty/webapps /opt/webappbase/webapps

mv /opt/jetty/start.ini /opt/webappbase

Create a “jetty” user that Jetty will run under:

useradd -m jetty 

Update files and folders to be owned by “jetty” user:

chown -RH jetty:jetty /opt/jetty
  
chown -R jetty:jetty /opt/webappbase
  
chown -R jetty:jetty /opt/temp

Configure Jetty Web Server

Setup Jetty as a service:

ln -s /opt/jetty/bin/jetty.sh /etc/init.d/jetty

Add the new Jetty service to be managed by chkconfig:

chkconfig --add jetty

Set Jetty service to run on startup for the following run levels:

chkconfig --level 345 jetty on

Create a jetty settings file:

sudo touch /etc/default/jetty

You will need to set TMPDIR, JETTY_BASE, and JETTY_HOME. Also, any JAVA_OPTIONS you need to set can be set in the settings file for your Jetty server.

Add the Jetty settings by editing the default settings file:

vi /etc/default/jetty

TMPDIR=/opt/temp
 
JETTY_BASE=/opt/webappbase
 
JETTY_HOME=/opt/jetty

JETTY_USER=jetty
 
export JAVA_OPTIONS="-Xms128m -Xmx2048m -server -XX:+UseConcMarkSweepGC -XX:+CMSClassUnloadingEnabled"

If you want to change the default port number, you can do so by editing the start.ini file:

vi /opt/webappbase/start.ini

# Replace 
# jetty.http.port=8080
# With 
jetty.http.port=8081

Start Jetty Server

Switch to the “root” user, then start the Jetty service:

sudo su
service jetty start

Verify the service has started successfully:

service jetty status

Test that the server is running:

curl localhost:8081

Deploy a War file

To deploy an application to Jetty, all you need to do is copy a WAR file to the folder:

/opt/webappbase/webapps

That’s all!

January 5, 2016

SQL Server – How to write an Upsert using MERGE

Filed under: SQL,SQL Server — Brian @ 8:35 pm

Normally, when you want an application to INSERT a row into a table if it does not exist or UPDATE it if it does, your application must first do a SELECT to check whether the row exists, then issue either an UPDATE or an INSERT: the standard SELECT-UPDATE-INSERT pattern. The downside to this pattern is that it requires two database round trips instead of one.

Now, over the years I have worked with document stores such as MongoDB and really enjoyed the freedom to be able to make a single call to the database and be able to ADD/REPLACE a document, without having to check if it exists.

Fortunately, in SQL Server 2008 the MERGE statement was introduced.

MERGE allows you to make only a single database round trip when you want to INSERT a row if it does not exist, or UPDATE it if it does. The following is a simple example showing how to use the MERGE statement.

Quick Start

So if you just want a quick example to get you started then here you go. In this example the table “ClientData” is the one being updated.

  • MERGE – specifies the table we will be inserting a row into or updating
  • USING – defines the condition we will be using to check if the row exists or not
  • WHEN MATCHED THEN – SQL statement to run when the row exists
  • WHEN NOT MATCHED – SQL statement to run when the row does not exist

MERGE dbo.ClientData AS [Target]
USING (SELECT 12345 AS clientId, 'Some' AS data) AS [Source] ON [Target].clientId = [Source].clientId 
WHEN MATCHED THEN UPDATE SET [Target].data = [Source].data, [Target].updatedDateUtc = GetUtcDate() 
WHEN NOT MATCHED THEN INSERT (clientId, data) VALUES ([Source].clientId, [Source].data);

How it Works

First, let's create a table to use for our test of the MERGE statement:

CREATE TABLE dbo.ClientData(
	ClientId [bigint] NOT NULL,
	Data [varchar](20) NOT NULL,
	UpdatedDateUtc [datetime] NOT NULL DEFAULT (getutcdate()),
 CONSTRAINT [PK_ClientData_ClientId] PRIMARY KEY CLUSTERED (
	ClientId ASC
)) ON [PRIMARY]
GO

You can verify the table has been created and see that it is empty:

SELECT * FROM dbo.ClientData

Now, run the following MERGE statement for the first time, when no matching row exists in the table:

MERGE dbo.ClientData AS [Target]
USING (SELECT 12345 AS clientId) AS [Source] 
ON [Target].clientId = [Source].clientId
WHEN MATCHED THEN  UPDATE SET [Target].data = 'Update', [Target].updatedDateUtc = GetUtcDate()
WHEN NOT MATCHED THEN  INSERT (clientId, data) VALUES ([Source].clientId, 'Insert');

As you can see, the INSERT statement was executed:

SELECT * FROM dbo.ClientData

Now let’s run the exact same merge statement a second time and see what happens:

MERGE dbo.ClientData AS [Target]
USING (SELECT 12345 AS clientId) AS [Source] 
ON [Target].clientId = [Source].clientId
WHEN MATCHED THEN  UPDATE SET [Target].data = 'Update', [Target].updatedDateUtc = GetUtcDate()
WHEN NOT MATCHED THEN  INSERT (clientId, data) VALUES ([Source].clientId, 'Insert');

Now you can see that the UPDATE statement was executed, since the “data” field has been updated to the text “Update”:

select * from dbo.ClientData

If you are curious about the performance difference between MERGE and SELECT-INSERT-UPDATE here is a performance comparison.

January 2, 2016

Java – Simple GZIP Utility to Compress and Decompress a String

Filed under: Java — Brian @ 10:15 pm

I wanted a simple utility class in our app so we could easily compress a String to a byte[] when our REST API received a GET request with the header “Accept-Encoding: gzip”, and decompress a byte[] to a String when we received a PUT with the header “Content-Encoding: gzip”. So I wrote a simple utility class that GZIPs a String to a byte[] and unzips a GZIP byte[] back to a String.

So here is a simple GzipUtil class:

import java.io.*;
import java.nio.charset.StandardCharsets;
import java.util.zip.GZIPInputStream;
import java.util.zip.GZIPOutputStream;

public class GzipUtil {

    public static byte[] zip(final String str) {
        if ((str == null) || (str.length() == 0)) {
            throw new IllegalArgumentException("Cannot zip null or empty string");
        }

        try (ByteArrayOutputStream byteArrayOutputStream = new ByteArrayOutputStream()) {
            try (GZIPOutputStream gzipOutputStream = new GZIPOutputStream(byteArrayOutputStream)) {
                gzipOutputStream.write(str.getBytes(StandardCharsets.UTF_8));
            }
            return byteArrayOutputStream.toByteArray();
        } catch(IOException e) {
            throw new RuntimeException("Failed to zip content", e);
        }
    }

    public static String unzip(final byte[] compressed) {
        if ((compressed == null) || (compressed.length == 0)) {
            throw new IllegalArgumentException("Cannot unzip null or empty bytes");
        }
        if (!isZipped(compressed)) {
            return new String(compressed, StandardCharsets.UTF_8);
        }

        try (ByteArrayInputStream byteArrayInputStream = new ByteArrayInputStream(compressed)) {
            try (GZIPInputStream gzipInputStream = new GZIPInputStream(byteArrayInputStream)) {
                try (InputStreamReader inputStreamReader = new InputStreamReader(gzipInputStream, StandardCharsets.UTF_8)) {
                    // Read characters in chunks rather than line by line, so that
                    // newline characters in the content are preserved
                    StringBuilder output = new StringBuilder();
                    char[] buffer = new char[4096];
                    int charsRead;
                    while ((charsRead = inputStreamReader.read(buffer)) != -1) {
                        output.append(buffer, 0, charsRead);
                    }
                    return output.toString();
                }
            }
        } catch(IOException e) {
            throw new RuntimeException("Failed to unzip content", e);
        }
    }

    public static boolean isZipped(final byte[] compressed) {
        // Check the first two bytes for the GZIP magic number
        return (compressed.length >= 2)
                && (compressed[0] == (byte) (GZIPInputStream.GZIP_MAGIC))
                && (compressed[1] == (byte) (GZIPInputStream.GZIP_MAGIC >> 8));
    }
}

Also here are a set of JUnit tests for this utility:

import org.junit.Test;
import static org.junit.Assert.*;

public class GzipUtilTest {

    @Test(expected = IllegalArgumentException.class)
    public void zip_shouldThrowIllegalArgumentException_whenStringToCompressIsNull() {
        GzipUtil.zip(null);
    }

    @Test(expected = IllegalArgumentException.class)
    public void zip_shouldThrowIllegalArgumentException_whenStringToCompressIsEmpty() {
        GzipUtil.zip("");
    }

    @Test
    public void zip_shouldGzipString_whenStringIsNotEmpty() {
        String xml = "<Hello>World</Hello>";

        byte[] actual = GzipUtil.zip(xml);

        assertTrue(GzipUtil.isZipped(actual));
    }

    @Test(expected = IllegalArgumentException.class)
    public void unzip_shouldThrowIllegalArgumentException_whenByteArrayToDecompressIsNull() {
        GzipUtil.unzip(null);
    }

    @Test(expected = IllegalArgumentException.class)
    public void unzip_shouldThrowIllegalArgumentException_whenByteArrayToDecompressIsEmpty() {
        GzipUtil.unzip(new byte[0]);
    }

    @Test
    public void unzip_shouldReturnInputByteArrayAsString_whenByteArrayContentIsNotGzipped() {
        String xml = "<Hello>World</Hello>";
        byte[] bytes = xml.getBytes();

        String actual = GzipUtil.unzip(bytes);

        assertEquals(xml, actual);
    }

    @Test
    public void unzip_shouldDecompressByteArrayGzippedContent() throws Exception {
        String xml = "<Hello>World</Hello>";
        byte[] compressed = GzipUtil.zip(xml);

        String actual = GzipUtil.unzip(compressed);

        assertEquals(xml, actual);
    }

    @Test
    public void isZipped_shouldReturnFalse_whenContentIsNotGzipped() {
        byte[] bytes = new byte[] {1,2,3};

        boolean actual = GzipUtil.isZipped(bytes);

        assertFalse(actual);
    }

    @Test
    public void isZipped_shouldReturnTrue_whenContentIsGzipped() {
        byte[] bytes = GzipUtil.zip("1,2,3");

        boolean actual = GzipUtil.isZipped(bytes);

        assertTrue(actual);
    }
}

I hope you find this useful!

August 14, 2015

Java – Automate database schema updates with Flyway

Filed under: Java — Brian @ 1:44 pm

I am currently working on a Java 8 project, a REST API deployed as a WAR file to Jetty. Our deploy process is very simple: the deploy pipeline just copies the WAR file into the Jetty directory in each environment, verifies the app is up and running with the correct version, and runs some integration tests.

We wanted to apply database migration scripts automatically in each environment (Dev, Test, QA, Staging, Prod) as part of the deploy, so we would no longer have to worry about applying scripts manually. In the past, for Java, Scala, and .NET projects, I have used several different tools, but for this project we decided to use Flyway, which is very flexible and simple to set up.

The documentation for Flyway is excellent, however I decided to just post what we did in our app in case it might help someone else out. Here is our “Quick Start” setup.

1. Add the Flyway dependency

Flyway can be set up using Maven, Gradle, SBT, Ant, etc. In our project we used Maven, so all we did was add the following to our pom.xml file:

<dependency>
    <groupId>org.flywaydb</groupId>
    <artifactId>flyway-core</artifactId>
    <version>3.2.1</version>
</dependency>

2. Code

Now, to get Flyway to check for any database changes to apply whenever the application is started, put the following code somewhere in your application's startup sequence.

Flyway flyway = new Flyway();
//Create the dbo.schema_version table if it does not already exist
flyway.setBaselineOnMigrate(true);
//Where the method "getDataSource()" provides your DataSource 
//object that has the jdbc url, username, and password.
flyway.setDataSource(getDataSource()); 
flyway.migrate();

3. SQL Script Location

All migrations scripts by default must go in the following folder in your app:

/resources/db/migration

4. SQL Script Naming convention

Scripts are run in version number order based on their names. The default naming convention is:

V<version>__<description>.sql

For example: “V1_0_1__create_tables.sql”

All scripts must start with the letter “V”, followed by major/minor version numbers, with two underscores “__” separating the version from the description.
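As a quick sanity check for this convention, here is a small sketch. The class and regex are my own simplified approximation, not Flyway's actual implementation (Flyway's real rules are more permissive; for example, dots are also allowed as version separators and the prefix and suffix are configurable):

```java
import java.util.regex.Pattern;

public class MigrationNameSketch {

    // Simplified form of the default convention described above:
    // "V", digits separated by single underscores, "__", a description, ".sql"
    private static final Pattern NAME_PATTERN =
            Pattern.compile("^V\\d+(_\\d+)*__\\w+\\.sql$");

    static boolean isValidMigrationName(String fileName) {
        return NAME_PATTERN.matcher(fileName).matches();
    }

    public static void main(String[] args) {
        System.out.println(isValidMigrationName("V1_0_1__create_tables.sql")); // valid
        System.out.println(isValidMigrationName("V1_0_1_create_tables.sql"));  // only one underscore before description
    }
}
```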

5. Schema Version Table

Flyway will automatically create a table in each database called “dbo.schema_version” which stores a log of all migration scripts that have been applied.

The table looks like:

version_rank | installed_rank | version | description     | type     | script                    | checksum   | installed_by | installed_on            | execution_time | success
1            | 1              | 1       | Flyway Baseline | BASELINE | Flyway Baseline           | NULL       | JohnSmith    | 2015-07-30 15:55:49.337 | 0              | 1
1            | 2              | 1.0.1   | create tables   | SQL      | V1_0_1__create_tables.sql | -440358290 | JohnSmith    | 2015-07-30 15:55:49.337 | 109            | 1

6. Handling Failure

If a script fails, the app will fail to start; the failure information is written to our log files and a monitor is tripped. Flyway applies each script as a transaction, so all changes in the script will be rolled back if any part of it fails. This is very handy: if you commit a script with some invalid SQL syntax, all you have to do is update the script with the corrected syntax, commit it again, and let the build pipeline apply the changes from the fixed script. No messy cleanup or reset to worry about.

So that is it, you should have all you need to get your database schema changes easily synced up with the deploy of your app!

June 7, 2015

Console2 – Mac(ish) terminal on Windows using Console2 and UnixUtils

Filed under: Windows — Brian @ 6:52 pm

If you are like me and you are used to a terminal window on a Mac, using the Windows command prompt can be a bit frustrating. I like having basic functionality like tabs, shortcuts to switch tabs, copy and paste with Ctrl+c and Ctrl+v, and dragging the corner of a terminal to resize it, all of which I take for granted on my Mac. Fortunately, Console2 and UnixUtils can help (with a little customization).

Console2 is an open source project that provides a tabbed version of the Windows console. However, it is much more than that. Even though the default settings are less than desirable, Console2 is highly customizable, so you can easily change the defaults to get features such as “select to copy”, Ctrl+c to copy, Ctrl+v to paste, dragging the corner of the window to resize it, and shortcuts to open and switch tabs. Yep, you can make it function like a normal command prompt!

Download Console2

Download the zip file, extract it, and run Console2.exe.


So far it does not look like much of an improvement, but don’t worry, we have some customizing to do.

Go to Edit -> Settings

“Console” – set the Buffer size rows to something larger, such as 9999.

“Appearance” – customize the font, font size, and text colour. I like Consolas, 12 point font, and a nice green text colour to remind me of the default on my Mac.


“Appearance -> More” – hide the toolbar and status bar, ensure tabs are displayed, and set some cool transparency.


“Behaviour” – ensure “Copy on select” is checked. This will allow you to select text on the console window and have it automatically put on the clipboard (Woohoo!).


“Hotkeys” – set the “New Tab 1” hotkey to use Ctrl+T.


“Hotkeys” – set “Copy Selection” to Ctrl+c.

“Hotkeys” – set “Paste” to Ctrl+v.

“Hotkeys -> Mouse” – set “Copy/clear selection” to “None”.

“Hotkeys -> Mouse” – set “Select text” to “Left”.

“Hotkeys -> Mouse” – set “Paste” to “Right”.

“Hotkeys -> Mouse” – set “Context menu” to “None”.

“Tabs” – select “Console2” and set style to “XTerm”.

Now, here is where we get to the really cool part of Console2. Besides just having a Windows console tab, you can also create tabs for other console apps such as Cygwin, Git Bash, and PowerShell.

To show you how to set up a second type of command prompt, let's use PowerShell as an example.

“Tabs” – select “Add”.

Enter a name for the new Console window (“PowerShell”), set the path to the PowerShell executable, and add an icon.
— %SystemRoot%\syswow64\WindowsPowerShell\v1.0\powershell.exe


Now you will see both “Console2” and “PowerShell” in the list of types of available tabs.

Of course, install UnixUtils as well to make your life a little easier. To install it, all you have to do is download the zip file, unzip it somewhere such as “C:\UnxUtils”, and then add “C:\UnxUtils\bin” to your path. That is all; you will then have all of your favorite commands such as “ls” and “grep”.

Finally, a somewhat functional command prompt!


Enjoy!


Blog at WordPress.com.