Redshift JDBC Drivers in Groovy Scripts

Recently I needed to write a Groovy script that makes a JDBC connection to Amazon Redshift. I kept running into the following error:

java.lang.RuntimeException: Error grabbing Grapes -- [unresolved dependency:; not found]

My Grapes annotation looked like this:

@Grapes([@Grab(group='', module='redshift-jdbc42', version='')])

The root of the issue is that the repository for this driver is hosted at MuleSoft, not Maven Central, so Grape can't resolve it from the default repositories. To resolve it, simply add a @GrabResolver. The following Grapes annotations in my script fixed the issue:

@GrabResolver(name='mulesoft', root='')
@Grapes([@Grab(group='', module='redshift-jdbc42', version='')])
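For completeness, a minimal connection sketch using the resolved driver might look like the following. The group and version values are left blank here just as in the annotations above; the cluster endpoint, database, and credentials are placeholders, not real values. The @GrabConfig line is typically needed so the JDBC driver lands on the system classloader:

```groovy
@GrabResolver(name='mulesoft', root='')
@Grapes([@Grab(group='', module='redshift-jdbc42', version='')])
@GrabConfig(systemClassLoader=true)
import groovy.sql.Sql

// Placeholder endpoint, database, and credentials -- substitute your own.
def url = 'jdbc:redshift://example-cluster.abc123.us-east-1.redshift.amazonaws.com:5439/dev'
def sql = Sql.newInstance(url, 'awsuser', 'changeme', 'com.amazon.redshift.jdbc42.Driver')

// Simple sanity query to prove the driver loaded and the connection works.
sql.eachRow('select 1 as one') { row -> println row.one }
sql.close()
```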

Java 6 For OSX 10.15 Catalina

Those of us who work with legacy versions of Java are familiar with the dance of installing OS upgrades and then reinstalling Apple Java 6. This time around, though, the installer reports that a newer version is already installed and won't let you proceed with the install. Of course we can't hang with that, so here's a nice little bash script solution. It downloads the file from the Apple CDN, mounts it, modifies some booleans using sed, and then drops a package on your desktop that lets you install without the OS version check.

cd ~/Downloads
rm -f javaforosx.dmg
# Download the installer from Apple's CDN (the URL below is Apple's published
# link for "Java for macOS 2017-001"; verify it is still current before use)
curl -L -o javaforosx.dmg https://updates.cdn-apple.com/2019/cert/041-88384-20191011-3d8da658-dca4-4a5b-b67c-9c0f3f81b3dc/JavaForOSX.dmg
# Mount the image and expand the installer package so it can be edited
hdiutil mount ~/Downloads/javaforosx.dmg
pkgutil --expand "/Volumes/Java for macOS 2017-001/JavaForOSX.pkg" ~/tmp
hdiutil unmount "/Volumes/Java for macOS 2017-001"
# Flip the OS-version checks in the Distribution script from false to true
sed -i '' 's/return false/return true/g' ~/tmp/Distribution
# Repackage and drop the patched installer on the Desktop
pkgutil --flatten ~/tmp ~/Desktop/Java.pkg
rm -rf ~/tmp

Strange 301 Error AWS CLI S3 copy

Recently I was setting up a new region and had some automation in my config management that writes the region portion of the ~/.aws config on a Linux box. When I attempted to pull an asset from an S3 bucket, I got the following error:

A client error (301) occurred when calling the HeadObject operation: Moved Permanently

This is a really misleading error message. The actual problem is that the region in the AWS config doesn't match the region the bucket lives in. You can solve this a few different ways, but the easiest is either to pass the region in your aws s3 cp command or, if you only use buckets and assets from that same region, to update the region= line of the config.
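Both fixes can be sketched like this (the bucket name, object path, and region below are hypothetical examples, not values from the original post):

```shell
# Option 1: tell the CLI the bucket's region explicitly on each call
aws s3 cp s3://example-bucket/path/to/asset.tar.gz . --region us-west-2

# Option 2: point the default profile at the bucket's region in ~/.aws/config
#   [default]
#   region = us-west-2

# If you aren't sure which region a bucket lives in, ask S3 directly:
aws s3api get-bucket-location --bucket example-bucket
```

These commands require working AWS credentials, so treat this as a sketch of the two approaches rather than something to paste verbatim.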