Sunday 28 December 2014

Need to exclude referral spam from Analytics reports

It turns out that the unusual referral URLs in my blog's Google Analytics reports a few weeks ago were not a one-off anomaly by a sneaky spammer.  When I followed one of the referral URLs I was forwarded on to Amazon with some affiliate details - so the spammer would get a kickback for any purchases I made.

Today I see over a hundred visits from the Russian city Samara, along with large numbers of referrals from domains that have nothing in common with the content of my blog.

I'm contemplating moving the blog to another platform so that I can have greater control over blocking dodgy referrers.  For the time being I will have to settle for adding filters to the Google Analytics reports as these dodgy referrers appear - so far blocking a single city has been sufficient.

I would have hoped that the spammer would realise that my blog does not publish a list of backlinks to sites that have linked to my posts, so there is no value in "visiting" from his dodgy sites.

Wednesday 10 December 2014

Microservices vs Mechanical Sympathy

At what point does it make sense to split some functionality out to run in a separate process to be called upon over HTTP?

When the only driver is the single responsibility principle, I'm looking for a term that means the opposite of premature optimisation - premature over-complication has a nice ring to it.

Why don't I like this?  Let me enumerate the ways:
  • Additional network IO
  • Additional processes
    • requiring memory
    • occupying CPU cores
  • Placing additional load on networking infrastructure
  • A potential point of failure
In addition to those runtime overheads, we have some development and deployment considerations:
  • Another artefact to deploy
  • Another service to test
  • Another stream in the continuous delivery pipeline

Tuesday 2 December 2014

Mystical Secret to Speeding up Software Development - Stop Faking It

A few months ago we developed some functionality based on an unsupported non-production-ready service.

Unsurprisingly we are now in the process of re-writing the functionality to use a real implementation.

Hot tip for reducing the time to deliver a software project - don't build on something that isn't going to exist when your system goes live.

Coincidentally we have moved away from inserting fake data into our databases in development, QA and production environments.  Fake data doesn't belong in our live systems.

Our application doesn't need to insert data into the system, so our data access account no longer needs inappropriately broad permissions.

Wednesday 12 November 2014

Docker Hub Group permissions

As part of a story in a recent development iteration at work, I found myself guiding two development teams in the use of Docker Hub.

Strangely, my current project had already had an indirect dependency on Docker Hub for a few months, but today was the first time that we actually needed to consider the ins and outs of this particular system.

In this case the "in" side involves our continuous deployment system pushing an image, and the "out" side involves a separate system pulling an image from the same repository.

In the interests of ensuring that each project team does not run the risk of accidentally blowing away anything or everything that the other project teams have set up, I took it upon myself to learn the ins and outs of group permissions in Docker Hub.

Now, instead of just having one big group of Owner users associated with our organisation, we have two additional groups for my current project - one that contains users with the right to push images, and one with the right to read (pull) images.


Monday 10 November 2014

Strange day in the office

My plan to wait for other teams to sort out the approach to deploying applications in Docker didn't work out quite the way I had hoped.

This morning I guided some colleagues to get our application set up to run from a Docker image, without worrying about the details of deploying anywhere - expecting that another development team would have that sorted out from their project first.

Sure enough, come mid-afternoon we'd established that the other development team didn't need that functionality yet so we'd have to figure it out for ourselves.

Okay, we have a tools engineering team that has already provided something Dockerised automagically through Docker Hub, so they'll be able to show us how to do it in no time.

It turned out that the approach used so far involved Docker Hub pulling the Docker image content directly from a Git repository.  This didn't quite suit our needs, as our application needs to be built from source first.

The ugly option involved setting up a dedicated Git repository for our binary artifact, which Docker Hub could pull across.

Back to the drawing board.

An hour or so later I overheard my colleagues mentioning something about authenticating to Docker Hub from Gradle.  It turns out that the plugin they've used for creating a Docker image can also be applied to push to a Docker repository.
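
For anyone following along at home, my understanding is that the plugin automates roughly what these Docker commands would do by hand (image name and tag made up for illustration):

> docker build -t ourorg/our-app:1.0 .
> docker login
> docker push ourorg/our-app:1.0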

Don't worry, there's more fun ahead to get this set up in the continuous deployment pipeline.

Wednesday 22 October 2014

Degrees of freedom in software

One of my current projects involves developing a new application to act as a backend setup service.

For licensing reasons this particular application cannot be deployed into our cloud environment - so I've been evaluating what options are open from various perspectives.

  • Operating System
    • Fixed to the same as other services running from the same machine.
  • Language level
    • Possibly tied to the same as existing applications - unless we can tweak chef provisioning to support multiple versions (which may be simple).
  • Application server
    • Hopefully completely flexible, which should allow us to use the embedded approach.
  • Dependency injection framework
    • Hopefully completely flexible, although the main library dependencies are very Spring oriented.
    • Slight preference for UtterlyIdle as that matches the other components.
    • UPDATE: Due to the tight coupling of a third party library, a selection of Spring framework dependencies are bundled in the application and will be required at runtime.
  • Build system
    • Relatively flexible, but easiest with something that plays nicely with dependency management - leaning towards Gradle rather than Maven.
  • Continuous Integration and deployment
    • Need to decide between Go and Jenkins.
    • Likely to involve scp and shell scripts.
  • Monitoring and logging
    • Although this environment will have easier file system access than our cloud setup we should log to the logging service.
    • We could expose JMX profiling for monitoring of memory usage etc. on our internal monitoring infrastructure.


Wednesday 10 September 2014

.london domain name

After living in London for almost six years, I'm now on my very own .london domain name.

Tuesday 26 August 2014

Running Docker on Mac

Tonight's mission - get an application from the day job up and running on one of my Macs at home...

So, Docker is designed to run on top of a Linux kernel - which Mac OS X obviously doesn't offer.

A quick Google leads to:
http://docs.docker.com/installation/mac/

which directs me to Boot2Docker:
https://github.com/boot2docker/osx-installer/releases

Open the downloaded package and double-click to install the VM image.

Binaries are now on the path and ready to run from a command line terminal:

> boot2docker init
> boot2docker start

Then the top secret bit for pulling down the Dockerised application...

Tweak the configuration properties to use the newly created Docker host instead of the hand-coded Linux environment's Docker configuration (host IP address).
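
For reference, a sketch of that tweak (the IP address shown is boot2docker's default at the time of writing - check with boot2docker ip, and note that newer versions may use TLS on a different port):

> boot2docker ip
> export DOCKER_HOST=tcp://192.168.59.103:2375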

Run the application's tests.

Let out a satisfied "Aaah" or "Mmm" (optional).

This all took about 20 minutes, but most of the time was spent waiting on downloads.

Tuesday 22 July 2014

Generating an application jar using Gradle 2

In an earlier post I mentioned that my latest work project encountered some surprises when trying to bundle an application and all of its dependencies into one big jar.

Last week a colleague tried using a different Gradle plugin to generate the jar more quickly, as that had been a noticeable bottleneck in the build process.  He encountered more problems and reverted to fatjar.

This week I stumbled across the shadowJar Gradle plugin.  From the description and sample configuration it looked like a near drop-in replacement for the fatjar plugin.

Some colleagues agreed that it looked quite promising and tried it out.  It worked fine on one of our applications, but resulted in runtime failures on the other one so they abandoned this build optimisation activity.

After a little compare and contrast I found that the size of the files in META-INF/services differed.  The jar generated with the fatjar plugin had some larger files than the jar generated with the shadowJar plugin.

Tracing back through the project dependencies indicated that the missing content was from the Jersey client artifacts.  It soon became apparent that the duplicated file names across the two jars weren't a case of one being a substitutable implementation of the other, so I looked into merging them.

Sure enough, by default the fatjar plugin merges the contents of service files when generating the jar, but shadowJar does not.

A single line configuration update resulted in shadowJar producing a working jar - with a complete definition of the services.
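
For reference, here is my understanding of that update as a minimal build.gradle sketch - the Shadow plugin has an option for merging service files:

        shadowJar {
            // Merge META-INF/services entries from all jars instead of
            // letting the first file with a given name win.
            mergeServiceFiles()
        }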

Tuesday 15 July 2014

Why Optional in Java 8 is not Serializable

So far in my exploration of Java 8 I have encountered a couple of "why'd they do it like that?" discussions.

One particularly contentious new class is java.util.Optional.

The reasoning behind not implementing Serializable boils down to discouraging developers from misusing the concept.  It's not intended to be stored as a field value, but rather to act as a temporary representation when returning a potentially null value from a method.
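
To illustrate the intended usage, here's a minimal sketch of my understanding (class and names are made up):

        import java.util.HashMap;
        import java.util.Map;
        import java.util.Optional;

        public class UserDirectory {

            private final Map<String, String> namesByEmail = new HashMap<>();

            // Intended use: a return type for a lookup that may have no result.
            public Optional<String> nameFor(String email) {
                return Optional.ofNullable(namesByEmail.get(email));
            }

            public static void main(String[] args) {
                UserDirectory directory = new UserDirectory();
                // The caller handles absence explicitly - no null checks, and
                // no Optional stored as a field anywhere.
                String name = directory.nameFor("someone@example.com")
                        .orElse("anonymous");
                System.out.println(name);
            }
        }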

See the relevant JDK 8 Developers mailing list discussion for the range of perspectives.

I suspect my current team and I may have already misused Optional - time for some refactoring...

Unexciting update: I managed to find 0 offensive usages of Optional in the current project's codebase.


Thursday 3 July 2014

If the code looks weird - it probably is

I have a good habit of examining code changes when I synchronise my codebase with the latest changes from version control.

Yesterday I noticed a one-liner which included an unintuitive chaining of calls.  I asked my colleague about it, we briefly jumped around the code, saw a passing functional test which supposedly indicated that all was well, and moved on.

A couple of hours later I decided to try out the application through a web browser and observed that the application would fail on any request.

I went back to the questionable code and realised that the class that contained it was not covered by any unit tests.

Half an hour or so later the component in question had its own unit tests and an additional half dozen lines of code to make it perform its intended purpose.

As with many things in life, with the benefit of hindsight I realise that I should have paid more attention when my spidey sense told me our use of the API didn't look right.

Sunday 22 June 2014

Validation handling for Post-Redirect-Get

There are plenty of descriptions of the Post-Redirect-Get pattern online, but I am a bit surprised and disappointed to see that most descriptions do not bother to cover a validation path.

With the obvious exception of search forms, most of the forms that I encounter online or develop in my day job involve some input validation, so I don't consider this a moot point.

As a toy application we can think of a web application which prompts the user to enter their date of birth and then displays back some information about that date - it could be the person's current age, or a mash-up of other people born on the same day and important events for that date; use your imagination.

Let's consider the application as consisting of two views:

  • input form to accept the day, month and year
  • display interesting information about the date specified through the form
In this setup the Post-Redirect-Get pattern would work as follows:
  • Web browser HTTP GET request results in the display of an HTML page with the input form view.  The input form specifies its method as POST and its action as the appropriate form processing action on the server.
  • A form submission results in an HTTP POST request being sent to the server including any form values that have been populated.
  • In the Happy Days scenario a valid date has been specified, the server side action performs any necessary calculation or lookup and redirects the browser to a GET resource which will duly present the second view.  To avoid having to consider passing any state with the redirect this example application might embed the date into the resource URL - /doDateStuff/{yyyy}/{mm}/{dd} might work as a suitable resource URL pattern.

Now, what about the situation where an invalid date has been entered?  For example, 29 February 2015 is an invalid date because 2015 is not a leap year.

My preferred approach to this is to allow the POST request to provide a response with the input form view along with the validation error message(s).  In this path there is no redirect to a GET resource.  If the browser refreshes then the form submission will be repeated and the validation will fail again and the input form will show the error(s) afresh.
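
As a minimal sketch of this preferred approach - plain Servlet API with made-up view names, rather than whatever framework a real application would use:

        import java.io.IOException;
        import java.time.DateTimeException;
        import java.time.LocalDate;

        import javax.servlet.ServletException;
        import javax.servlet.http.HttpServlet;
        import javax.servlet.http.HttpServletRequest;
        import javax.servlet.http.HttpServletResponse;

        public class DateOfBirthServlet extends HttpServlet {

            @Override
            protected void doPost(HttpServletRequest request,
                    HttpServletResponse response)
                    throws ServletException, IOException {
                try {
                    LocalDate date = LocalDate.of(
                            Integer.parseInt(request.getParameter("year")),
                            Integer.parseInt(request.getParameter("month")),
                            Integer.parseInt(request.getParameter("day")));
                    // Happy Days: redirect to the GET resource for the date.
                    response.sendRedirect(String.format("/doDateStuff/%04d/%02d/%02d",
                            date.getYear(), date.getMonthValue(), date.getDayOfMonth()));
                } catch (NumberFormatException | DateTimeException e) {
                    // Validation failure: no redirect - respond directly with
                    // the input form view plus the validation error message(s).
                    request.setAttribute("error", "Please enter a valid date.");
                    request.getRequestDispatcher("/inputForm.jsp")
                            .forward(request, response);
                }
            }
        }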

An alternative approach, which I have never used in my 14 or so years of professional web development, treats the Post-Redirect-Get pattern as applying even when validation fails.  In my opinion this approach involves additional complexity for carrying the validation state across to the request that the browser will send when it receives the redirect response.  The use of a short-lived cookie seems to be the usual hack for this.

Taking the non Happy Days scenario further, what happens if the user refreshes the invalid form?
  • In my preferred approach the browser will probably detect that a POST request is going to be repeated and present a dialogue asking if the user really wants to do that.  If the user accepts the warning and continues with the repeat submission then the form post is repeated, the validation is applied by the server and the same input form view response with the same validation error messages will show.
  • With the short-lived cookie approach the GET request will go to the server and the input form view will be rendered without the old error messages and without the values showing in the form  - because the short-lived cookie has been disposed of.


Tuesday 17 June 2014

Quick fixes - the road to pain

After adding some new dependencies to our application we discovered that the Cloud Foundry deploy would no longer work.  The error related to duplicate files being found in the jar file's manifest folder.

A pair of developers did some investigation, changed some config and "fixed" the issue - great.

Later in the day another pair decided to try actually making the application execute some of the new code's functionality in the cloud - just some relatively trivial calls to a RESTful service.  The unit tests had all passed in development and the continuous build pipeline, so surely there wouldn't be any problems?


BANG!
java.lang.NullPointerException
    at javax.ws.rs.core.MediaType.valueOf

For our application this was a runtime exception being thrown from a dependency of a dependency.

To cut a long story short, if your application needs to be deployed as one uberjar, don't blindly exclude configuration files from the resultant META-INF directory structure.  Some systems expect and require configuration to be in place.
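
To illustrate why those files matter, here's a minimal sketch of the META-INF/services mechanism (interface name made up) - the same lookup style that the javax.ws.rs runtime relies on:

        import java.util.ServiceLoader;

        // Implementations are listed, one class name per line, in a file named
        // META-INF/services/com.example.Codec inside the jar.
        public interface Codec {
            String name();
        }

        // If the uberjar build drops or truncates the services file, this loop
        // silently finds nothing - and code that requires an implementation
        // (like MediaType.valueOf above) fails at runtime.
        for (Codec codec : ServiceLoader.load(Codec.class)) {
            System.out.println("Found codec: " + codec.name());
        }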

For my team I expect this to act as some motivation to set up some automated smoke tests that will interact with the web application inside the Cloud Foundry environment.

Tuesday 10 June 2014

Java 8 Resources

This page mainly exists as a set of bookmarks for myself.

  • Java Magazine March/April 2014
  • Oracle Java tutorial
  • Online book
  • Third party blog posts
  • Videos

Monday 9 June 2014

Deploying a Play application into Cloud Foundry

A week or so ago some colleagues gave a brief introduction to the local Cloud Foundry environment.

One of the main takeaways for me was that our application would need to be able to bind to a TCP port specified at runtime rather than have a statically defined one from a config file.

We spent some time looking into how to tell Play to listen on a provided port, and what might be involved in setting up a Cloud Foundry manifest file - then decided to just try deploying with something that we expected to fail.

We were pleasantly surprised to discover that the Java buildpack included enough logic to detect our application as being a Play application and looked after the port binding for us.
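
For reference, my understanding is that the manual equivalent - which the buildpack looked after for us - is to pass the port as a system property when launching the packaged application (script name made up):

> ./bin/our-app -Dhttp.port=$PORT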

It wasn't a completely smooth process, as we did have to update the JDK version in the buildpack - which involved forking the Cloud Foundry GitHub repository.  Three lines of changes were enough to get it up and running.


Wednesday 28 May 2014

Google driven design - support for client side includes


During a recent project inception I was surprised to see just how many user journeys were expected to start from a Google search - probably more than half.

With that in mind, I have found myself starting to question any approach that involves a super fast initial page load followed up by Javascript calls to populate the rest of the page - "what will Google see?"

According to this old blog post Google's crawler will happily recognise an AJAX request and call on it to pull through the content, and presumably index it as part of the page.

I love it when a plan comes together (even when it's not my plan).

Evaluating Play Framework 2.3 with Java 8

RTFM
As with many software libraries and frameworks, I have found that the best way to get the most out of Play is to put some time into reading the documentation.

For example, we wanted to evaluate how test friendly Play is, so we tried to figure out a mechanism for dependency injection of services into our controllers.

The sample code wasn't very helpful for this, as the controllers were implemented with static methods for each action, but thankfully the online documentation offered a section on dependency injection which showed how to control the way controller instances are obtained.
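
A minimal sketch of that hook as I understand it - assuming Guice as the container, though any framework with a class-to-instance lookup would do; the corresponding routes entries need an @ prefix on the controller method so that Play asks Global for an instance:

        import com.google.inject.Guice;
        import com.google.inject.Injector;

        import play.GlobalSettings;

        public class Global extends GlobalSettings {

            private final Injector injector = Guice.createInjector();

            @Override
            public <A> A getControllerInstance(Class<A> controllerClass) throws Exception {
                // Play calls this instead of expecting static controller methods.
                return injector.getInstance(controllerClass);
            }
        }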

Scala bias
One aspect that was less than ideal is the slight Scala bias.  For example, the documentation that I found for dependency injection was titled ScalaDependencyInjection and only had sample code implemented in Scala.  Fortunately a lot of our team's existing codebase is already in Scala, so this hasn't proven to be a barrier.

Java 8
The main Java 8 specific feature that I have made use of so far is lambdas, for calls that map from a WSResponse to something suitable for the controller to include in an HTTP response.

Thankfully the Play documentation includes sample code for Java both pre- and post-version 8, so we can appreciate what the verbose version would have looked like while still in the process of getting our heads into lambdas and functional composition.
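
As a sketch of the kind of call I mean (Play 2.3 Java API, with a made-up URL):

        import play.libs.F.Promise;
        import play.libs.ws.WS;
        import play.mvc.Controller;
        import play.mvc.Result;

        public class ProxyController extends Controller {

            // A Java 8 lambda maps the WSResponse into an HTTP response;
            // pre-Java 8 this would have been an anonymous F.Function class.
            public static Promise<Result> fetch() {
                return WS.url("http://example.org/api/thing")
                        .get()
                        .map(response -> ok(response.getBody()));
            }
        }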

Tuesday 27 May 2014

A Java Refresher - some pain points and annoyances

With the recent release of Java 8 my new project team has chosen to take a look into Java as a viable alternative to the existing default language choice of Scala.

(In the interests of full disclosure we did originally opt for Scala, with a few caveats and a recommended reading list for the features to avoid or use sparingly).

I had already started reading Functional Programming in Java 8, but I think now is a good opportunity to look back over the features of the language and consider where there are particular strengths and weaknesses.

This post will only focus on Core Java - not Java Enterprise Edition, Swing or any frameworks.
  • Labeled break / continue
    • These expressions are the closest you can get to having a goto in Java (see the sketch after this list)
  • Checked Exceptions
    • I haven't quite convinced myself about this one yet, but I have a particular dislike for having to wrap a call with a try / catch block where the catch block only contains a comment like // can never happen
    • All too often exceptions are used to control flow for scenarios that aren't truly exceptional
  • Reflection
    • Generally prevents the compiler from having static access to the relationships between classes.
    • On the flip side, static analysis tools can be expected to make use of reflection.
  • Local classes
    • Did you know that you can declare a class inside any block?  Not just anonymous classes, but even fully fledged named classes.
  • instanceof
    • Whenever I find myself introducing an instanceof check in my code I feel like there must be a better way.
  • Restrictions on Generics

Runtime gotchas
  • Memory allocation
    • OutOfMemoryError - particularly PermGen space, and particularly when reflection or dynamic class loading is involved - before Java 8 (which replaced PermGen with Metaspace).
  • Garbage collection
    • Stop the world - collecting large accumulated garbage while blocking other progress.

Sorry for the lack of substance, but sometimes this blog exists as a checklist of reminders for myself.  I haven't Googled a problem and reached my own blog yet, but I can imagine it will only be a matter of time.

Sunday 18 May 2014

One team's success is another team's "challenge"

I've deleted most of the body of this post, as it was too detailed and might have come across as a rant.

Here is the essence of what I want to express, feel free to wave your hands around in the gesticulation style of your choosing...

Team A shouldn't talk to Team B about their problems handling an accumulated issue with Team C's kit.  (I'll categorise my position as being in Team D in this scenario).

The keywords from the longer version include: evolving technologies, neglected maintenance, versioning of software, Chinese whispers, rumours, this is why we can't have nice things.

Friday 9 May 2014

Specialisation and interdependence leads to a higher standard of living

The title of this particular post doesn't tie as closely to the content as you might normally expect.  So, no SEO bonus points for me today - tsk tsk, shame on me!

The background to the title is a little memory game which my fourth form Economics teacher used to try to trick our class into committing to memory something which each student expressed as a takeaway from the year.  I tried to come up with a longwinded statement that my classmates might forget when their turn came to repeat back what each previous person had said.  The class was a bit too lenient, so by the time someone missed out half of my statement nobody cared to flag it up as a mistake.

Anyway, with that 23-year-old useless information out of the way, let's get on with the interesting stuff.

Working in a large company where separate IT teams are becoming dependent on each other's services, there are varying opinions about the who, when and how of implementing new features.

The current proposed approach for one of the services that I am associated with involves allowing each team to have access to the service's source code and apply the changes that it needs.

On the surface this seems to make some sense, as the team making the code changes will have the most knowledge about the needs of the current project.

With the benefit of hindsight, some of the senior developers are coming to the realisation that a dedicated product team with specialist knowledge of the complexity behind the service might be better suited to develop enhancements.  (Actually, we thought this from the start, but anyway....)

In my opinion, the technologies and esoteric configuration details involved for this particular service mean that there is too much overhead involved for it to be efficient for each project team to develop for their own required enhancements.  The follow on overhead is that there has to be some team responsible for maintaining any and all features - so they have to learn all about that anyway.

Without going into too much detail, here is a high level list of the context switches required for my project's team members to work on enhancements to the service that we want to consume from:

  • programming language
  • specialised JavaScript libraries
  • build system
  • artifact repository
  • continuous integration system
  • virtualisation configuration
When I have some free time, I might take a look into what others have been posting about their "no projects" philosophy, as I expect that may have some crossover.

Tuesday 29 April 2014

Automation considered harmful

Intro
I've had a particularly unproductive couple of days in the office so far this week.  Two quite separate projects which my current project depends on have had enough of an impact to prevent me from testing code changes, or checking in merged content.

We are still investigating one of the problems, so I will just focus on the problem that has been fully identified and resolved, saving the continuous integration issue for another post.

Background
I have a development machine which sits under my desk and acts as a local content management system server.  I use this to try out functionality and present demonstrations without fear of scheduled or unscheduled downtime.

To ensure that this system doesn't fall out of sync with new functionality that is being developed by another team, I periodically call on the automation magic of chef to obtain the latest binaries and configuration.

A week or so ago the chef update failed partway through.  This wasn't a major problem as the application would still run; however, I was not in a position to identify the cause of the problem or how to fix it.

The problem
This week I decided to try the chef update again, as a colleague had agreed to address the earlier problem.  This seemed like a good idea at the time, but resulted in the chef update failing - and leaving the content management applications unresponsive.

"Here we go again," I thought - except now it was a higher priority issue for me, as this week's development needed this all to be running.

Thankfully the remote team responsible for managing the chef configuration had some availability to look into this issue.  Unfortunately the individual involved didn't seem to have enough context to appreciate my non-virtualised setup, so it was time for me to take another dig around the chef-managed setup.

Ultimately it came down to the usual diagnostic approach of checking the log files for the various applications involved.  One of the content management server processes failed to initialise with a duplicate column error.

Like many modern extensible software products, this particular content management system automatically manages the structure of an underlying relational database.  When new properties need to be represented, a developer can specify that in a configuration file - which will ultimately trigger an alteration to a database table.

Tracing back through git commits showed up what was special about this duplicate column - it was actually a special case of renaming the column with a different - non-camel - casing.

Summary
I wouldn't criticise any of the technological approaches involved:

  • It made sense for the content management system to flag up the unusual database structure change
  • It made sense to use chef for managing updates to the binaries and configuration

It just feels quite strange that I have started off simply being a consumer of a web application or web service with my own local installation, but ended up having to delve into multiple relational databases to rename some columns.

To tie back to the cheesy "considered harmful" title, using tools such as chef without understanding what they are doing, or without having access to examine their effect, can result in problems and delays.  This ties back to a concept that I keep coming across: sometimes you don't know what you don't know.

Tuesday 22 April 2014

Sample Java code for obtaining an authorisation token for your Twitter application

Here is an example of how to obtain an authorisation token from Twitter, using your registered application's API key and API secret.

The relevant REST API documentation can be found at: https://dev.twitter.com/docs/api/1.1/post/oauth2/token

For the HTTP client I have made use of the ever-popular Apache HTTP client (v4).

The response should include a JSON document containing a property keyed by access_token.

There are a few potential gotchas involved with including this in a multithreaded application, so I'd recommend that you read Twitter's documentation carefully.


        import java.nio.charset.StandardCharsets;
        import java.util.Base64;
        import org.apache.http.Header;
        import org.apache.http.HttpEntity;
        import org.apache.http.HttpHost;
        import org.apache.http.HttpResponse;
        import org.apache.http.client.HttpClient;
        import org.apache.http.client.methods.HttpPost;
        import org.apache.http.impl.client.DefaultHttpClient;
        import org.apache.http.message.BasicHeader;
        import org.apache.http.util.EntityUtils;

        // Combine the key and secret, then Base64 encode them for HTTP Basic
        // authentication (java.util.Base64 requires Java 8).
        final String basicAuthentication = TwitterCredentials.API_KEY +
                ":" + TwitterCredentials.API_SECRET;
        String base64EncodedAuthentication = Base64.getEncoder()
                .encodeToString(basicAuthentication.getBytes(StandardCharsets.UTF_8));

        HttpClient client = new DefaultHttpClient();

        HttpHost httpHost = new HttpHost("api.twitter.com", 443, "https");

        // Request an application-only bearer token.
        HttpPost httpRequest = new HttpPost(
                "https://api.twitter.com/oauth2/token?grant_type=client_credentials");

        Header authenticationHeader = new BasicHeader("Authorization",
                "Basic " + base64EncodedAuthentication);
        httpRequest.addHeader(authenticationHeader);
        httpRequest.addHeader("Content-Type", "application/json; charset=utf-8");

        HttpResponse httpResponse = client.execute(httpHost, httpRequest);

        // The response body should be the JSON document containing the
        // access_token property mentioned above.
        HttpEntity entity = httpResponse.getEntity();
        String responseBody = EntityUtils.toString(entity);

Twitter referrals in Google Analytics

Twitter has its own short URL for tweets, which will show up in Google Analytics reports as a referrer starting with http://t.co

To trace back to the tweet we can use Twitter's search with the url as the search term.

I recently tweeted a link to a blog post, and sure enough the next day's report included a referrer from a URL under http://t.co

The search result can be seen from:

https://twitter.com/search?q=http%3A%2F%2Ft.co%2FcqWPRVyJ9l

Chances are that this will stop working by the time you get around to reading this, as the search time range is rather limited (a few days?).

Monday 14 April 2014

Getting up and running with Google Analytics Core Reporting API 3.0

Update: Google have recently improved their documentation, so it should be straightforward to get set up:  Core Reporting API - Developer Guide


Back in late 2013 I struggled to get anything to work with Google's Java client code for version 3 of their analytics API.

The sample code and documentation seemed to be somewhat lacking compared to what was then available for version 2.

A week or so ago I decided that it might be worthwhile to have another crack at making something work.  After much trial and error I now have some code which will successfully send requests to and receive responses from Google's Analytics service.

Before you get too excited, I'll just add the caveat that this particular setup involves the application triggering a browser to open so that a Google account associated with the Analytics account can be prompted to authorise the access to the data.

If this code isn't useful to you, then so be it - but I may find myself Googling for this information in the future, so here goes.

Dependencies
- at least one includes the dreaded alpha in the version, so expect it to require updating in the future:
  • com.google.apis:google-api-services-analytics:v3-rev83-1.17.0-rc
  • com.google.api-client:google-api-client-jackson2:1.17.0-rc
  • com.google.api.client:google-api-client-auth-oauth2:1.2.3-alpha
  • com.google.oauth-client:google-oauth-client-java7:1.16.0-rc
  • com.google.oauth-client:google-oauth-client-jetty:1.17.0-rc
Sample code for authentication
Specify your own CLIENT_ID, CLIENT_SECRET and APPLICATION_NAME values.

        import java.io.IOException;
        import java.util.Collections;
        import com.google.api.client.auth.oauth2.Credential;
        import com.google.api.client.extensions.java6.auth.oauth2.AuthorizationCodeInstalledApp;
        import com.google.api.client.extensions.jetty.auth.oauth2.LocalServerReceiver;
        import com.google.api.client.googleapis.auth.oauth2.GoogleAuthorizationCodeFlow;
        import com.google.api.client.googleapis.javanet.GoogleNetHttpTransport;
        import com.google.api.client.http.HttpRequest;
        import com.google.api.client.http.HttpRequestInitializer;
        import com.google.api.client.http.HttpTransport;
        import com.google.api.client.json.JsonFactory;
        import com.google.api.client.json.jackson2.JacksonFactory;
        import com.google.api.client.util.store.DataStoreFactory;
        import com.google.api.client.util.store.MemoryDataStoreFactory;
        import com.google.api.services.analytics.Analytics;
        import com.google.api.services.analytics.AnalyticsScopes;

        HttpTransport httpTransport = GoogleNetHttpTransport.newTrustedTransport();
        JsonFactory jsonFactory = JacksonFactory.getDefaultInstance();

        DataStoreFactory dataStoreFactory = new MemoryDataStoreFactory();

        GoogleAuthorizationCodeFlow.Builder builder = 
                new GoogleAuthorizationCodeFlow.Builder(httpTransport, jsonFactory, 
                        CLIENT_ID,
                        CLIENT_SECRET,
                        Collections.singleton(AnalyticsScopes.ANALYTICS_READONLY));

        GoogleAuthorizationCodeFlow flow = builder.setDataStoreFactory(dataStoreFactory).build();

        // Opens a browser so that the Google account holder can authorise
        // read-only access to the Analytics data.
        final Credential credential = new AuthorizationCodeInstalledApp(flow, 
                new LocalServerReceiver()).authorize("user");

        Analytics analytics = new Analytics.Builder(httpTransport, jsonFactory, credential)
                .setApplicationName(APPLICATION_NAME).setHttpRequestInitializer(
                        new HttpRequestInitializer() {
                    @Override
                    public void initialize(HttpRequest httpRequest) throws IOException {
                        credential.initialize(httpRequest);
                        httpRequest.setConnectTimeout(30000);
                        httpRequest.setReadTimeout(60000);
                    }
                }).build();

Sample code for request on API
Specify your own profileId value.

        Analytics.Data data = analytics.data();

        Analytics.Data.Ga ga = data.ga();

        // External search
        Analytics.Data.Ga.Get get = ga.get("ga:" + profileId,
                "2013-01-01",              // Start date.
                "2013-12-31",              // End date.
                "ga:pageviews")  // Metrics.
                .setDimensions("ga:keyword").setSort("-ga:pageviews");

        GaData results = get.execute();

Do what you like with the results object - probably some iterating :)
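
For example, a minimal iteration over the rows - each row holds the dimension and metric values as strings:

        // Note: getRows() returns null when the query matched no data.
        for (java.util.List<String> row : results.getRows()) {
            System.out.println(row.get(0) + " : " + row.get(1));
        }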


What next
Because my intended use of the API will ultimately involve some server side application, I will need to come up with a different approach to authentication.

Let me know if you found this post useful.

Game logic

During a job interview a few years ago I was asked how I would go about implementing a simple game of noughts and crosses.

I was fine with defining the objects, but when it came to the logic for determining when someone had won I found myself overthinking the situation - considering pre-loading the system with won states and using some kind of map lookup, or navigating the game state with some kind of tree of neighbouring cells based on the last move and following a backtracking algorithm.

In the case of noughts and crosses there are only 8 possible winning combinations per player, with each combination involving checking the state of 3 positions on the playing grid.

A brute force approach would be to check every possibility until finding a win or running out of possibilities.

A more elegant solution would only consider the combinations that involve the most recently played move. This would introduce a requirement of being able to determine which winning states are related to the new move's grid position.
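
A minimal sketch of that last-move check, assuming a 3x3 char-array board representation:

        public class NoughtsAndCrosses {

            // Checks only the lines passing through the most recent move.
            static boolean winningMove(char[][] board, int row, int col) {
                char player = board[row][col];
                boolean rowWin = board[row][0] == player
                        && board[row][1] == player && board[row][2] == player;
                boolean columnWin = board[0][col] == player
                        && board[1][col] == player && board[2][col] == player;
                boolean diagonalWin = row == col && board[0][0] == player
                        && board[1][1] == player && board[2][2] == player;
                boolean antiDiagonalWin = row + col == 2 && board[0][2] == player
                        && board[1][1] == player && board[2][0] == player;
                return rowWin || columnWin || diagonalWin || antiDiagonalWin;
            }
        }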


* Footnote: The interview was so long ago that I don't recall the company - or the interview.  I found this post awaiting publication so have updated the wording accordingly.

Playing with Activator

I've had a bit of time to myself at work recently, so have taken the opportunity to download and try out the latest available technology stack from my department's preferred provider as a test client for some services that I've been developing.

So far I have found Typesafe Activator to be a surprisingly usable rapid prototyping system.

There are a few typos in the documentation, and the occasional minor background process failure indicates that it is still a work in progress, but I have managed to develop a simple web application which consumes a JSON/REST web service with about 20 lines of code (excluding imports).

This has been my first time trying to use Scala and Play for anything potentially useful in real life, so I was pleasantly surprised at how productive the development cycle was - within a web browser.

Being able to add some code, see the syntax colouring highlight any newbie errors, trigger a compile and then hit my controller with a request without leaving the browser was borderline fun.

For my current project this stack should be sufficient, but once my Scala confidence gets a bit higher I expect to move on to trying out Akka and Spray.