Whoa! It’s been a while since my last post. In that time SimpliFit launched the SimpliFit Weight Loss Coach for Android and iOS, the SimpliFit Coaching program, and, just last week, Magical, a texting-based picture calorie tracker. Yes, we’ve been quite busy! And it’s given me a lot more content for future blog posts.

Today, though, I’ll build on my previous post, Validating iOS In-App Purchases With Laravel, by covering Android in-app purchases (IAPs) in Laravel. The process was completed for a Laravel 4 app, but the code I’ll be showing can be used in Laravel 5 as well.

The workflows for iOS and Android are fairly similar, and working with Android IAPs is every bit as frustrating as iOS was, thanks to equally lackluster documentation. If Apple and Google combined the best parts of their IAP systems, you’d get quite a good system. As it stands, though, both systems make you wish your app didn’t have IAPs.

But if you’re reading this, then you probably have the unenviable task of adding IAP verification to your app. First, I’d recommend you familiarize yourself with Google’s In-app Billing documentation and skim the Google Play Developer API. Also, I’m assuming that you’ve already created your app in your Google Play Developer Console and added the in-app products you’ll be offering.

On the front-end, we are again using the Cordova Purchase Plugin to mediate between our app and Google Play via Google’s In-App Billing Service. Since the SimpliFit app only had a monthly subscription (we’ve since made the app free), I’ll be discussing how to work with auto-renewing in-app subscriptions; however, this guide can easily be applied to a one-time IAP.

Android IAP Workflow

Just as with iOS, we have three IAP stages: 1) Retrieve product information, 2) Request payment, and 3) Deliver the product.

Stage 3 was the most involved stage for Laravel with iOS IAPs, and for Android the number of steps only grows. Take a look:

Android In-App Purchase Flow

  1. The Android app requests the list of products from Laravel
  2. Laravel returns the list of product identifiers currently available for purchase
  3. The Android app sends these product identifiers to In-App Billing Service
  4. The In-App Billing Service requests product information based on product identifiers
  5. The Play Store returns product information (title, price, etc.)
  6. The In-App Billing Service returns the product information from the Play Store
  7. The Android app displays the products to the user
  8. The user selects a product to purchase
  9. The Android app requests payment for product
  10. The In-App Billing Service prepares for a purchase by requesting the user’s Google Wallet password
  11. The user enters his/her password
  12. The In-App Billing Service sends the purchase request to the Play Store
  13. The Play Store processes the purchase and returns a purchase receipt
  14. The In-App Billing Service sends the receipt to the Android app
  15. The Android app sends the receipt data to Laravel for validation
  16. Laravel records the receipt data to create an audit trail
  17. Laravel authenticates itself with Google’s API servers using a Service Account
  18. Google authenticates the Service Account and returns an access key
  19. Laravel uses the access key to query the user’s purchase from the receipt’s purchase token
  20. The Play Store locates the purchase and returns a purchase resource
  21. Laravel reads the purchase resource and verifies the purchase
  22. Laravel unlocks the purchased content and notifies the Android app

As you can see, steps 1-16 are identical to the iOS IAP workflow. However, to communicate with Google’s API server, your server must authenticate itself first using OAuth. Thankfully, Google has a PHP library we can pull into Laravel to simplify things. We’ll get to that shortly.

NOTES:

  • Steps 1 and 2 can be accomplished via hardcoding the product identifiers in the Android app, just as in the iOS IAP workflow, rather than requesting them from the server. I strongly recommend making the extra call to get the products from the server–it’s a small hit to the loading time and allows you to modify products without having to update the app.
  • Steps 10 & 11 will not occur if the user made a purchase with his/her Google Wallet within the last 30 minutes.
  • Just as with iOS, steps 17-22 are not necessary but are highly recommended to verify purchases. If you are working with subscriptions, in particular, this process should be required. We never had someone try to fake a subscription in iOS, but we did have attempts on Android. If we had not had receipt verification on our server, those users would have received full access to our app as if they had paid.
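As the first note suggests, the server-side product list can be tiny. Here’s a sketch in Laravel 4 terms (the route path, helper name, and product identifier are all mine, for illustration):

```php
// The identifiers currently for sale, kept server-side so the catalog
// can change without an app update (identifier is illustrative)
function availableProductIds()
{
    return array('com.example.app.productName');
}

// Exposed to the app as JSON, e.g. in app/routes.php:
// Route::get('api/products', function () {
//     return Response::json(array('products' => availableProductIds()));
// });
```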

Retrieving Product Information and Requesting Payment

I don’t want to repeat myself, so please check my previous blog post for iOS on these two stages of the IAP process. I’m using the exact same models in Laravel as in the iOS workflow.

If you need guidance on setting up in-app products, check out Google’s documentation on Administering In-app Billing. We chose to make our product identifier for our subscription the same in Android and iOS to help us keep things more organized (i.e., com.example.app.productName).

And just as you can create test users in iOS to test your IAPs, Google allows you to grant Gmail accounts test access in the Google Play Developer Console under Settings->Account details. Two things to note here, though:

  1. The owner of the developer account cannot purchase products from him/herself and thus cannot be a test user.
  2. As of November 2013, Google had no method to test in-app subscriptions, only one-time in-app purchases. This was a MAJOR blunder on Google’s part. We ended up having to make actual subscription purchases and refund them to test the full flow. But now it does seem that Google has added the ability to test subscriptions, according to their Testing In-app Billing documentation, though I have not tested this yet.

Delivering Products

On a successful purchase, the Play Store will return a receipt to the Android app, which, through the Cordova Purchase Plugin, is sent via JSON in the following format:

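Here’s an illustrative example of that receipt JSON (all values are made up; a real purchaseToken and signature are long opaque strings):

```json
{
  "type": "android-playstore",
  "id": "12999556515565155651.5565135565155651",
  "purchaseToken": "purchase-token-from-google",
  "receipt": "{\"orderId\":\"12999556515565155651.5565135565155651\",\"packageName\":\"com.example.app\",\"productId\":\"com.example.app.productName\",\"purchaseTime\":1433641200000,\"purchaseState\":0,\"purchaseToken\":\"purchase-token-from-google\"}",
  "signature": "base64-encoded-signature"
}
```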
NOTE: This receipt structure is specific to the Cordova Purchase Plugin. If your app uses a different method to access the Google In-App Billing Service API, the receipt structure may differ. Refer to Google’s In-App Billing API for more information on receipt data.

Before we start breaking this receipt down, note that the “receipt” sub-parameter is a JSON object in string format. I suspect this is just the way the Cordova Purchase Plugin processes the Play Store receipt.

Breaking down the receipt by parameter, here’s what we have:

  • The “type” specifies this is an “android-playstore” purchase (as opposed to “ios-appstore” for an iOS purchase)
  • The “id” parameter is the Google Wallet Merchant Order Number.
  • The “purchaseToken” uniquely identifies a purchase for a given item and user pair and is generated by Google.
  • The “receipt” parameter contains a string with the JSON receipt. This contains:
    • “orderId”: the same identifier as “id” above
    • “packageName”: your Android app’s package name
    • “productId”: the identifier of the product which the user purchased
    • “purchaseTime”: the time the purchase was made in milliseconds since Epoch
    • “purchaseState”: the state of the order (0 = purchased, 1 = canceled, 2 = refunded)
    • “purchaseToken”: the same token as in “purchaseToken” above
  • The “signature” parameter contains the signature of the purchase data signed with the developer’s private key. This can be used to verify, in the mobile app itself, that the receipt came from Google if you do not want to use the server verification method.

Great, we have a receipt! Let’s move on to storing the data to create an audit trail. Before we do, let me remind you that I use Laracasts’ commander pattern in Laravel 4. As such, all the code below is from my StoreTransactionCommandHandler class and sub-classes. If I were to do the same in Laravel 5, all the logic from the handler class would simply move into the command class itself.

Store the receipt data

Now that we have both iOS and Android IAPs available to users, and since the purchase receipt handling is different for the two platforms, we need to know from which platform the incoming purchase came. Luckily, our mobile app sends a Client-Platform header with every request to our server, which indicates android or ios. So we’ll switch between our Android and iOS receipt handling logic based on that header.

For Android receipts, the data that is of most interest to us is the “receipt” sub-parameter, which is a string of JSON data. So first, let’s get that into a format we can work with, then we store it:

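A sketch of that decode-and-check step (the helper name and exception class are mine; the pending-transaction save happens afterwards via my Transaction model):

```php
// Decode the "receipt" sub-parameter, which arrives as a JSON string,
// and refuse receipts that lack a purchase token
function parseAndroidReceipt($receiptJson)
{
    $receipt = json_decode($receiptJson, true);

    if (!is_array($receipt) || empty($receipt['purchaseToken'])) {
        throw new UnexpectedValueException('Receipt is missing a purchase token.');
    }

    return $receipt; // associative array: orderId, productId, purchaseToken, etc.
}
```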
What I do here is grab the receipt from the command and decode the “receipt” string JSON into an associative array called $receipt. Then I store this receipt as a pending transaction.

Since my iOS guide, I’ve added a new error flow here, where I check to see if the purchaseToken parameter is present before sending it on to be saved into the database.

Note: At this point in the iOS workflow, we also set the URL endpoint to which we send receipt data for validation. For Android, however, Google does not offer a sandbox URL endpoint. If you want to test your IAPs, take a look at the Testing In-App Billing documentation.

Allow server access to the Google API

Before we can validate a receipt with Google, our server needs credentials to authenticate itself with Google’s API servers.

First, go to the Google Play Developer Console API Access Settings. Under “Linked Projects”, if you already have a project listed, select “Link” next to the existing project (if it isn’t already linked). If there are no existing projects, you’ll need to create a new project by clicking “Create new project”. This creates a new linked project titled “Google Play Android Developer”.

Next we need to create a service account to access the Google API. This service account is in effect a “user” that has permissions to access your Google Play Developer resources via the API. To create the service account, click the “Create Service Account” button at the bottom of the page. This will open a modal with instructions for setting up a service account. Go ahead and follow the instructions. When you’re on the step of creating the client ID, be sure to select “P12 Key” as the key type.


Once you click “Create Client ID” in the modal, the page will automatically download the P12 key file and will display the key file’s password. This key file will need to be accessed by your server to authenticate with Google but should not be accessible publicly. Once the modal closes, you’ll see the service account’s information. The only relevant piece we need (other than the downloaded key file) is the account’s email address.


We also need to enable the service account to use the Google Play Developer API. In the left-hand side menu, under “APIs & auth”, click “APIs” to view all the individual service APIs accessible via the Google Play Developer API.


Then under “Mobile APIs”, select “Google Play Developer API”. On the Google Play Developer API page, select “Enable API”.


Now the service account has access to the Google Play Developer API. If your server needs access to other APIs, find the appropriate APIs on the previous page and enable them.

Go back to the Google Play Developer Console and click “Done” in the open modal. The page will refresh and you should see your service account listed. Now just click “Grant Access”, and you’ll be asked to select the role and permissions for the service account. Since the server will only be getting information on existing IAPs, select only the “View financial reports” permission and click “Add User” (if you want your server to do more with the API, select the relevant permissions you’ll need).

Make the Request

Now that we have the necessary account details and permissions to access the Google API, we’re ready to make the request to Google. To help us with communicating with Google’s server, let’s pull in the Google PHP Client package to Laravel via the composer.json file:

"google/apiclient": "1.1.4"

As of this publication, the latest version of the Google PHP Client package is 1.1.4. According to the API Client Library for PHP, the Google PHP library is still in a beta state. This means that Google may introduce breaking changes into the library. The good news is that the API itself is at version 3, so that will likely remain stable for some time. For these reasons, I suggest pinning the specific version of the client library, as above.

With that package pulled in (after running composer update), here’s my request code with comments to guide you through it:

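Here’s a sketch of what that request code can look like with google/apiclient 1.1.4 (the key path, service-account email, and my ReceiptValidationException are placeholders; substitute your own):

```php
try {
    $client = new Google_Client();
    $client->setApplicationName('My App');

    // Authenticate as the service account using the downloaded P12 key
    $credentials = new Google_Auth_AssertionCredentials(
        'your-service-account@developer.gserviceaccount.com',
        array('https://www.googleapis.com/auth/androidpublisher'),
        file_get_contents(storage_path('keys/google-play.p12'))
    );
    $client->setAssertionCredentials($credentials);

    // Refresh the access token if needed (the extra authentication step)
    if ($client->getAuth()->isAccessTokenExpired()) {
        $client->getAuth()->refreshTokenWithAssertion();
    }

    // Query the subscription purchase by its purchase token
    $service = new Google_Service_AndroidPublisher($client);
    $subscription = $service->purchases_subscriptions->get(
        'com.example.app',          // your app's package name
        $receipt['productId'],      // the subscription's product ID
        $receipt['purchaseToken']   // purchase token from the receipt
    );
} catch (Google_Auth_Exception $e) {
    // Authentication with Google failed — surface our own exception,
    // handled at a higher level
    throw new ReceiptValidationException('Could not authenticate with Google.');
}
```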
As you can see, the only data needed from the receipt when making the request is the purchaseToken value (no need to build a JSON object as in the iOS process). The final line (the get on purchases_subscriptions) also includes the extra authentication step noted in the Android IAP Workflow section. If that authentication step fails, a Google_Auth_Exception is thrown. I wrapped the entire code in a try/catch so that if the call results in said exception, we throw our own exception. We then handle our exception at a higher level.


The mystery of the unnecessary P12 key file password

As a quick aside, you may have noticed that we didn’t pass in the key file’s password when creating the assertion credentials. In fact, we don’t need to use that password we received with the key file at all. If you take a look into the PHP client library on GitHub at line 57 of the AssertionaCredentials.php file, which contains the Google_Auth_AssertionCredentials class, you’ll see that the __construct function’s fourth argument is the key file’s password, and the default value is “notasecret”. As of this writing, all service account P12 key file passwords are “notasecret”. If that is no longer the case when you read this article, simply add the password as a fourth argument when creating the assertion credentials.


 

A valid subscription response will be in the form of:

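For example (abbreviated, with invented values; note the millisecond timestamps are strings, per the API’s int64 encoding):

```json
{
  "kind": "androidpublisher#subscriptionPurchase",
  "startTimeMillis": "1433641200000",
  "expiryTimeMillis": "1436233200000",
  "autoRenewing": true
}
```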
This is detailed in the Android Publisher API documentation of the Purchases.subscriptions resource.

An error response will look like:

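Roughly like this, using Google’s standard error envelope (the status code and message will vary):

```json
{
  "error": {
    "code": 404,
    "message": "The purchase token was not found."
  }
}
```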
If you’re validating a one-time product purchase receipt instead of a subscription, change the purchases_subscriptions get request to:

$product = $service->purchases_products->get($packageName, $productId, $purchaseToken);

The valid product response contents are detailed in the Purchases.products resource documentation.

Validate the response

Awesome! We have a response from Google. Let’s see if the response indicates that the receipt we received from the app was valid:

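A minimal sketch of that check (the function name is mine, for illustration):

```php
// Check that the subscription in Google's response has not yet expired.
// expiryTimeMillis is milliseconds since the Unix epoch, in UTC.
function subscriptionIsActive(array $subscription)
{
    if (!isset($subscription['expiryTimeMillis'])) {
        return false;
    }

    $expiryMillis = (float) $subscription['expiryTimeMillis'];
    $nowMillis    = microtime(true) * 1000; // current time in ms since epoch

    return $expiryMillis > $nowMillis;
}
```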
The code above is specific to validating a subscription response. And since it is a subscription, we need to check whether the subscription has already expired. This compares the current UTC time to the UTC expiration time from the subscription response. If your server’s default timezone is not set to UTC, then you will need to convert the expiration time to your timezone or convert the current time in your timezone to UTC.

Store the validated receipt and start the subscription

Almost done! Knowing we have a valid subscription receipt, we store the transaction information in our database, add an active subscription for the user to the database, and then respond OK to the mobile app’s receipt validation request to indicate the user’s purchase completed properly and was indeed valid.

Closing Notes

Validating in-app purchases for Android or iOS is a lengthy process, and, as I mentioned, the documentation isn’t always clear and laid out in a logical step-by-step manner. Hopefully this guide helps you implement the process in your own app. This should also give you a good start to implement other features, such as canceling/refunding subscriptions via the app (rather than having to do this through the Google Wallet Merchant Center).

Regarding subscriptions specifically: unlike in our iOS setup, an Android subscription has a fixed term length (for us it was one month) and will auto-renew. There are ways to set up webhooks through the Google Play Developer API Console, and perhaps you can set one specifically to listen for a user’s subscription failing to auto-renew or being canceled via the Play Store. I found that the simpler solution was to check with Google, at the subscription period’s expiration date and time, whether the subscription is still active. You can use the same purchaseToken as in the original receipt (another reason to store that receipt) and follow the same process to make another call to the Purchases.subscriptions API resource. If the subscription renewed successfully, the expiration time in the response will again be in the future, meaning the user has been charged for another subscription term. Note that the order number is altered slightly with each successful renewal after the initial purchase:

12999556515565155651.5565135565155651 (base order number)
12999556515565155651.5565135565155651..0 (first recurrence orderID)
12999556515565155651.5565135565155651..1 (second recurrence orderID)
12999556515565155651.5565135565155651..2 (third recurrence orderID)

This is detailed on the In-app Subscriptions page in the Subscription Order Numbers section. I keep track of the number of renewals via a counter column in my subscriptions table. And if I ever would have a need to get a specific recurrence of a subscription, I could construct that recurrence’s orderID from the renewal counter and the original order ID.
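Given the suffix scheme above and the renewal counter in my subscriptions table, reconstructing a recurrence’s orderID is a one-liner (the helper name is mine):

```php
// Construct the orderID of the Nth renewal from the base order number,
// following the "..N" suffix scheme shown above
function renewalOrderId($baseOrderId, $renewalNumber)
{
    // $renewalNumber of 0 refers to the original purchase (no suffix)
    if ($renewalNumber === 0) {
        return $baseOrderId;
    }

    return $baseOrderId . '..' . ($renewalNumber - 1);
}
```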

And a note on using the Google Play Developer Android Publisher API: Google lays out some best practices for using the API efficiently:

  • If you’re publishing apps via the API, limit publish calls to one per day,
  • Only query purchase status at the time of a new purchase,
  • Store purchase data on your server, and
  • Only query subscription status at the time of renewal.

These are good practices to follow, as your developer account does have a request quota per day (200,000 API requests per day “as a courtesy”). Also, this helps decrease the number of HTTP requests between your server and Google, speeding up server responses to your mobile app.

And one more thing…

On Twitter, @dror3go asked about dealing with restoring a purchase if a user gets a new phone:

It’s an interesting question, and there are three ways I can think of to deal with this:

  1. Have your server control all in-app features, including anything unlocked via purchases. For our app, whenever the user opened the mobile app, the app validated an encrypted API token in the app’s local storage with the Laravel server. During that validation, the server would also check if the associated user’s subscription was still active. If not, the server would return a specific error, and the app would throw up a blocking modal requiring the user to purchase or renew their subscription. Otherwise the server would respond with a 200 status code and the user continues to use the mobile app without any restrictions. The downside of this strategy is that this check will happen every time the user opens the app (unless the app was still active in the background), but this also means that if the user moves to a new phone or even switches to a different phone OS, the app will still work and all of the user’s purchases come with the user.
  2. As in #1, store the in-app features on the server, but also store that information in local storage on the phone. With this strategy, you will need to encrypt the data, so that someone cannot tamper with the data and give themselves a paid feature. If a user switches phones, the server will need to recognize that situation and instruct the app to enable the features for which the user paid. The disadvantage here is that now the mobile app itself may need logic to determine if a subscription is still active, but the server now has less processing to do.
  3. And finally, the last strategy is to have the mobile app control all in-app purchase product restoration. No (or limited) purchase data is stored and processed on the server. Instead, the app stores purchase data in local storage. And if the user switches phones, the app will have to have a way to restore these purchases. It would do this by checking if there is anything saved in local storage, and if not, it would have to do a call to the Google Play Store to see if the user has purchased any of the products for the app. In theory this is possible, though I have not investigated this strategy myself.

My vote is for strategy #1. It offloads as much logic as possible from the client onto the server. Theoretically, this means the client app will run faster, and if the server begins to use more resources, we can easily scale using AWS.

Laravel on AWS Elastic Beanstalk

How to use this guide

This guide will walk you through setting up a Laravel development environment on Elastic Beanstalk.

Before using Elastic Beanstalk, I was using a shared hosting account, and I got fed up with outdated packages and the lack of admin privileges. My goal with this guide was to create a dev server that closely mirrored my intended production environment in Elastic Beanstalk (another post on that coming soon). This is the exact setup I use for SimpliFit’s API dev server.

What is Elastic Beanstalk?

First, a quick overview. Amazon Web Services’ Elastic Beanstalk is a Platform as a Service (PaaS), allowing developers to deploy applications without the hassle of detailed server infrastructure, such as server provisioning or scaling to meet demand.

Elastic Beanstalk uses the following AWS products:

  • Elastic Compute Cloud (EC2)
  • Simple Storage Service (S3)
  • Simple Notification Service (SNS)
  • CloudWatch

Elastic Beanstalk can also manage an AWS Relational Database Service (RDS) instance; however, for a Laravel application this is not the preferred approach. When an RDS instance is created by and associated with an Elastic Beanstalk environment, it will also be terminated (and thus all data lost) when that environment is terminated. This will be further discussed in a later section.

Prerequisites

Prior to continuing with this guide, ensure the following prerequisites are met:

Create an RDS instance

For our Laravel application, we will be using a MySQL database, so we must first create a MySQL RDS instance in the RDS Management Console.

1. Select Engine

Select the MySQL engine.

Select RDS Engine

2. AZ Deployment

Select, as required by your application, whether you want to use Multi-AZ Deployment for this database.

Since we’ll be using this Elastic Beanstalk application for development only (and we’d like to stay within the free tier usage), we’ll select No here. A post in a few weeks will cover this process for a production environment.

Multi-AZ Deployment

3. Specify DB Details

Now you get to set the instance specifications and settings for your DB. For a development environment, we will use the following settings:

Specify DB Details

To determine the right settings for your Laravel application, it’s best to test various environments to see which fits your needs. As long as you have only one RDS instance running at a time, you’ll fall within the free usage tier.

4. Advanced Settings

Unless you have created a separate VPC for your application, select the default VPC and default DB Subnet Group. Set this DB to be publicly accessible and select an Availability Zone. This is the AZ that you will want to also launch your Elastic Beanstalk EC2 instances in.

Finally, give your database a name and click Launch DB Instance. Launching the instance may take some time, but luckily we can complete the next steps during this time.

Configure Advanced Settings

Generate Security Credentials

Note: If you already have an Access Key ID and Secret Access Key for your account, skip this section.

In the Identity and Access Management (IAM) console, you will need to create new security credentials for your account.

Note: You need to have access rights to the IAM console. If you are not the owner/admin of the AWS account, you will need to request that security credentials are created for your account.

Navigate to the “Users” section in the left-hand menu. Select your username and then select the “Security Credentials” accordion. Click on “Manage Access Keys”.

In the lower right-hand corner, click “Create Access Key”.

Click “Show User Security Credentials” to view the credentials you just created. Keep this tab open, as this data will be needed in a later step. Alternatively, you can download your credentials as a .csv file to store until needed.

Create Key Pair

To give us the ability to SSH into the EC2 instances that are created as part of an Elastic Beanstalk environment, should the need arise, we first need to create a key pair in the EC2 Management Console.

Select “Key Pairs” under “Network & Security” in the left-hand menu, and click “Create Key Pair”.

Create Key Pair

Give your Key Pair a name to describe it and click “Create”.

Your Key Pair is downloaded as a .pem file. You’ll need this file if you need to SSH into your EC2 instances, so don’t lose it!

Install the Eb CLI

The simplest way to interact with Elastic Beanstalk may be the GUI in the AWS Management Console; however, the eb command line interface (CLI) offers many of the same features and lets you deploy applications quickly and easily from your computer, especially when using a git workflow.

Download the eb tool and extract to a folder of your choosing on your computer. Add the following path to your PATH variable:

  • For Windows: <path to unzipped EB CLI package>/eb/windows
  • For Linux/Unix: <path to unzipped EB CLI package>/eb/linux/python2.7/

Initialize Elastic Beanstalk Application

Open command prompt/terminal and navigate to the root directory of your Laravel installation (and git repo).

Type the following command:

eb init

You will now be prompted to enter your AWS Access Key ID and AWS Secret Key, which were generated in a previous step. If you’ve already run eb init on this computer, your previous keys will already be pre-populated; just hit enter to select them.

Next you will choose the service region for your Elastic Beanstalk application. Choose the same region in which you created your RDS instance above. In this case, our RDS instance was created in “1) US East (Virginia)”.

Choose service region

Now give your application a name. The default value is the directory name. Then give your environment a name (e.g., development).

For your environment tier, select “1) WebServer::Standard::1.0”, as we are setting up a webserver.

Next, we get to select our solution stack. As of this writing, the latest version of the Amazon Linux AMI with PHP is v1.0.4 and runs PHP 5.5. We will be choosing that stack (#1).

Select solution stack

Since this will be a development application, we will be choosing a “SingleInstance” environment. This also ensures we’ll stay within the free usage tier for EC2, as long as we only have one running environment at a time. We created an RDS DB instance separately, so we’ll answer “n” for no to the next question.

Next we need to attach an instance profile. This gives the EC2 instances that are created security permissions to access other AWS services (such as S3 for storing logs and application versions). If you’ve created a profile already, choose that; otherwise select “1) [Create a default instance profile]”.

Choose instance profile

After a few seconds of waiting, you’re done!

So what just happened?

Congratulations, you created a new Elastic Beanstalk application, associated it with a git repo, and set some initial options for each environment that’s created. This includes which region to launch EC2 instances in, how many instances should be created, and what should be installed on those instances.

If you head over to the Elastic Beanstalk management console, you’ll see your application listed under “All Applications” (with no environments created, yet).

And if you navigate to your project directory in your code editor, you’ll see a new directory, “.elasticbeanstalk”. At the moment, this contains a config file with all the application preferences we just set.

You may also notice that your .gitignore file has changed. I wonder why… let’s take a look:

gitignore

Look at that! The new eb directory has already been added to our .gitignore file. Awesome! The line breaks just need a bit of cleaning up and it’s good to go.

Modifying Application Options

Normally, once eb start (this command starts an environment within your EB application and is covered in a later section) is run, a new file is created in the .elasticbeanstalk directory: optionsettings.app-environment-name. Since we want to set some of these options BEFORE an environment is started, we’ll create the file ourselves. This file contains all the options for Elastic Beanstalk, including options for creating new EC2 instances (and RDS instances). Refer to AWS Elastic Beanstalk Developer Guide – Option Values page for descriptions of each part of the code below. The code below is for an application that does not include an RDS instance (i.e., the RDS instance is created manually).

[aws:autoscaling:asg]
Custom Availability Zones=us-east-1a
MaxSize=1
MinSize=1

[aws:autoscaling:launchconfiguration]
EC2KeyName=sfbeta-aws
InstanceType=t2.micro

[aws:autoscaling:updatepolicy:rollingupdate]
RollingUpdateEnabled=false

[aws:ec2:vpc]
Subnets=
VPCId=

[aws:elasticbeanstalk:application]
Application Healthcheck URL=

[aws:elasticbeanstalk:application:environment]
PARAM1=
PARAM2=
PARAM3=
PARAM4=
PARAM5=

[aws:elasticbeanstalk:container:php:phpini]
allow_url_fopen=On
composer_options=--no-dev
display_errors=Off
document_root=/public
max_execution_time=60
memory_limit=256M
zlib.output_compression=Off

[aws:elasticbeanstalk:hostmanager]
LogPublicationControl=false

[aws:elasticbeanstalk:monitoring]
Automatically Terminate Unhealthy Instances=true

[aws:elasticbeanstalk:sns:topics]
Notification Endpoint=
Notification Protocol=email

First we start off with auto-scaling options. Since we’re running a single instance environment, MaxSize and MinSize are both 1. This means EB will ensure you always have one EC2 instance running (e.g., creating a new instance if the existing one goes down). We’re also specifying the Availability Zone we want our EC2 instances to be created in. EC2 instances in the same AZ as your RDS instance will cost less and connections between them will be faster. Set this option to the AZ in which you created your RDS instance.

The next option we’re editing is the EC2KeyName, which is the name of the key pair we created several sections ago. This will allow you to SSH into your EC2 instance(s) if the need arises (though you likely won’t need to).

The aws:elasticbeanstalk:application:environment section contains your environment variables, which are added by your config files. You don’t need to edit this section; even if you did, it doesn’t seem to actually set the environment variables (it just displays them).

Finally, we’re going to modify a few options under aws:elasticbeanstalk:container:php:phpini. We will set composer_options to --no-dev, so that dev add-ons aren’t installed when composer install is run. Last, we’ll set document_root to /public so that it points to Laravel’s public folder.

Add Environment Config Files

Located in the .ebextensions directory in the root of the project, the config files (*.config) contain commands for the environment to run and options to set. These config files run every time git aws.push is run (i.e., the environment is updated or a new EC2 instance within the environment is started), and they are run in alphabetical order. These config files SHOULD NOT be in your .gitignore file.

To start, we will create three files: 00environmentVariables.config, 01composer.config, and 02artisan.config.

Environment Variables

In the 00environmentVariables.config file, we will place all of the instructions for the application to modify the environment’s options, in this case to create environment variables (e.g., DB_HOST). Add the following code to this file:

option_settings:
   - namespace: aws:elasticbeanstalk:application:environment
     option_name: DB_HOST
     value: mysqldbname.dragegavysop.us-east-1.rds.amazonaws.com
   - option_name: DB_PORT
     value: 3306
   - option_name: DB_NAME
     value: dbname
   - option_name: DB_USER
     value: username
   - option_name: DB_PASS
     value: password

Here, namespace refers to the specific groups of options in .elasticbeanstalk/optionsettings.app-environment-name. Using the namespace aws:elasticbeanstalk:application:environment, we are stating that the options and values below it belong to that namespace.

DB_HOST will be the endpoint shown in the RDS dashboard for the RDS instance set up earlier.
DB_PORT is usually 3306, unless changed during the RDS instance setup.
DB_NAME is the name of the database within the RDS instance (not the RDS instance name).
DB_USER is the username that was created during the RDS setup process.
DB_PASS is the username’s password.

Here is where you will add other environment variables, such as a MailChimp API key, an SQS host, etc.

Note: These environment variables can also be set manually in the EB Software Configuration panel. If you do not want to have your DB credentials in your git repo, then you should set these variables manually instead.

Composer Commands

In the 01composer.config file, we will place all the composer commands to be run when a new instance is created or an existing instance is updated. Add the following code to this file:

commands:
   01updateComposer:
      command: export COMPOSER_HOME=/root && /usr/bin/composer.phar self-update

option_settings:
   - namespace: aws:elasticbeanstalk:application:environment
     option_name: COMPOSER_HOME
     value: /root

container_commands:
   01optimize:
      command: "/usr/bin/composer.phar dump-autoload --optimize"

First, commands are executed, which are run before the application and web server are set up. Here we self-update composer.phar to ensure the latest version is running on the instance.

Next we set a COMPOSER_HOME environment variable.

Last, container commands are executed, which are for the environment’s app container. These are run after the application and web server have been set up, and these commands have access to environment variables. Here we run composer optimize.

Note: EB will automatically run composer.phar install if it sees a composer.json file in the root directory AND does not find a vendor folder in the root directory. If your vendor folder is not in .gitignore, you will need to add composer.phar install to this file yourself.
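If you do need to run the install yourself, a container command along these lines would do it (a sketch; the 00 prefix is just to make it run before the optimize command, since container commands run in alphabetical order):

```yaml
container_commands:
   00install:
      command: "/usr/bin/composer.phar install --no-dev"
```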

Artisan Commands

In the 02artisan.config file, we specify container commands to run migrations and seeding.

These commands should ideally be run only once, or when you are adding/modifying tables. I also tend to just migrate:refresh the database every now and then on the development server, as errors tend to compound and exceptions start cropping up in my app.

container_commands:
   01migrate:
      command: "php artisan migrate --force"
   02seed:
      command: "php artisan db:seed --force"

Here we migrate to create the new database (including migrating the Auth Token package) and seed the database.

You’ll notice that the migrate and db:seed commands are separated. Why not just run migrate --seed? In Laravel 4.2, this seems to cause an error when run on Elastic Beanstalk. Separating the two commands allows the environment to set up properly without errors when using the --force flag.

Note: The --force option needs to be used here, otherwise the CLI will ask for confirmation to run each command and your commands will time out.

References: http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/customize-containers-ec2.html#customize-containers-format-options

Add Environment Variables to Database.php

Now that we have the config files set to create the database environment variables, we need to tell Laravel to use those for production.

Edit your database.php file so that your mysql connection parameters are set as below:

		'mysql' => array(
			'driver'    => 'mysql',
			'host'      => $_ENV['DB_HOST'],
			'port'      => $_ENV['DB_PORT'],
			'database'  => $_ENV['DB_NAME'],
			'username'  => $_ENV['DB_USER'],
			'password'  => $_ENV['DB_PASS'],
			'charset'   => 'utf8',
			'collation' => 'utf8_unicode_ci',
			'prefix'    => '',
		),

If you configured your RDS instance with your EB app (not recommended), then you could also set your connection options to:

		'mysql' => array(
			'driver'    => 'mysql',
			'host'      => $_SERVER['RDS_HOSTNAME'],
			'port'      => $_SERVER['RDS_PORT'],
			'database'  => $_SERVER['RDS_DB_NAME'],
			'username'  => $_SERVER['RDS_USERNAME'],
			'password'  => $_SERVER['RDS_PASSWORD'],
			'charset'   => 'utf8',
			'collation' => 'utf8_unicode_ci',
			'prefix'    => '',
		),

Note: If you are using a .env.*.php file to specify local database connection parameters, remember to add that file to your .gitignore.

If you do have a .env.local.php file, it would look something like this:

<?php

return array(
	'DB_HOST' => 'hostname',
	'DB_PORT' => 'port',
	'DB_NAME' => 'dbname',
	'DB_USER' => 'username',
	'DB_PASS' => 'password',
);
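To keep that file out of your repo (per the note above), you can append it to .gitignore; one quick way from the shell:

```shell
# Append the local env file to .gitignore so credentials stay out of git.
# (Assumes you run this from the project root; creates .gitignore if absent.)
echo ".env.local.php" >> .gitignore
```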

Git Commit

With all of these changes now complete, commit these changes to your git repo.

Add Security Group Inbound Rule

The last thing left to do before starting up your environment is to give the Elastic Beanstalk application access to the MySQL RDS instance.

Head to the AWS EC2 management console and click on “Security Groups” under “Network & Security”. Here you should see two security groups: the default security group and a security group created for your Elastic Beanstalk application.

Click on the Elastic Beanstalk security group (for us, it’s called development, just like our environment) and copy the Group ID.

Security Groups

Right click on the default security group and select “Edit Inbound Rules”.

Here we need to add a MySQL type rule with a Custom IP equal to the Elastic Beanstalk security group’s Group ID. While you’re here, add a MySQL type rule for your IP [Tip: in the Source drop-down, you can simply select My IP].

Inbound Rules

Start the EB environment

Run eb start in the command line/terminal. Choose “no” if you are asked whether you want to use the latest commit for the environment.

AWS will now set up all the resources necessary for your environment. This may take some time. Once complete, it will give you a URL at which you can access your server.

You can also view the status of your environment setup in the Elastic Beanstalk management console. Here you’ll see that it’s running Amazon’s Sample Application, but if you try to visit the URL you won’t receive a response. This is because the server is pointed to the /public folder where it would find Laravel, but you haven’t pushed your Laravel app onto the environment yet.

Run eb status to see the status of your environment (you can also see your environment’s status in the management console). If it’s green, move on to the next section.

Git Push

At this point, your environment is ready to update with your Laravel application.

Run git aws.push. After a few moments, it will upload your git repo to the environment and begin updating the environment.

If you run eb status at this point (or visit the management console), you’ll see the environment is still updating. I find it’s best to view the environment status via the management console, as you can see a running list of events below the status.

If an error occurs during the update, the event will list the command that caused the error (usually it’s an artisan command for me). Go to the Logs option in the left-hand menu and click the Snapshot Logs button. Once a log snapshot is available, click the “View log file” link to view the latest logs. Here you can investigate why an error occurred. Just search for the name of the command that caused the error.
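For example, once you’ve saved a snapshot locally, grep can jump you straight to the failing command. The filename eb-activity.log and the log lines below are hypothetical, simulated here only for illustration:

```shell
# Simulate a small excerpt of a downloaded EB log (illustrative content only):
printf '%s\n' \
  "Command 01migrate succeeded" \
  "Command 02seed failed with error code 1" \
  "  SQLSTATE[42S02]: Base table or view not found" > eb-activity.log

# Search for the failing command's name, with two lines of trailing context:
grep -A 2 "02seed" eb-activity.log
```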

Snapshot Logs

Done!

And that’s it! Navigate to the URL for your environment (e.g., environmentname-hdb582lbjd.elasticbeanstalk.com) to see your Laravel application live.

Deleting your Environment

Since we’ve added this environment’s Security Group ID to the inbound rules of the default Security Group, we need to first remove that rule in the EC2 Management Console.

After that rule is removed, deleting the environment is a simple command line/terminal command:

eb stop

Alternatively, you can terminate the environment from the management console from within your environment.

This may take some time as all the AWS resources created for your environment need to be deleted. You can monitor progress from the command line/terminal or from the Elastic Beanstalk Management Console.

Deleting your Application

To delete your application, ensure all environments have been successfully terminated. If there are un-terminated environments, check the event log for errors in that environment during the termination process.

Now run the command:

eb delete

A few seconds later your application will be deleted. This can also be done from the management console.

As I mentioned in my first post, when I started using Laravel I knew nothing about the concept of MVC. It was difficult to transition from writing pure PHP (which I had only learned 6 months prior to working with Laravel) to an MVC framework. There are a lot of great resources for MVC noobs (tut+’s MVC for Noobs is one) and introductions to Laravel (Laracasts’ free Laravel From Scratch series is one; Laravel Book’s Architecture of Laravel Applications (http://laravelbook.com/laravel-architecture/) is another), but let’s quickly recap the basic concepts.

The MVC Pattern

At its core, the MVC architectural pattern exists to help with the Separation of Concerns (https://en.wikipedia.org/wiki/Separation_of_concerns) in your code.  The MVC pattern consists of:

  • Models: represent stored data and enforce “business” rules/logic on the data (in Laravel, a model is analogous to a table in your database)
  • Views: present data to the user
  • Controllers: mediate between the View and the Model

In Laravel 4, the app directory has folders for controllers, models, and views. You would expect the typical flow of information would go like this:

  1. A route is invoked
  2. The route calls a function within a controller
  3. The controller uses a model to access data
  4. The controller passes that data to a view
  5. The view displays the data to the user

And in its simplest form, that’s exactly how the Laravel framework works. Looking at the files that are included in a new Laravel install (at least in Laravel 4.x; this is changing with Laravel 5.x), this made sense. And so I moved forward believing that everything must fall into a model, a view, or a controller.
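To make the pattern concrete, here is a framework-free PHP sketch of that five-step flow (the Habit classes and sample data are hypothetical, invented purely for illustration):

```php
<?php
// Model: represents stored data and enforces business rules on it.
class HabitModel {
    public function all() {
        // In a real app this would query the database.
        return ['Drink water', 'Walk daily'];
    }
}

// View: presents data to the user.
class HabitView {
    public function render(array $habits) {
        return 'Habits: ' . implode(', ', $habits);
    }
}

// Controller: mediates between the view and the model.
class HabitController {
    public function index(HabitModel $model, HabitView $view) {
        $habits = $model->all();       // 3. use a model to access data
        return $view->render($habits); // 4. pass that data to a view
    }
}

// 1-2. A route would normally invoke this controller function:
echo (new HabitController())->index(new HabitModel(), new HabitView()), "\n";
// prints "Habits: Drink water, Walk daily"
```

In Laravel itself, steps 1 and 2 would be a line in routes.php pointing at a controller action, and the model and view would be Eloquent and Blade rather than plain classes.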

My Controllers Need A Diet

The first feature I created for the SimpliFit beta API was simple: show all the habits a user has learned since they started with SimpliFit. We called this feature Achievements. “Easy enough,” I thought as I created a few models, coded the relationships, and seeded the database. In the end, my one controller function had ballooned to over 100 lines of code while my models were at about 30. I tested the feature and it worked.

Then I heard the saying “skinny controller, fat model”. Reality check time: my controller was pretty fat. Since I was coding an API, we didn’t have any views in Laravel for this feature. That just left the model, and so I figured all the business logic must go into the model.

As I coded more features, I ensured that my controllers only passed data between the user on the front end and the database, and I moved all the logic into the models. Even with this, I still have controllers that are 400+ lines of code (and only two functions), but my models are beastly—I have several models that are over 2,000 lines of code!

This is obviously a maintainability nightmare. With 49 models and 20 controllers in total and thousands of lines of code, I knew there had to be a better way of organizing my code. The standard MVC principles weren’t cutting it for me.

Repositories, Interfaces, and Commands, oh my!

When I was in the midst of coding SimpliFit beta v1 in June and July, I thought I had a good grasp on Laravel. I was able to code features and they worked as intended. At the time, I wasn’t concerned with coding to best practices, I just needed something that worked.

This is about the same time I subscribed to Laracasts.com (well worth the $9 per month) and started learning there was so much more to Laravel than models, views, and controllers. I’m still trying to wrap my mind around all the concepts, but now I know that I can employ various types of coding patterns and ideas (beyond models, views, and controllers):

  • repositories,
  • interfaces,
  • service providers,
  • commands,
  • events,
  • presenters,
  • entities, and
  • more I probably haven’t learned about yet.

A word of caution

A lot of beginner Laravel sources tend to oversimplify MVC concepts. Some even have you put code in your routes.php file, which, although it won’t break anything in Laravel, runs counter to the Separation of Concerns principle. When I first started, I thought I had to stick to models, views, or controllers. I didn’t know that there were more options.

If you’re an MVC noob like I was when I started with Laravel, if you take anything away from this article, let it be this: don’t constrain your code to just models, views, and controllers—learn about the other options that exist for organizing your code. Even if you don’t adopt those practices at first, you need to know about them so you can make more educated decisions about your app structure.

If I could go back to when I first started with Laravel, would I force myself to adopt these other coding patterns and ideas? Maybe. For us, getting our app out to users for testing was our first priority, and the faster we could achieve that, the better. But if I had known about my options, I may have at least coded in a way that would have allowed me to adopt these patterns down the road more easily.

 

Getting Set Up For Laravel

There are a ton of articles about how to set up Laravel and the author’s tools of choice, and I debated whether or not the Laravel community needs another. But seeing as I’ll be writing a lot of articles about Laravel and there may be people who, like me, have never used an MVC framework or Git before, I figured I’ll just write up a quick post. First up…

Setting up Git

If you were like me when you started coding, whenever you created a new version of a file, you’d save the old file with a date or previous version number tacked onto the file name. This quickly got out of hand and managing different versions of files became a huge headache for our alpha. This is where Git can help.

Continue reading

It was February 2nd, and I was heading to a Super Bowl/birthday party. I had quit my job only a month before to work full-time on my startup, SimpliFit (then called TrackFaster), and was coding the cobbled-together backend in PHP for our third Alpha version (more on Alpha v0.3 and lessons learned from that some other time).

At the party, a mutual friend introduced me to Jonathan Stassen, and I gave him the usual pitch. He just so happened to also be working at a startup and was learning a PHP framework called Laravel.

Continue reading