Hot on the heels of my Laravel on AWS Elastic Beanstalk Dev Guide (i.e., 2.5 years later), I’m happy to publish my Laravel 5 on AWS Elastic Beanstalk Production Guide! So let’s dive right into it.


There have been a lot of changes with Laravel and with Elastic Beanstalk since my Dev Guide in 2014. This guide steps you through the process of deploying a Laravel 5 app to AWS Elastic Beanstalk.

I presented this deployment flow during the February LaravelSF Meetup event at the AWS Loft in San Francisco. During my presentation I used my demo app, LaraSqrrl, to show the process, along with integration with S3 and SQS (upcoming posts).


As in my previous guide, there are some prerequisites before you jump into the guide:

  • You should have an AWS account.
  • You have git installed and a git repo initiated for your project.
  • You should familiarize yourself with Elastic Beanstalk.

RDS Database

I like to keep my RDS instance separated from Elastic Beanstalk, as RDS instances created by and associated with an Elastic Beanstalk environment will be terminated when the environment is terminated. Separating the RDS instance from the environment allows you to keep the same database regardless of the environment.

Step 1: Choose your engine

Choose whichever database engine your app is set up for. I generally use MySQL with Laravel, so that’s what I’ll select.


Step 2: Production vs. Development

For production apps, Multi-AZ deployment is recommended. The advantage here is that if one server goes down for any reason, you’ll have a backup ready to go. And if you choose to change any options on your database (e.g., increase allocated storage), the servers will be updated one at a time, meaning your database never suffers any downtime and seamlessly switches to the updated server.


Step 3: Specify Database Details

Now you’re ready to set up the details of your database. The important bits here are:

  • DB Instance Class – The size of the RDS instance. Choosing the right size is a difficult task, but db.t2.small is probably the smallest you should use for any production app. You can always upgrade the size as your app grows.
  • Multi-AZ Deployment – Enable or disable deploying to multiple Availability Zones (keep in mind this means you’ll be paying for 2 instances of the size specified).


Step 4: Network, Security, Database Name, Backup, Monitoring, and Maintenance

This is a pretty big section, but we’ll start with the Network and Security options. I’d suggest leaving everything at the default, except for the VPC Security Group, which we want to specify. There are 2 ways that I’ve set this up in the past (though I’m certain you can do it many other ways as well, so leave your experiences in the comments!):

  1. Use the default security group. I’ve done this when I plan to have one RDS instance host databases for several Elastic Beanstalk apps. I treat the default security group as the “database security group” and allow inbound connections for only my IP and the Elastic Beanstalk apps using the DB.
  2. Use the AWSEBSecuritygroup security group. I’ve done this when only one Elastic Beanstalk instance will access the RDS instance. Be sure to select the AWSEBSecuritygroup group (like in the screenshot below) and not the AWSEBLoadBalancer security group, as it’s the EC2 instances in the AWSEBSecuritygroup group that access RDS, not the load balancer!


After that you have some additional database options. There is only really one field of note here, and that’s Database Name. This is the name of the database schema that will be created within the MySQL database. You can also change the port, though I tend to leave it at the default.

And finally we have Backup, Monitoring, and Maintenance. Adjust the Backup retention period based on the data you’ll be storing. Generally only a few days is enough, and 7 days feels like overkill. If you don’t want any backup retention, just change this value to 0. When you do that, you’ll get this lovely warning:


Then adjust your Backup Window based on when you think your app will be used the least during the day.

Enhanced monitoring can give you insight into a lot of specifics of your RDS instance, but you’ll have to pay data transfer charges to CloudWatch Logs. Check out this AWS blog post about enhanced monitoring for more details.

Last but not least is server maintenance. I would recommend leaving Auto Minor Version Upgrade set to yes (to get bug-fix updates), and changing the Maintenance Window to a period of the day when your app will be used the least.

We’re ready to go now, so go ahead and click “Launch DB Instance”! It’ll take a bit for the instance to launch, but you can move on to the next section.


Security Credentials and Key Pair

If you followed my Elastic Beanstalk Dev Guide, then you should already have your AWS access key and a Key Pair so you can SSH into your EC2 instances. If not, visit that post and read those 2 sections (cmd+f or ctrl+f and search for “Generate Security Credentials”, since I don’t have in-page linking set up). Alternatively, skip the section on creating a Key Pair and create it during the Elastic Beanstalk setup process.

Keep these access keys in an easy-to-access location as you’ll need them in just a bit.

Install the new EB CLI

The easiest way to install the new EB CLI is via homebrew:

brew install awsebcli

And that’s it. If you don’t have homebrew or want to see the other ways to install the EB CLI, check out AWS’s EB CLI installation guide.

Initialize Your Elastic Beanstalk App

Open your command prompt/terminal and follow along to initialize Elastic Beanstalk for your app. Navigate to the root directory of your Laravel app and run the following command:

eb init

You’ll be prompted to set up a number of parameters for your app and account. First up, default service region:


Set this up in the region from which you think your app will be accessed the most. I usually default to US East.

Next you’ll have to enter your AWS credentials. These are the keys you either generated a few sections above or already had set up.


Now you’ll name your Elastic Beanstalk Application. Think of the application as the high-level collection for all the components (i.e., individual deployments, versions, and configs) for your app.


After your application name, the EB CLI will auto-detect the language you’re using, in this case PHP, and set the version of that language to use. The latest PHP version at the moment is 5.6. If you require PHP 7, you’ll need to configure your app to use a custom EC2 AMI (Amazon Machine Image). This is out of the scope of this post, but you can check out the Elastic Beanstalk documentation for Creating a Custom Amazon Machine Image (AMI).


Last up we have setting up SSH. This is an optional step, but I strongly recommend you set this up so you can access your EC2 instances via SSH. This is where you’ll specify your Key Pair name if you’ve already generated one, or generate a new one. If you generate a new Key Pair, you can optionally specify a passphrase, which you’ll enter every time you use your Key Pair to SSH into an EC2 instance.


And that’s it! Your Elastic Beanstalk app is now set up. Head over to the Elastic Beanstalk Management Console and you’ll see your app with no environments yet.


If you check out your project directory, you’ll see a new .elasticbeanstalk directory. This directory has also been added to your .gitignore file. In the directory you’ll find a config.yml file that specifies all the settings we just chose:

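The file in my case looked roughly like the following. This is a sketch: the application name, key pair name, and region reflect my demo setup, so your values will differ.

```yaml
branch-defaults:
  master:
    environment: null
global:
  application_name: larasqrrl
  default_ec2_keyname: larasqrrl-keypair
  default_platform: PHP 5.6
  default_region: us-east-1
  profile: eb-cli
  sc: git
```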

Create the Elastic Beanstalk Environment

Now we’re ready to set up our EB environment. If you followed my Elastic Beanstalk Dev Guide, then you remember being prompted to set the various EB config options before you launched anything, but the new EB CLI changes this process a bit. When you launch an EB environment, it uses some default configuration options, but you no longer have the ability to set those options through the CLI via prompts (e.g., single instance vs. load balanced environment) before you launch. You can create an environment and pass some options with the eb create command, but some important settings are missing (e.g., composer install option and document root).

You have 2 routes to deal with this:

  1. If you’ve already created an EB environment and want to use the same environment configuration options, you can save that environment’s configuration and use it to create a new environment. If this is you, then start at Step 2 below.
  2. If you’ve never created an EB environment, then you’ll need to create an environment first, download the saved configuration, modify the configuration options, save the configuration, then use it to create a new environment. For this, start at Step 1 below.

Step 1: Create the default environment

If you want to create an auto-scaling environment, run:

eb create environment-name -i t2.micro --scale 2 

If you want to create a single-instance environment, run:

eb create environment-name -i t2.micro --single 

You’ll want to input your own values for the various options, but here are what they mean:

  1. -i value – the instance type; check out the list of EC2 Instance Types to choose one that fits your needs.
  2. --scale value – the starting size of the auto-scaling group.
  3. --single – specify a single-instance environment.

If you just type eb create without any options, you’ll be prompted to specify some information (such as environment name). Here’s what that looks like:


This process can take some time. Whether you pass in options to the eb create statement or not, once the environment is created you’ll see:


The eb create method has several other parameters you can specify, so I encourage you to check those out on the eb create reference page.

Step 2: Save the environment configuration locally, modify, and upload to S3

Next up we want to modify our environment to our needs for a Laravel app. First up, run eb config save --cfg configName where configName is the name of your choice for this config file.


This downloads the config file being used for the current environment to your local environment. Open this file in your favorite editor. Here’s what my config file looks like for an auto-scaling group of 1 instance:
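A trimmed-down sketch of the relevant pieces is below. The IDs, key pair name, and instance type are placeholders from my demo setup; the phpini section is the part to focus on.

```yaml
EnvironmentConfigurationMetadata:
  Description: Configuration for a Laravel auto-scaling environment
SolutionStack: 64bit Amazon Linux running PHP 5.6
OptionSettings:
  aws:elasticbeanstalk:container:php:phpini:
    document_root: /public
    composer_options: --no-dev
  aws:autoscaling:asg:
    MinSize: '1'
    MaxSize: '1'
  aws:autoscaling:launchconfiguration:
    InstanceType: t2.micro
    EC2KeyName: larasqrrl-keypair
```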

The most important part of the config file is the aws:elasticbeanstalk:container:php:phpini section, whose parameters tell the server to serve from the /public folder and to run composer with the --no-dev option.

I would recommend that you do not copy my config file and overwrite yours, as some of the IDs and group names will be different. Only choose the pieces that you need.

To read more about these YAML config files, check out AWS’s documentation on the Environment Manifest.

Alright, we have our config file modified, now we need to upload it to S3 so we can use the config to launch new environments. Run eb config put configName where configName is what you used in the first download step.

These config files can be confusing, and I’d recommend you read AWS’s documentation to learn a bit more.

Step 3: Create a new environment using the saved configuration

Now that you have a config file for Elastic Beanstalk set up for Laravel, you can either:

  1. Terminate your current environment and create a new environment using the configuration file. To do this, run:
    1. eb terminate environment_name
    2. eb create environment_name --cfg configName
  2. Modify the existing running environment by running eb config --cfg configName

I’ve had mixed results with modifying the running environment, so I usually just start with a clean slate. I encourage you to try both methods and see which works best for you.

Elastic Beanstalk Config Files

Although we have the Elastic Beanstalk environment set up, we still need to create some configuration files that Elastic Beanstalk will run at deploy time. This includes setting up composer, cron, and environment variables.

When you first run eb init for your repo, the eb CLI creates a .ebextensions folder in the root directory of your repo. This is where you put configuration files to run at deploy time. For more information about these files, check out the Advanced Customization With Configuration Files (.ebextensions) documentation.


  • The config files generally run in alphabetical order by file name, which is why I number my files to ensure a specific order. There are some caveats, where certain types of commands run first regardless; that information can be found in the above documentation.
  • Be sure to commit these config files to your repo, otherwise they won’t be part of the deploy to Elastic Beanstalk!


The setup.config is pretty simple:
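Here’s a sketch of what that file can contain. The composer.phar path is an assumption based on the stock PHP platform, so verify it on your instances.

```yaml
option_settings:
  - namespace: aws:elasticbeanstalk:application:environment
    option_name: COMPOSER_HOME
    value: /root
  - namespace: aws:elasticbeanstalk:container:php:phpini
    option_name: document_root
    value: /public
  - namespace: aws:elasticbeanstalk:container:php:phpini
    option_name: composer_options
    value: --no-dev

commands:
  01_update_composer:
    command: /usr/bin/composer.phar self-update
```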

We set COMPOSER_HOME, update Composer, and optimize Composer. I’ve also included the document_root and composer_options parameters here in case you’d prefer to set these options in this file instead of in the saved environment .cfg.yml file.


Committing your .env file to your repo is bad practice, and can be dangerous for public repos. Committing your environment variables to Elastic Beanstalk .config files is also bad practice. So where does that leave you? Well, my preferred method is to keep a production .env file in a private S3 bucket, and pull it in while deploying.

Here’s my config script for that:
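Roughly, it’s an .ebextensions file along these lines. The bucket name and role are from my demo setup, and the temporary file path is my own choice, so adjust both for your app.

```yaml
Resources:
  AWSEBAutoScalingGroup:
    Metadata:
      AWS::CloudFormation::Authentication:
        S3Access:
          type: S3
          roleName: aws-elasticbeanstalk-ec2-role
          buckets: larasqrrl-env

files:
  "/tmp/app.env":
    mode: "000644"
    owner: root
    group: root
    authentication: "S3Access"
    source: https://s3.amazonaws.com/larasqrrl-env/.env

container_commands:
  01_move_env:
    command: "mv /tmp/app.env /var/app/ondeck/.env"
```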

What this script does is apply an S3 role to the instance to access the app-env bucket where I have my production .env file. Next it fetches the file from S3 and moves it to a temporary folder. Lastly, it moves the file to the /var/app/ondeck directory, which is where the currently deploying app is set up before being moved to the current folder.

Before this script can do its magic, though, we need to set up S3!

Go to S3 and create a new bucket. In my case, I’m naming it for my app, larasqrrl-env.

Next, click on the newly created bucket, go to its properties, and go to permissions.

Add or edit the bucket policy to the following:
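A sketch of that policy follows. The account ID is a placeholder, and the bucket name is from my demo, so substitute your own values.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::123456789012:role/aws-elasticbeanstalk-ec2-role"
      },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::larasqrrl-env/*"
    }
  ]
}
```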

The “AWS” parameter is the role ARN that will have access to this bucket. You can get your Elastic Beanstalk role ARN by going to the IAM console and clicking on Roles.


Select the aws-elasticbeanstalk-ec2-role (or the custom EC2 role you created, if you chose to do so), and copy the Role ARN. Use this ARN in your bucket policy above.

With all that set up, go ahead and upload your .env file to your S3 bucket.


When deploying your Laravel app for the first time, you will likely need to run a migration and maybe seed the database. This file contains those Artisan commands.
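As a sketch, a config file along those lines might look like this. The file layout and permissions command are my own choices rather than the original gist, so treat them as a starting point.

```yaml
container_commands:
  01_migrate:
    command: "php artisan migrate --force"
    leader_only: true
  02_seed:
    command: "php artisan db:seed --force"
    leader_only: true
  03_clear_cache:
    command: "php artisan cache:clear"
  04_optimize:
    command: "php artisan optimize"
  05_permissions:
    command: "chmod -R 775 storage"
```

container_commands run from the app staging directory (/var/app/ondeck), so the relative storage path above refers to your Laravel folder.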

After the first deploy, comment out the db:seed command, and the migrate command if it isn’t needed. You’ll notice these two commands have the leader_only: true parameter, which means they run on only a single instance (the leader) rather than on every EC2 instance in your auto-scaling group.

The remaining commands clear the cache, optimize Artisan, and finally set the proper permissions on the Laravel folder.


This file will set up Supervisor to monitor your queue workers. If you will have separate queue worker instances, then this file isn’t necessary for this environment.
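A sketch of such a file is below, assuming Supervisor is installed via easy_install and that a minimal supervisord.conf is written inline; the program command, paths, and test guard are my assumptions, not the original gist.

```yaml
commands:
  01_install_supervisor:
    command: "easy_install supervisor"
    test: "[ ! -f /usr/local/bin/supervisord ]"

files:
  "/etc/supervisord.conf":
    mode: "000644"
    owner: root
    group: root
    content: |
      [supervisord]
      logfile=/var/log/supervisord.log

      [program:queue-worker]
      command=php /var/app/current/artisan queue:listen --tries=3
      autostart=true
      autorestart=true

container_commands:
  01_start_supervisor:
    command: "supervisord -c /etc/supervisord.conf"
```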

Note: After the first deploy, remove this file fully, as re-running these commands will cause errors on deploy. I’m looking into ways to optionally do this when running updates to ensure when your app auto-scales, supervisor starts up on the new EC2 instances. If you have suggestions, please leave a comment!


This file sets up the Artisan Scheduler cron job for your app. If your app won’t be using the Scheduler, omit this file from your config files.
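A sketch of that cron file follows. I’m assuming the webapp user (the default user the EB PHP platform runs your app as) and the standard /var/app/current path.

```yaml
files:
  "/etc/cron.d/artisan-schedule":
    mode: "000644"
    owner: root
    group: root
    content: |
      * * * * * webapp php /var/app/current/artisan schedule:run >> /dev/null 2>&1
```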

Deploy your Laravel app

Alright, we have our Elastic Beanstalk environment configured and our deploy config files set! We’re finally ready to deploy. Luckily, this part is easy. Just run:

eb deploy environment_name

That’s it! You’re up and running on Elastic Beanstalk! Navigate to the URL for your environment to see your Laravel application live.

Deleting your Environment

As with the dev guide, since we’ve added this environment’s Security Group ID to the inbound rules of the default Security Group, we need to first remove that rule in the EC2 Management Console.

After that rule is removed, deleting the environment is a simple command line/terminal command:

eb terminate environment_name


Did I miss something? Have questions about the process? Leave a comment!

Hey there! I know it’s been a while since I’ve posted. Since my last post, the team and I decided to shut down SimpliFit, and I’m now at Infuse! Though I’m sad that SimpliFit did not work out, I’m excited about this next stage in my career.

I’m also happy to say that in September I became the main organizer for the LaravelSF Meetup. Lots of exciting stuff coming up, and I’ll have a new post this week for deploying Laravel 5 to Elastic Beanstalk.

On to bigger and better things!

P.S. You may have noticed that the blog sidebar is a bit outdated. That’s definitely on my to-do list, so expect a site refresh within the next few weeks (realistically months)!

Whoah! It’s been a while since my last post. In that time SimpliFit launched the SimpliFit Weight Loss Coach for Android and iOS, the SimpliFit Coaching program, and, just last week, Magical, a texting-based picture calorie tracker. Yes, we’ve been quite busy! And it’s given me a lot more content for future blog posts.

Today, though, I’ll build on my previous post, Validating iOS In-App Purchases With Laravel, by covering Android in-app purchases (IAPs) in Laravel. The process was completed for a Laravel 4 app, but the code I’ll be showing can be used in Laravel 5 as well.

The workflows between iOS and Android are fairly similar, and just as working with iOS IAPs was frustrating due to lackluster documentation, working with Android IAPs is just as frustrating. If Apple and Google would take the best parts of each of their IAP systems, you would get quite a good system. As it stands, though, both systems make you wish your app didn’t have IAPs.

But if you’re reading this, then you probably have the unenviable task of adding IAP verification to your app. First, I’d recommend you get familiar with Google’s In-app Billing documentation and skim the Google Play Developer API. Also, I’m assuming that you’ve already created your app in your Google Play Developer Console and added the in-app products you’ll be offering.

On the front-end, we are again using the Cordova Purchase Plugin to mediate between our app and Google Play via Google’s In-App Billing Service. Since the SimpliFit app only had a monthly subscription (we since made the app free), I’ll be discussing how to work with in-app auto-renewing subscriptions; however, this guide can easily be applied to a one-time IAP.

Android IAP Workflow

Just as with iOS, we have three IAP stages: 1) Retrieve product information, 2) Request payment, and 3) Deliver the product.

Stage 3 was the most involved for Laravel for iOS IAPs, and for Android the steps increase. Take a look:

Android In-App Purchase Flow

  1. The Android app requests the list of products from Laravel
  2. Laravel returns the list of product identifiers currently available for purchase
  3. The Android app sends these product identifiers to In-App Billing Service
  4. The In-App Billing Service requests product information based on product identifiers
  5. The Play Store returns product information (title, price, etc.)
  6. The In-App Billing Service returns the product information from the Play Store
  7. The Android app displays the products to the user
  8. The user selects a product to purchase
  9. The Android app requests payment for product
  10. The In-App Billing Service prepares for a purchase by requesting the user’s Google Wallet password
  11. The user enters his/her password
  12. The In-App Billing Service sends the purchase request to the Play Store
  13. The Play Store processes the purchase and returns a purchase receipt
  14. The In-App Billing Service sends the receipt to the Android app
  15. The Android app sends the receipt data to Laravel for validation
  16. Laravel records the receipt data to create an audit trail
  17. Laravel authenticates itself with Google’s API servers using a Service Account
  18. Google authenticates the Service Account and returns an access key
  19. Laravel uses the access key to query the user’s purchase from the receipt’s purchase token
  20. The Play Store locates the purchase and returns a purchase resource
  21. Laravel reads the purchase resource and verifies the purchase
  22. Laravel unlocks the purchased content and notifies the Android app

As you can see, steps 1-16 are identical to the iOS IAP workflow. However, to communicate with Google’s API server, your server must authenticate itself first using OAuth. Thankfully, Google has a PHP library we can pull into Laravel to simplify things. We’ll get to that shortly.


  • Steps 1 and 2 can be accomplished by hardcoding the product identifiers in the Android app, just as in the iOS IAP workflow, rather than requesting them from the server. I strongly recommend making the extra call to get the products from the server; it’s a small hit to the loading time and allows you to modify products without having to update the app.
  • Steps 10 & 11 will not occur if the user made a purchase with his/her Google Wallet within the last 30 minutes.
  • Just as with iOS, steps 17-22 are not necessary but are highly recommended to verify purchases. If you are working with subscriptions, in particular, this process should be required. We never had someone try to fake a subscription in iOS, but we did have attempts on Android. If we had not had receipt verification on our server, those users would have received full access to our app as if they had paid.

Retrieving Product Information and Requesting Payment

I don’t want to repeat myself, so please check my previous blog post for iOS on these two stages of the IAP process. I’m using the exact same models in Laravel as in the iOS workflow.

If you need guidance on setting up in-app products, check out Google’s documentation on Administering In-app Billing. We chose to make the product identifier for our subscription the same on Android and iOS to help us keep things more organized.

And just as you can create test users in iOS to test your IAPs, Google allows you to grant gmail accounts test access in the Google Play Developer Console under Settings->Account details. Two things to note here, though:

  1. The owner of the developer account cannot purchase products from him/herself and thus cannot be a test user.
  2. As of November 2013, Google had no method to test in-app subscriptions, only one-time in-app purchases. This was a MAJOR blunder on Google’s part. We ended up having to make actual subscription purchases and refund them to test the full flow. But now it does seem that Google has added the ability to test subscriptions, according to their Testing In-app Billing documentation, though I have not tested this yet.

Delivering Products

On a successful purchase, the Play Store will return a receipt to the Android app, which, through the Cordova Purchase Plugin, is sent via JSON in the following format:
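The original embed is gone, but based on the parameter breakdown below, the payload looks roughly like this (all values are placeholders):

```json
{
  "type": "android-playstore",
  "id": "12999763169054705758.1371079406387615",
  "purchaseToken": "example-purchase-token",
  "receipt": "{\"orderId\":\"12999763169054705758.1371079406387615\",\"packageName\":\"com.example.app\",\"productId\":\"monthly_subscription\",\"purchaseTime\":1433817600000,\"purchaseState\":0,\"purchaseToken\":\"example-purchase-token\"}",
  "signature": "base64-encoded-signature"
}
```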

NOTE: This receipt structure is specific to the Cordova Purchase Plugin. If your app uses a different method to access the Google In-App Billing Service API, the receipt structure may differ. Refer to Google’s In-App Billing API for more information on receipt data.

Before we start breaking this receipt down, note that the “receipt” sub-parameter is a JSON object in string format. I suspect this is just the way the Cordova Purchase Plugin processes the Play Store receipt.

Breaking down the receipt by parameter, here’s what we have:

  • The “type” specifies this is an “android-playstore” purchase (as opposed to “ios-appstore” for an iOS purchase)
  • The “id” parameter is the Google Wallet Merchant Order Number.
  • The “purchaseToken” uniquely identifies a purchase for a given item and user pair and is generated by Google.
  • The “receipt” parameter contains a string with the JSON receipt. This contains:
    • “orderId”: the same identifier as “id” above
    • “packageName”: your Android app’s package name
    • “productId”: the identifier of the product which the user purchased
    • “purchaseTime”: the time the purchase was made in milliseconds since Epoch
    • “purchaseState”: the state of the order (0 = purchased, 1 = canceled, 2 = refunded)
    • “purchaseToken”: the same token as in “purchaseToken” above
  • The “signature” parameter contains the signature of the purchase data signed with the developer’s private key. This can be used to verify, in the mobile app itself, that the receipt came from Google if you do not want to use the server verification method.

Great, we have a receipt! Let’s move on to storing the data to create an audit trail. Before we do, let me remind you that I use Laracast’s commander pattern in Laravel 4. As such, all the code below is from my StoreTransactionCommandHandler class and sub-classes. If I were to do the same in Laravel 5, all the logic from the handler class would simply be added to the command class itself.

Store the receipt data

Now that we have both iOS and Android IAPs available to users, and since the purchase receipt handling is different for the two platforms, we need to know from which platform the incoming purchase came. Luckily, our mobile app sends a Client-Platform header with every request to our server, which indicates android or ios. So we’ll switch between our Android and iOS receipt handling logic based on that header.

For Android receipts, the data that is of most interest to us is the “receipt” sub-parameter, which is a string of JSON data. So first, let’s get that into a format we can work with, then we store it:
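A minimal sketch of the decoding step, assuming the receipt payload arrives as an associative array; the helper name and exception choice are mine, not from the original handler class:

```php
<?php

// Hypothetical helper: decode the "receipt" sub-parameter, which is
// itself a JSON string, into an associative array we can work with.
function decodeAndroidReceipt(array $payload)
{
    $receipt = json_decode($payload['receipt'], true);

    // Error flow: make sure the purchaseToken is present before
    // saving anything to the database.
    if (!is_array($receipt) || !isset($receipt['purchaseToken'])) {
        throw new UnexpectedValueException('Receipt is missing its purchaseToken.');
    }

    return $receipt;
}
```

The decoded array can then be stored as a pending transaction exactly as in the iOS flow.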

What I do here is grab the receipt from the command and decode the “receipt” string JSON into an associative array called $receipt. Then I store this receipt as a pending transaction.

Since my iOS guide, I’ve added a new error flow here, where I check to see if the purchaseToken parameter is present before sending it on to be saved into the database.

Note: At this point in the iOS workflow, we also set the URL endpoint to which we send receipt data for validation. For Android, however, Google does not offer a sandbox URL endpoint. If you want to test  your IAPs, take a look at the Testing In-App Billing documentation.

Allow server access to the Google API

Before being able to validate a receipt with Google’s servers, we need the necessary data to authenticate with Google’s servers.

First, go to the Google Play Developer Console API Access Settings. Under “Linked Projects”, if you already have a project listed, select “Link” next to the existing project (if it isn’t already linked). If there are no existing projects, you’ll need to create a new project by clicking “Create new project”. This creates a new linked project titled “Google Play Android Developer”.

Next we need to create a service account to access the Google API. This service account is in effect a “user” that has permissions to access your Google Play Developer resources via the API. To create the service account, click the “Create Service Account” button at the bottom of the page. This will open a modal with instructions for setting up a service account. Go ahead and follow the instructions. When you’re on the step of creating the client ID, be sure to select “P12 Key” as the key type.

Screen Shot 2015-06-08 at 10.01.43 PM

Once you click “Create Client ID” in the modal, the page will automatically download the P12 key file and will display the key file’s password. This key file will need to be accessed by your server to authenticate with Google but should not be accessible publicly. Once the modal closes, you’ll see the service account’s information. The only relevant piece we need (other than the downloaded key file) is the account’s email address.

Screen Shot 2015-06-08 at 4.02.42 PM

We also need to enable the service account to use the Google Play Developer API. In the left-hand side menu, under “APIs & auth”, click “APIs” to view all the individual service APIs accessible via the Google Play Developer API.

Screen Shot 2015-06-09 at 8.13.58 AM

Then under “Mobile APIs”, select “Google Play Developer API”. On the Google Play Developer API page, select “Enable API”.

Screen Shot 2015-06-09 at 8.16.04 AM

Now the service account has access to the Google Play Developer API. If your server needs access to other APIs, find the appropriate APIs on the previous page and enable them.

Go back to the Google Play Developer Console and click “Done” in the open modal. The page will refresh and you should see your service account listed. Now just click “Grant Access”, and you’ll be asked to select the role and permissions for the service account. Since the server will only be getting information on existing IAPs, select only the “View financial reports” permission and click “Add User” (if you want your server to do more with the API, select the relevant permissions you’ll need).

Make the Request

Now that we have the necessary account details and permissions to access the Google API, we’re ready to make the request to Google. To help us with communicating with Google’s server, let’s pull in the Google PHP Client package to Laravel via the composer.json file:

"google/apiclient": "1.1.4"

As of this publication, the latest version of the Google PHP Client package is 1.1.4. According to the API Client Library for PHP, the Google PHP library is still in a beta state. This means that Google may introduce breaking changes into the library. The good news is that the API itself is in version 3, so that will likely remain stable for some time. For these reasons, I suggest stating the specific version of the client library you want to stick with.

With that package pulled in (after running composer update), here’s my request code with comments to guide you through it:
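The original gist is no longer embedded, so here is a sketch of the same request, assuming google/apiclient ~1.1 is installed via Composer; the service account email and key path are placeholders from the API Access setup:

```php
<?php

require 'vendor/autoload.php';

try {
    $client = new Google_Client();
    $client->setApplicationName('LaraSqrrl');

    // Authenticate as the service account (steps 17-18 of the workflow).
    $credentials = new Google_Auth_AssertionCredentials(
        'your-service-account@developer.gserviceaccount.com',
        ['https://www.googleapis.com/auth/androidpublisher'],
        file_get_contents('/path/to/your-key.p12')
    );
    $client->setAssertionCredentials($credentials);

    $service = new Google_Service_AndroidPublisher($client);

    // Query the user's purchase with the receipt's purchaseToken
    // (steps 19-20 of the workflow).
    $subscription = $service->purchases_subscriptions->get(
        $packageName,
        $productId,
        $purchaseToken
    );
} catch (Google_Auth_Exception $e) {
    // Authentication with Google failed; throw our own exception
    // to be handled at a higher level.
    throw new Exception('Unable to authenticate with the Google API.');
}
```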

As you can see, the only data needed from the receipt when making the request is the purchaseToken value (no need to build a JSON object as in the iOS process). The final line (the get on purchases_subscriptions) also includes the extra authentication step noted in the Android IAP Workflow section. If that authentication step fails, a Google_Auth_Exception is thrown. I wrapped the entire code in a try/catch so that if the call results in said exception, we throw our own exception. We then handle our exception at a higher level.

The mystery of the unnecessary P12 key file password

As a quick aside, you may have noticed that we didn’t pass in the key file’s password when creating the assertion credentials. In fact, we don’t need to use that password we received with the key file at all. If you take a look into the PHP client library on GitHub at line 57 of the AssertionCredentials.php file, which contains the Google_Auth_AssertionCredentials class, you’ll see that the __construct function’s fourth argument is the key file’s password, and the default value is “notasecret”. As of this writing, all service account P12 key file passwords are “notasecret”. If that is no longer the case when you read this article, simply add the password as a fourth argument when creating the assertion credentials.


A valid subscription response will be in the form of:
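Based on the Purchases.subscriptions resource, the response looks roughly like this (the timestamps are placeholders):

```json
{
  "kind": "androidpublisher#subscriptionPurchase",
  "startTimeMillis": "1433817600000",
  "expiryTimeMillis": "1436409600000",
  "autoRenewing": true
}
```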

This is detailed in the Android Publisher API documentation of the Purchases.subscriptions resource.

An error response will look like:
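Errors come back in Google’s standard API error envelope, roughly like the following (the reason, code, and message shown are illustrative):

```json
{
  "error": {
    "errors": [
      {
        "domain": "global",
        "reason": "invalid",
        "message": "Invalid Value"
      }
    ],
    "code": 400,
    "message": "Invalid Value"
  }
}
```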

If you’re validating a one-time product purchase receipt instead of a subscription, change the purchases_subscriptions get request to:

$product = $service->purchases_products->get($packageName, $productId, $purchaseToken);

The valid product response contents are detailed in the Purchases.products resource documentation.

Validate the response

Awesome! We have a response from Google. Let’s see if the response indicates that the receipt we received from the app was valid:
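A sketch of the expiration check, comparing epoch-based values so no timezone conversion is needed in this form; the function name is mine, and $response stands in for the decoded Purchases.subscriptions resource:

```php
<?php

// Hypothetical helper: decide whether a subscription response is still
// active. expiryTimeMillis is milliseconds since the Unix epoch (UTC).
function subscriptionIsActive(array $response, $nowSeconds = null)
{
    $now = ($nowSeconds !== null) ? $nowSeconds : time();

    if (!isset($response['expiryTimeMillis'])) {
        return false;
    }

    // Convert milliseconds to seconds and compare against the current
    // time; both values are epoch-based.
    $expirySeconds = (int) ($response['expiryTimeMillis'] / 1000);

    return $expirySeconds > $now;
}
```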

The code above is specific to validating a subscription response. And since it is a subscription, we need to check whether the subscription has already expired, comparing the current UTC time to the expiration time from the subscription response. If your server’s default timezone is not set to UTC, then you will need to convert the expiration time to your timezone or convert your current local time to UTC.
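A minimal version of that check (the helper name is mine), assuming the response’s expiration timestamp in milliseconds is already in hand and the server runs in UTC:

```php
// Returns true while the subscription's expiration (milliseconds since
// the epoch, as Google reports it) is still in the future.
function subscriptionStillActive(int $expiryMillis, ?int $nowSeconds = null): bool
{
    $nowSeconds = $nowSeconds ?? time();
    return ($expiryMillis / 1000) > $nowSeconds;
}
```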

Store the validated receipt and start the subscription

Almost done! Knowing we have a valid subscription receipt, we store the transaction information in our database, add an active subscription for the user to the database, and then respond OK to the mobile app’s receipt validation request to indicate the user’s purchase completed properly and was indeed valid.

Closing Notes

Validating in-app purchases for Android or iOS is a lengthy process, and, as I mentioned, the documentation isn’t always clear and laid out in a logical step-by-step manner. Hopefully this guide helps you implement the process in your own app. This should also give you a good start to implement other features, such as canceling/refunding subscriptions via the app (rather than having to do this through the Google Wallet Merchant Center).

Regarding subscriptions specifically: unlike for iOS (at least in our case), an Android subscription does have a time length (for us, one month) and will auto-renew. You can set up webhooks through the Google Play Developer API Console, perhaps one specifically for listening for a subscription that doesn’t auto-renew or that a user cancels via the Play store. I found the simpler solution was to check with Google, at the subscription period’s expiration date and time, whether the subscription is still active. You can use the same purchaseToken as in the original receipt (another reason to store that receipt) and follow the same process to make another call to the Purchases.subscriptions API resource. If the subscription was successfully renewed, the expiration time in the response will again be in the future, meaning the user has been successfully charged for another subscription term. Each successful renewal after the initial purchase also slightly alters the order number:

12999556515565155651.5565135565155651 (base order number)
12999556515565155651.5565135565155651..0 (first recurrence orderID)
12999556515565155651.5565135565155651..1 (second recurrence orderID)
12999556515565155651.5565135565155651..2 (third recurrence orderID)

This is detailed on the In-app Subscriptions page in the Subscription Order Numbers section. I keep track of the number of renewals via a counter column in my subscriptions table. And if I ever would have a need to get a specific recurrence of a subscription, I could construct that recurrence’s orderID from the renewal counter and the original order ID.
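For illustration, constructing a recurrence’s orderID from the base order number and the renewal counter might look like this (the helper name is mine, based on the pattern shown above):

```php
// Per the Subscription Order Numbers pattern: renewal 1 => "..0",
// renewal 2 => "..1", and so on; renewal 0 is the original purchase.
function recurrenceOrderId(string $baseOrderId, int $renewalCount): string
{
    return $renewalCount === 0
        ? $baseOrderId
        : $baseOrderId . '..' . ($renewalCount - 1);
}

echo recurrenceOrderId('12999556515565155651.5565135565155651', 2);
// 12999556515565155651.5565135565155651..1
```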

And a note on using the Google Play Developer Android Publisher API: Google lays out some best practices for using the API efficiently:

  • If you’re publishing apps via the API, limit publish calls to one per day,
  • Only query purchase status at the time of a new purchase,
  • Store purchase data on your server, and
  • Only query subscription status at the time of renewal.

These are good practices to follow, as your developer account does have a daily request quota (200,000 API requests per day “as a courtesy”). Also, this helps decrease the number of HTTP requests between your server and Google, speeding up server responses to your mobile app.

And one more thing…

On Twitter, @dror3go asked about dealing with restoring a purchase if a user gets a new phone:

It’s an interesting question, and there are three ways I can think of to deal with this:

  1. Have your server control all in-app features, including anything unlocked via purchases. For our app, whenever the user opened the mobile app, the app validated an encrypted API token in the app’s local storage with the Laravel server. During that validation, the server would also check if the associated user’s subscription was still active. If not, the server would return a specific error, and the app would throw up a blocking modal requiring the user to purchase or renew their subscription. Otherwise the server would respond with a 200 status code and the user continues to use the mobile app without any restrictions. The downside of this strategy is that this check will happen every time the user opens the app (unless the app was still active in the background), but this also means that if the user moves to a new phone or even switches to a different phone OS, the app will still work and all of the user’s purchases come with the user.
  2. As in #1, store the in-app features on the server, but also store that information in local storage on the phone. With this strategy, you will need to encrypt the data, so that someone cannot tamper with the data and give themselves a paid feature. If a user switches phones, the server will need to recognize that situation and instruct the app to enable the features for which the user paid. The disadvantage here is that now the mobile app itself may need logic to determine if a subscription is still active, but the server now has less processing to do.
  3. And finally, the last strategy is to have the mobile app control all in-app purchase product restoration. No (or limited) purchase data is stored and processed on the server. Instead, the app stores purchase data in local storage. And if the user switches phones, the app will have to have a way to restore these purchases. It would do this by checking if there is anything saved in local storage, and if not, it would have to do a call to the Google Play Store to see if the user has purchased any of the products for the app. In theory this is possible, though I have not investigated this strategy myself.

My vote is for strategy #1. This offloads as much logic as possible from the client onto the server. Theoretically, this means the client app will run faster, and if the server begins to use more resources, we can easily scale using AWS.

Validating iOS In-App Purchases With Laravel

Let’s say you have an iOS app (like SimpliFit), you’re using Laravel (or any PHP framework) for your API, and you want to validate an iOS in-app purchase (IAP) with Apple’s servers. Where to begin? Well, you might think about checking Apple’s documentation on Validating Receipts With The App Store. But that won’t help, at least not much.

To say that Apple’s documentation around IAPs is lacking is an understatement. So I turned to Google and found numerous StackOverflow posts, Github repos and gists, and blog posts concerning validating IAPs, specifically with PHP. Unfortunately, many were several years old and no longer accurate, leaving me confused and with a puzzle with many missing pieces. But I’ve finally solved it. And to help everyone else going through this agony, here’s what I’ve learned.

But first, a quick review of our setup. On the front-end, SimpliFit’s iOS app is built using AngularJS and Cordova, with purchases being handled via the Cordova Purchase Plugin. The Cordova Purchase Plugin takes care of communicating with Apple’s Store Kit framework (which in turn communicates with Apple’s App Store, see image below). And on the back-end, our API is built with Laravel on AWS.

Communication between iOS app, Store Kit, and App Store


The IAP Process Breakdown

Let’s break down this process into 3 stages, as outlined in Apple’s documentation:

  1. First, the products that the user can purchase are retrieved and displayed.
  2. Once the user selects a product, a payment request is initiated.
  3. Upon successful payment, the product is delivered to the user.
In-App Purchase Stages



One thing to note about this breakdown is that this does not account for any iOS app to back-end API (which I’ll simply refer to as Laravel going forward) communication. For that…


Digging Deeper

The IAP process begins when the iOS app must present the user with the in-app products that can be purchased. In the SimpliFit app, this occurs once the user’s free trial ends, or their subscription needs to be renewed, and the user must purchase a subscription to continue using the app.

There are two possible methods to retrieve the products to display to the user: 1) hard-code the products into the app or 2) get a list of products from a server. The second option is highly recommended as it allows you to modify products and pricing without having to update the iOS app. The first option may be appropriate if you only have products that unlock functionality locally within the app and don’t need to be updated often.

Either way, you will need to embed the Store Kit framework into your app, which allows your app to communicate with Apple’s App Store. As mentioned above, the Cordova Purchase Plugin thankfully takes care of this for you and provides a simple API to interact with Store Kit.

The steps outlined below assume the “happy path” and are numbered according to the diagram further below:

  1. The iOS app requests list of products from Laravel
  2. Laravel returns the list of product identifiers currently available for purchase
  3. The iOS app sends these product identifiers to Store Kit
  4. Store Kit requests product information based on product identifiers
  5. The App Store returns product information (title, price, etc.)
  6. Store Kit returns the product information from the App Store
  7. The iOS app displays the products to the user
  8. The user selects a product to purchase
  9. The iOS app requests payment for product
  10. Store Kit prepares for a purchase by requesting the user’s Apple account password
  11. The user enters his/her password
  12. Store Kit sends the purchase request and password to the App Store
  13. The App Store processes the purchase and returns a purchase receipt
  14. Store Kit sends the receipt to the iOS app
  15. The iOS app sends the receipt data to Laravel for validation
  16. Laravel records the receipt data to create an audit trail
  17. Laravel sends the receipt data to the App Store to validate the purchase
  18. The App Store validates the receipt and returns a parsed receipt
  19. Laravel reads the App Store response and marks the purchase as valid
  20. Laravel unlocks the purchased content and notifies the iOS app

iOS In-App Purchase Flow

Notes on this process:

  • As mentioned above, steps 1 and 2 can be accomplished via hardcoding the product identifiers into the iOS app.
  • Steps 17-20 aren’t necessarily required but are highly recommended. This will prevent someone from sending a fake receipt to fool your app into delivering unpaid content.

Retrieving Product Information

Let’s start with the first stage. From the perspective of Laravel, this is simple, and since I don’t work with Cordova directly, I’ll be glossing over anything that doesn’t pertain to Laravel.

For the SimpliFit app, Laravel tracks a user’s subscription status. That starts with a one month free trial upon registration. I have a Subscription model that tracks the status of the subscription (trial, trial_ended, active, grace, lapsed, cancelled, or lifetime), when the status ends (e.g., when the trial ends), and an associated transaction_id (for active subscriptions, tied to the Transaction model).

On certain API calls from the iOS app, the subscription is checked against the current date. If it’s determined that a status has expired, a new inactive subscription entry is created. At this point, a specific error code is returned to the app, specifying if a trial just ended, a subscription lapsed, or the subscription was cancelled. The app displays a blocking modal populated with text sent in the error. The user can no longer use the app until he/she subscribes.

This is where we get into the IAP flow. Once the user acknowledges the message in the modal, the app requests the available products for purchase from Laravel. Here I have a Product model that holds product_uid, platform, price, billing_interval, trial_length, description, and active (denotes whether the product is currently active and is available to purchase). The platform enumeration exists as we plan to build IAPs into our Android app, so we need a way of distinguishing which platform a specific product belongs to. The product_uid field is the identifier of the product as specified in iTunes Connect (where you as the developer create the products for purchase in the app). The description is used internally only to help distinguish products.

So Laravel grabs the active products for the given platform (which we specify using a Client-Platform header in all requests) and returns the product_uid of each product to the app. Here’s what that would look like:
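With made-up product identifiers (and a made-up response key; yours will come from your products table), the payload might be built like this:

```php
// Hypothetical payload: the key name and product IDs are illustrative only.
$productUids = ['com.example.sub.monthly', 'com.example.sub.yearly'];
echo json_encode(['product_ids' => $productUids]);
// {"product_ids":["com.example.sub.monthly","com.example.sub.yearly"]}
```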

As outlined in the flowchart, the app passes these product identifiers to Store Kit, which then retrieves the products and their associated information from the App Store (i.e., the products you created in iTunes Connect). Store Kit passes these products back to the app, which then displays the products to the user.

Requesting Payment

Now that the user has been presented with the products that can be purchased through the app, this stage only involves the app, Store Kit, and the App Store.

You can refer to the Cordova Purchase Plugin for details on how your Cordova app can communicate with the App Store through Store Kit, but as I outlined in the flowchart above, there isn’t much to it:

  • The app requests payment for a product,
  • Store Kit requires the user’s password to confirm the purchase, and
  • The payment request is sent to the App Store.

The next stage is where things start to get complicated.

Delivering Products

Assuming the user’s payment is processed successfully, the App Store will return a receipt to the app. The Cordova Purchase Plugin then provides the app with a JSON receipt that will look like this:
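With placeholder values (and a type string that may vary by plugin version), the receipt has roughly this shape, based on the fields broken down below:

```json
{
    "type": "ios-appstore",
    "id": "1000000123456789",
    "appStoreReceipt": "MIIT6gYJKoZIhvcNAQcCoIIT2z...",
    "transactionReceipt": "ewoJInNpZ25hdHVyZSIgPSAi..."
}
```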

NOTE: This receipt structure is specific to the Cordova Purchase Plugin. If your app uses a different method to communicate with Store Kit, the receipt structure may differ.

Let’s break this down by parameter:

  • The “type” parameter specifies the type of purchase.
  • The “id” parameter is the transaction id for the purchase.
  • The “appStoreReceipt” parameter is a base64 encoded iOS 7-style receipt.
  • The “transactionReceipt” parameter is a base64 encoded iOS 6-style receipt. This receipt is technically deprecated by Apple, but its use is still allowed. I would avoid using this receipt, as Apple could decide to drop support for this receipt type at any point.


Side Project: Base64 Decode Receipt

Just for fun, try decoding the “appStoreReceipt” and “transactionReceipt” data you receive from your app. Use PHP’s base64_decode($string) function.

You’ll find another object in that encoded data, which contains “signature”, “purchase-info”, “environment”, “pod”, and “signing-status” parameters.

And if you base64 decode the “purchase-info” data, you’ll find the exact same data that you’ll receive back from the App Store when you try to verify a transaction.

So technically you have all the transaction data you need in the receipt from the app, you just don’t know if it’s valid or not.
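You can exercise the same decoding steps against a fabricated receipt (all values below are made up; a real transactionReceipt nests the same way):

```php
// Build a fake iOS 6-style receipt: a JSON object carrying a base64-encoded
// "purchase-info" object, itself base64-encoded as a whole.
$purchaseInfo = base64_encode(json_encode(['product-id' => 'com.example.sub.monthly']));
$transactionReceipt = base64_encode(json_encode([
    'signature'      => 'fake-signature',
    'purchase-info'  => $purchaseInfo,
    'environment'    => 'Sandbox',
    'pod'            => '100',
    'signing-status' => '0',
]));

// The decode steps you'd run on a real receipt:
$outer = json_decode(base64_decode($transactionReceipt), true);
$inner = json_decode(base64_decode($outer['purchase-info']), true);

echo $inner['product-id']; // com.example.sub.monthly
```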


So now that we have the receipt from the app, we need to validate the transaction with the App Store. As I mentioned at the beginning, Apple’s documentation is severely lacking with what exactly you need to send for validation.

Let’s take this step-by-step, but before we do, note that I use Laracasts’ command pattern. The code below is from my StoreTransactionCommandHandler class, and I may be omitting code that isn’t necessary to the discussion or that I can’t share.

Store the receipt data

First I store the base64 encoded “appStoreReceipt” data. This is done not only to store the receipt data in case the receipt is deemed invalid by Apple, but also to allow re-validation of the receipt if it ever becomes necessary.

I use my TransactionRepositoryInterface $transactionRepo to store this receipt data and tie it to a user and platform:
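The actual repository isn’t shown here; an in-memory stand-in with made-up method and column names illustrates the shape of that step:

```php
// Hypothetical repository shape; the real interface and columns will differ.
interface TransactionRepositoryInterface
{
    public function storeReceipt(int $userId, string $platform, string $receiptData): array;
}

// In-memory stand-in for a database-backed implementation.
class InMemoryTransactionRepository implements TransactionRepositoryInterface
{
    private array $rows = [];

    public function storeReceipt(int $userId, string $platform, string $receiptData): array
    {
        $row = [
            'id'           => count($this->rows) + 1,
            'user_id'      => $userId,
            'platform'     => $platform,
            'receipt_data' => $receiptData, // base64 "appStoreReceipt", stored as-is
            'verified'     => false,        // flipped after Apple validates it
        ];
        $this->rows[] = $row;
        return $row;
    }
}

$repo = new InMemoryTransactionRepository();
$transaction = $repo->storeReceipt(42, 'ios', 'MIIT...base64...');
```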

Set the endpoint

Apple provides two URLs for validating receipts with the App Store:

  • Sandbox: https://sandbox.itunes.apple.com/verifyReceipt
  • Production: https://buy.itunes.apple.com/verifyReceipt

In Laravel, I have a function that checks the version of the front-end app. If the app is a development version (i.e., one that hasn’t been released to the public but is either being tested in a dev environment or is being used by an Apple reviewer), the endpoint Laravel communicates with is the sandbox server. If the version is a production version, I direct validations to Apple’s production server.
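A sketch of that branch (the function name is mine; the dev/production detection is whatever version check your app uses):

```php
// Apple's documented receipt verification endpoints.
function verifyReceiptEndpoint(bool $isDevBuild): string
{
    return $isDevBuild
        ? 'https://sandbox.itunes.apple.com/verifyReceipt'
        : 'https://buy.itunes.apple.com/verifyReceipt';
}
```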

Build the JSON receipt object

Apple’s documentation is thankfully clear regarding what is required in the receipt JSON. This function takes the full receipt received from the app and uses the “appStoreReceipt” data for the receipt object.

Apple’s documentation mentions a “password” parameter, which is only used for auto-renewing subscriptions. If you’re validating a receipt for an auto-renewing subscription, go to iTunes Connect and get the hexadecimal shared secret for your app. This is the password you will send to the App Store.
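A sketch of building that object; “receipt-data” and “password” are the parameter names from Apple’s docs, while the function name is mine:

```php
// Builds the JSON body sent to Apple's verifyReceipt endpoint.
// $sharedSecret is only needed for auto-renewable subscriptions.
function buildReceiptObject(string $appStoreReceipt, ?string $sharedSecret = null): string
{
    $payload = ['receipt-data' => $appStoreReceipt];
    if ($sharedSecret !== null) {
        $payload['password'] = $sharedSecret;
    }
    return json_encode($payload);
}
```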

Make the request

Now that we have the receipt object built and the endpoint set, we’re ready to communicate with the App Store. There are two ways you could do this: cURL or stream context. I had issues with getting a cURL implementation working, and I found the stream context method easier to work with. Here’s my code:

What we’re doing here is first setting the HTTP options for the stream context (more info on stream context HTTP options here): specify a POST request, set the content type for the request, and set the $receiptObject as the content. After that, we create the stream context resource.

With the file_get_contents function (documentation), we’re telling PHP to convert the contents of a file (in this case, Apple’s server URL) to a string using the given stream context.

As you can see, I have an error flow for if the result comes back as FALSE. If you read the documentation for file_get_contents, you’ll see that the returned values are the read data or FALSE on failure. So if the call to the App Store fails, I’ve added an error flow to notify the front-end app.
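Put together, the request described above can be sketched as follows ($endpoint and $receiptObject come from the previous steps; the function name is mine):

```php
// POSTs the receipt object to Apple via a stream context and returns
// the decoded response, or throws if the call fails outright.
function verifyWithAppStore(string $endpoint, string $receiptObject): array
{
    $context = stream_context_create([
        'http' => [
            'method'  => 'POST',
            'header'  => "Content-Type: application/json\r\n",
            'content' => $receiptObject,
        ],
    ]);

    $result = file_get_contents($endpoint, false, $context);
    if ($result === false) {
        // Error flow: surface this failure to the front-end app
        throw new RuntimeException('Could not reach the App Store verification server.');
    }

    return json_decode($result, true);
}
```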

If all goes well, though, I decode the JSON data as an associative array. Here’s what that might look like:
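Trimmed heavily and with placeholder values (a real response carries many more fields), a successful response resembles:

```json
{
    "status": 0,
    "environment": "Production",
    "receipt": {
        "bundle_id": "com.example.app",
        "in_app": [
            {
                "product_id": "com.example.sub.monthly",
                "transaction_id": "1000000123456789",
                "purchase_date_ms": "1421370000000"
            }
        ]
    }
}
```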

NOTE: The validated receipt may contain multiple transactions in the “in_app” parameter. It seems that Apple keeps all of the user’s transactions in the receipt in chronological order. Assuming users can only purchase one product at a time in your app, you want to grab the last transaction in the “in_app” array.

The important parameters in this receipt are:

  • status – the outcome of Apple’s validation (see status codes below)
  • receipt.in_app.0.product_id – the product_uid purchased
  • receipt.in_app.0.transaction_id – the transaction identifier

These are the only parameters I use in my code, but feel free to look through the meanings of the other parameters in Apple’s documentation.

The status code is the most important parameter, though. This tells you the outcome of Apple’s validation. Here are the possible status codes and what they mean:

Status  Description
0       The receipt provided is valid.
21000   The App Store could not read the JSON object you provided.
21002   The data in the receipt-data property was malformed.
21003   The receipt could not be authenticated.
21004   The shared secret you provided does not match the shared secret on file for your account. (Only returned for iOS 6-style transaction receipts for auto-renewable subscriptions.)
21005   The receipt server is not currently available.
21006   This receipt is valid but the subscription has expired. When this status code is returned to your server, the receipt data is also decoded and returned as part of the response. (Only returned for iOS 6-style transaction receipts for auto-renewable subscriptions.)
21007   This receipt is a sandbox receipt, but it was sent to the production server.
21008   This receipt is a production receipt, but it was sent to the sandbox server.

Validate the response

So now that we have a response back from the App Store, it’s time to validate that it’s what we were expecting. I do two things here: 1) check if the status code is set and 2) check if the status code is non-zero.
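Those two checks amount to a few lines (the helper name is mine):

```php
// Valid only if Apple set a status code and that code is zero.
function appStoreResponseIsValid(array $response): bool
{
    return isset($response['status']) && (int) $response['status'] === 0;
}
```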

Add the validated receipt and start subscription

For the SimpliFit app, we now know the receipt is valid and the user purchased a subscription. I store the transaction identifier from the receipt and mark the transaction as verified.

Depending on the product_uid in the receipt, I then add a subscription entry for the user. The user can now use the app for the specified period of time of the subscription!

Now I just respond OK to the front-end app, indicating the user is now a subscriber, and the app can direct the user back into the main flow of the app.

Closing Notes

For the SimpliFit iOS app, we are using non-renewing subscriptions, as we are not eligible for auto-renewing subscriptions according to Apple’s policies (only NewsStand and media apps can have auto-renewing subscriptions, to the best of our understanding). In iTunes Connect, when you create a non-renewing subscription, you don’t specify a time length for the subscription. As soon as your iOS app transitions a purchase to “owned”, the subscription is available to purchase again by the user. It is up to your app and/or server to track the subscription duration.

Next Steps

So what’s next for SimpliFit? Android payments. From first glance, the overall flow is similar, with one crucial difference: Google’s payment verification API requires your server to be authenticated before making the payment API call. I’m still trying to determine the best method to deal with this, so keep an eye out for an Android-themed version of this post in the near future.



Laravel on AWS Elastic Beanstalk

How to use this guide

This guide will walk you through setting up a Laravel development environment on Elastic Beanstalk.

Before using Elastic Beanstalk, I was using a shared hosting account, and I got fed up with outdated packages and the lack of admin privileges. My goal with this guide was to create a dev server that closely mirrored my intended production environment in Elastic Beanstalk (another post on that coming soon). This is the exact setup I use for SimpliFit’s API dev server.

What is Elastic Beanstalk?

First, a quick overview. Amazon Web Services’ Elastic Beanstalk is a Platform as a Service (PaaS), allowing developers to deploy applications without the hassle of detailed server infrastructure, such as server provisioning or scaling to meet demand.

Elastic Beanstalk uses the following AWS products:

  • Elastic Compute Cloud (EC2)
  • Simple Storage Service (S3)
  • Simple Notification Service (SNS)
  • CloudWatch

Elastic Beanstalk can also manage an AWS Relational Database Service (RDS) instance; however, for a Laravel application, this is not a preferred solution. When an RDS instance is created by and associated with an Elastic Beanstalk environment, it will also be terminated (and thus all data lost) when that environment is terminated. This will be further discussed in a later section.


Prior to continuing with this guide, ensure the following prerequisites are met:

Create an RDS instance

For our Laravel application, we will be using a MySQL database, so we must first create a MySQL RDS instance in the RDS Management Console.

1. Select Engine

Select the MySQL engine.

Select RDS Engine

2. AZ Deployment

Select, as required by your application, whether you want to use Multi-AZ Deployment for this database.

Since we’ll be using this Elastic Beanstalk application for development only (and we’d like to stay within the free tier usage), we’ll select No here. A post in a few weeks will cover this process for a production environment.

Multi-AZ Deployment

3. Specify DB Details

Now you get to set the instance specifications and settings for your DB. For a development environment, we will use the following settings:

Specify DB Details

To determine the right settings for your Laravel application, it’s best to test various environments to see which fits your needs. As long as you have only one RDS instance running at a time, you’ll fall within the free usage tier.

4. Advanced Settings

Unless you have created a separate VPC for your application, select the default VPC and default DB Subnet Group. Set this DB to be publicly accessible and select an Availability Zone. This is the AZ that you will want to also launch your Elastic Beanstalk EC2 instances in.

Finally, give your database a name and click Launch DB Instance. Launching the instance may take some time, but luckily we can complete the next steps during this time.

Configure Advanced Settings

Generate Security Credentials

Note: If you already have an Access Key ID and Secret Access Key for your account, skip this section.

In the Identity and Access Management (IAM) console, you will need to create new security credentials for your account.

Note: You need to have access rights to the IAM console. If you are not the owner/admin of the AWS account, you will need to request that security credentials are created for your account.

Navigate to the “Users” section in the left-hand menu. Select your username and then select the “Security Credentials” accordion. Click on “Manage Access Keys”.

In the lower right-hand corner, click “Create Access Key”.

Click “Show User Security Credentials” to view the credentials you just created. Keep this tab open, as this data will be needed in a later step. Alternatively, you can download your credentials as a .csv file to store until needed.

Create Key Pair

To give us the ability to SSH into the EC2 instances that are created as part of an Elastic Beanstalk environment, if the need arises, we first need to create a key pair in the EC2 Management Console.

Select “Key Pairs” under “Network & Security” in the left-hand menu, and click “Create Key Pair”.

Create Key Pair

Give your Key Pair a name to describe it and click “Create”.

Your Key Pair is downloaded as a .pem file. You’ll need this file if you need to SSH into your EC2 instances, so don’t lose it!

Install the Eb CLI

The simplest method to interact with Elastic Beanstalk may be the GUI in the AWS Management Console; however, the eb command line interface (CLI) offers many of the same features to deploy applications quickly and easily from your computer, especially when using a git workflow.

Download the eb tool and extract to a folder of your choosing on your computer. Add the following path to your PATH variable:

  • For Windows: <path to unzipped EB CLI package>/eb/windows
  • For Linux/Unix: <path to unzipped EB CLI package>/eb/linux/python2.7/

Initialize Elastic Beanstalk Application

Open command prompt/terminal and navigate to the root directory of your Laravel installation (and git repo).

Type the following command:

eb init

You will now be prompted to enter your AWS Access Key ID and AWS Secret Key, which were generated in a previous step. If you’ve already run eb init on this computer, your previous keys will already be pre-populated; just hit enter to select them.

Next you will choose the service region for your Elastic Beanstalk application. Choose the same region in which you created your RDS instance above. In this case, our RDS instance was created in “1) US East (Virginia)”.

Choose service region

Now give your application a name. The default value is the directory name. Then give your environment a name (e.g., development).

For your environment tier, select “1) WebServer::Standard::1.0”, as we are setting up a webserver.

Next, we get to select our solution stack. As of this writing, the latest version of the Amazon Linux AMI with PHP is v1.0.4 and runs PHP 5.5. We will be choosing that stack (#1).

Select solution stack

Since this will be a development application, we will be choosing a “SingleInstance” environment. This also ensures we’ll stay within the free usage tier for EC2, as long as we only have one running environment at a time. We created an RDS DB Instance separately, so we’ll answer “n” for no to the next question.

Next we need to attach an instance profile. This gives the EC2 instances that are created security permissions to access other AWS services (such as S3 for storing logs and application versions). If you’ve created a profile already, choose that; otherwise select “1) [Create a default instance profile]”.

Choose instance profile

After a few seconds of waiting, you’re done!

So what just happened?

Congratulations, you created a new Elastic Beanstalk application, associated it with a git repo, and set some initial options for each environment that’s created. This includes which region to launch EC2 instances in, how many instances should be created, and what should be installed on those instances.

If you head over to the Elastic Beanstalk management console, you’ll see your application listed under “All Applications” (with no environments created, yet).

And if you navigate to your project directory in your code editor, you’ll see a new directory, “.elasticbeanstalk”. At the moment, this contains a config file with all the application preferences we just set.

You may also notice that your .gitignore file has changed. I wonder why… let’s take a look:


Look at that! The new eb directory has already been added to our .gitignore file. Awesome! The line breaks just need a bit of cleaning up and it’s good to go.

Modifying Application Options

Normally, once eb start (this command starts an environment within your EB application and is covered in a later section) is run, a new file is created in the .elasticbeanstalk directory. Since we want to set some of these options BEFORE an environment is started, we’ll create the file ourselves. This file contains all the options for Elastic Beanstalk, including options for creating new EC2 instances (and RDS instances). Refer to the AWS Elastic Beanstalk Developer Guide – Option Values page for descriptions of each part of the code below. The code below is for an application that does not include an RDS instance (i.e., the RDS instance is created manually).

Custom Availability Zones=us-east-1a

Application Healthcheck URL=

Automatically Terminate Unhealthy Instances=true

Notification Endpoint=
Notification Protocol=email

First we start off with auto-scaling options. Since we’re running a single instance environment, MaxSize and MinSize are both 1. This means that EB will ensure you always have 1 EC2 instance running at all times (e.g. creating a new instance if the existing one goes down). We’re also specifying the Availability Zone we want our EC2 instances to be created in. EC2 instances in the same AZ as your RDS instance will cost less and connections between them will be faster. Set this option to the AZ in which you created your RDS instance.

The next option we’re editing is the EC2KeyName, which is the name of the key pair we created several sections ago. This will allow you to SSH into your EC2 instance(s) if the need arises (though you likely won’t need to).

The aws:elasticbeanstalk:application:environment contains your environment variables, which are added by your config files. You don’t need to edit this section, and even if you did, this doesn’t seem to actually set the environment variables (it just displays them).

Finally, we’re going to modify a few options under aws:elasticbeanstalk:container:php:phpini. We will set composer_options to --no-dev, so that dev dependencies aren’t installed when composer install is run. Last, we’ll set document_root to /public so that it points to Laravel’s public folder.
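If you’re editing those options in the file directly, that section might look something like this (INI-style namespace headers, in the same Key=Value form as the options shown earlier; the exact file layout depends on your eb CLI version):

```ini
[aws:elasticbeanstalk:container:php:phpini]
composer_options=--no-dev
document_root=/public
```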

Add Environment Config Files

Located in the .ebextensions directory in the root of the project, the config files (*.config) contain commands for the environment to run and options to set. These config files run every time git aws.push is run (i.e., the environment is updated or a new EC2 instance within the environment is started), and they are run in alphabetical order. These config files SHOULD NOT be in your .gitignore file.

To start, we will create three files: 00environmentVariables.config, 01composer.config, and 02artisan.config.


Environment Variables

In the 00environmentVariables.config file, we will place the option settings that modify the environment’s options, in this case to create environment variables (e.g., DB_HOST). Add the following code to this file:

option_settings:
   - namespace: aws:elasticbeanstalk:application:environment
     option_name: DB_HOST
     value: endpoint
   - option_name: DB_PORT
     value: 3306
   - option_name: DB_NAME
     value: dbname
   - option_name: DB_USER
     value: username
   - option_name: DB_PASS
     value: password

Here, namespace refers to a specific group of options in Elastic Beanstalk. Using the namespace aws:elasticbeanstalk:application:environment, we are stating that the options and their values below belong to that namespace.

DB_HOST will be the endpoint shown in the RDS dashboard for the RDS instance set up earlier.
DB_PORT is usually 3306, unless changed during the RDS instance setup.
DB_NAME is the name of the database within the RDS instance (not the RDS instance name).
DB_USER is the username that was created during the RDS setup process.
DB_PASS is the username’s password.

Here is where you will add other environment variables, such as a MailChimp API key, an SQS host, etc.

Note: These environment variables can also be set manually in the EB Software Configuration panel. If you do not want to have your DB credentials in your git repo, then you would manually set these variables.
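If you go the manual route, the same variables can also be set from the command line with the AWS CLI. A sketch, where the environment name and the password value are placeholders:

```shell
# Sketch: set an environment variable on a running EB environment.
# "development" and the DB_PASS value are placeholders for your own.
aws elasticbeanstalk update-environment \
  --environment-name development \
  --option-settings \
    Namespace=aws:elasticbeanstalk:application:environment,OptionName=DB_PASS,Value=secret
```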

Composer Commands

In the 01composer.config file, we will place all the composer commands to be run when a new instance is created or an existing instance is updated. Add the following code to this file:

commands:
   01updateComposer:
      command: export COMPOSER_HOME=/root && /usr/bin/composer.phar self-update

option_settings:
   - namespace: aws:elasticbeanstalk:application:environment
     option_name: COMPOSER_HOME
     value: /root

container_commands:
   01optimize:
      command: "/usr/bin/composer.phar dump-autoload --optimize"

First, commands are executed, which are run before the application and web server are set up. Here we self-update composer.phar to ensure the latest version is running on the instance.

Next we set a COMPOSER_HOME environment variable.

Last, container commands are executed, which run in the environment’s app container. These are run after the application and web server have been set up, and they have access to environment variables. Here we generate composer’s optimized autoloader.

Note: EB will automatically run composer.phar install if it sees a composer.json file in the root directory AND does not find a vendor folder there. If your vendor folder is not in .gitignore (i.e., it is committed and deployed with your repo), EB will skip this step, so you will need to add composer.phar install to this file yourself.
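If you do need to run the install yourself, a sketch of that extra container command (the 00install name is arbitrary; it simply sorts before the other container commands):

```yaml
container_commands:
   00install:
      command: "/usr/bin/composer.phar install"
```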

Artisan Commands

In the 02artisan.config file, we specify container commands to run migrations and seeding.

These commands should ideally be run only once, or when you are adding/modifying tables. I also tend to just migrate:refresh the database every now and then on the development server, as errors tend to compound themselves and exceptions start cropping up in my app.

container_commands:
   01migrate:
      command: "php artisan migrate --force"
   02seed:
      command: "php artisan db:seed --force"

Here we migrate to create the new database (including migrating the Auth Token package) and seed the database.

You’ll notice that the migrate and db:seed commands were separated. Why not just run migrate --seed? In Laravel 4.2, this seems to cause an error when run on Elastic Beanstalk. Separating the two commands allows the environment to set up properly without errors when using the --force flag.

Note: The --force option needs to be used here; otherwise the CLI will ask for confirmation to run each command and your commands will time out.


Add Environment Variables to Database.php

Now that we have the config files set to create the database environment variables, we need to tell Laravel to use those for production.

Edit your database.php file so that your mysql connection parameters are set as below:

		'mysql' => array(
			'driver'    => 'mysql',
			'host'      => $_ENV['DB_HOST'],
			'port'      => $_ENV['DB_PORT'],
			'database'  => $_ENV['DB_NAME'],
			'username'  => $_ENV['DB_USER'],
			'password'  => $_ENV['DB_PASS'],
			'charset'   => 'utf8',
			'collation' => 'utf8_unicode_ci',
			'prefix'    => '',
		),
If you configured your RDS instance with your EB app (not recommended), then you could also set your connection options to:

		'mysql' => array(
			'driver'    => 'mysql',
			'host'      => $_SERVER['RDS_HOSTNAME'],
			'port'      => $_SERVER['RDS_PORT'],
			'database'  => $_SERVER['RDS_DB_NAME'],
			'username'  => $_SERVER['RDS_USERNAME'],
			'password'  => $_SERVER['RDS_PASSWORD'],
			'charset'   => 'utf8',
			'collation' => 'utf8_unicode_ci',
			'prefix'    => '',
		),

Note: If you are using a .env.*.php file to specify local database connection parameters, remember to add that file to your .gitignore.

If you do have a .env.local.php file, it would look something like this:

<?php

return array(
	'DB_HOST' => 'hostname',
	'DB_PORT' => 'port',
	'DB_NAME' => 'dbname',
	'DB_USER' => 'username',
	'DB_PASS' => 'password',
);

Git Commit

With all of these changes now complete, commit these changes to your git repo.

Add Security Group Inbound Rule

The last thing left to do before starting up your environment is to give the Elastic Beanstalk application access to the MySQL RDS instance.

Head to the AWS EC2 management console and click on “Security Groups” under “Network & Security”. Here you should see two security groups: the default security group and a security group created for your Elastic Beanstalk application.

Click on the Elastic Beanstalk security group (for us, it’s called development, just like our environment) and copy the Group ID.

Security Groups

Right click on the default security group and select “Edit Inbound Rules”.

Here we need to add a MySQL type rule with a Custom IP equal to the Elastic Beanstalk security group’s Group ID. While you’re here, add a MySQL type rule for your IP [Tip: in the Source drop-down, you can simply select My IP].

Inbound Rules
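The same rule can also be added from the command line with the AWS CLI. A sketch, where both group IDs are placeholders for your own:

```shell
# Sketch: allow MySQL (port 3306) into the default security group
# from the Elastic Beanstalk environment's security group.
# sg-11111111 (default) and sg-22222222 (EB) are placeholder IDs.
aws ec2 authorize-security-group-ingress \
  --group-id sg-11111111 \
  --protocol tcp \
  --port 3306 \
  --source-group sg-22222222
```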

Start the EB environment

Run eb start in the command line/terminal. Choose “no” if you are asked whether you want to use the latest commit for the environment.

AWS will now set up all the resources necessary for your environment. This may take some time. Once complete, it will give you a URL at which you can access your server.

You can also view the status of your environment setup in the Elastic Beanstalk management console. Here you’ll see that it’s running Amazon’s Sample Application, but if you try to visit the URL you won’t receive a response. This is because the server is pointed to the /public folder where it would find Laravel, but you haven’t pushed your Laravel app onto the environment yet.

Run eb status to see the status of your environment (you can also see your environment’s status in the management console). If it’s green, move on to the next section.

Git Push

At this point, your environment is ready to update with your Laravel application.

Run git aws.push. After a few moments, it will upload your git repo to the environment and begin updating the environment.

If you run eb status at this point (or visit the management console), you’ll see the environment is still updating. I find it’s best to view the environment status via the management console, as you can see a running list of events below the status.

If an error occurs during the update, the event will list the command that caused the error (usually it’s an artisan command for me). Go to the Logs option in the left-hand menu and click the Snapshot Logs button. Once a log snapshot is available, click the “View log file” link to view the latest logs. Here you can investigate why an error occurred. Just search for the name of the command that caused the error.

Snapshot Logs


And that’s it! Navigate to the URL for your environment to see your Laravel application live.

Deleting your Environment

Since we’ve added this environment’s Security Group ID to the inbound rules of the default Security Group, we need to first remove that rule in the EC2 Management Console.

After that rule is removed, deleting the environment is a simple command line/terminal command:

eb stop

Alternatively, you can terminate the environment from the management console from within your environment.

This may take some time as all the AWS resources created for your environment need to be deleted. You can monitor progress from the command line/terminal or from the Elastic Beanstalk Management Console.

Deleting your Application

To delete your application, ensure all environments have been successfully terminated. If there are un-terminated environments, check the event log for errors in that environment during the termination process.

Now run the command:

eb delete

A few seconds later your application will be deleted. This can also be done from the management console.

As I mentioned in my first post, when I started using Laravel I knew nothing about the concept of MVC. It was difficult to transition from writing pure PHP (which I had only learned 6 months prior to working with Laravel) to an MVC framework. There are a lot of great resources for MVC noobs (Tuts+’s MVC for Noobs is one) and introductions to Laravel (Laracasts’ free Laravel From Scratch series and Laravel Book’s Architecture of Laravel Applications are two), but let’s quickly recap the basic concepts.

The MVC Pattern

At its core, the MVC architectural pattern exists to help with the Separation of Concerns in your code. The MVC pattern consists of:

  • Models: represent stored data and enforce “business” rules/logic on the data (in Laravel, a model is analogous to a table in your database)
  • Views: present data to the user
  • Controllers: mediate between the View and the Model

In Laravel 4, the app directory has folders for controllers, models, and views. You would expect the typical flow of information would go like this:

  1. A route is invoked
  2. The route calls a function within a controller
  3. The controller uses a model to access data
  4. The controller passes that data to a view
  5. The view displays the data to the user
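The flow above can be sketched in Laravel 4 style. The route, controller, and model names here are hypothetical, not from the SimpliFit codebase:

```php
// Hypothetical sketch of the route -> controller -> model -> view flow.

// app/routes.php: (1) a route is invoked, (2) it calls a controller action
Route::get('posts', 'PostsController@index');

// app/controllers/PostsController.php
class PostsController extends BaseController
{
    public function index()
    {
        // (3) the controller uses a model to access data
        $posts = Post::all();

        // (4) it passes that data to a view,
        // (5) which displays it to the user
        return View::make('posts.index', ['posts' => $posts]);
    }
}
```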

And in its simplest form, that’s exactly how the Laravel framework works. Looking at the files included in a new Laravel install (at least in Laravel 4.x; this is changing with Laravel 5.x), this made sense. And so I moved forward believing that everything must fall into a model, a view, or a controller.

My Controllers Need A Diet

The first feature I created for the SimpliFit beta API was simple: show all the habits a user has learned since they started with SimpliFit. We called this feature Achievements. “Easy enough,” I thought as I created a few models, coded the relationships, and seeded the database. In the end, my one controller function had ballooned to over 100 lines of code while my models were at about 30. I tested the feature and it worked.

Then I heard the saying “skinny controller, fat model”. Reality check time: my controller was pretty fat. Since I was coding an API, we didn’t have any views in Laravel for this feature. That just left the model, and so I thought all the business logic I need must go into the model.

As I coded more features, I ensured that my controllers only passed data between the user on the front end and the database, and I moved all the logic into the models. Even with this, I still have controllers that are 400+ lines of code (and only two functions), but my models are beastly—I have several models that are over 2,000 lines of code!

This is obviously a maintainability nightmare. With 49 models and 20 controllers in total and thousands of lines of code, I knew there had to be a better way of organizing my code. The standard MVC principles weren’t cutting it for me.

Repositories, Interfaces, and Commands, oh my!

When I was in the midst of coding SimpliFit beta v1 in June and July, I thought I had a good grasp on Laravel. I was able to code features and they worked as intended. At the time, I wasn’t concerned with coding to best practices, I just needed something that worked.

This is about the same time I subscribed to Laracasts (well worth the $9 per month) and started learning that there was so much more to Laravel than models, views, and controllers. I’m still trying to wrap my mind around all the concepts, but now I know that I can employ various types of coding patterns and ideas (beyond models, views, and controllers):

  • repositories,
  • interfaces,
  • service providers,
  • commands,
  • events,
  • presenters,
  • entities, and
  • more I probably haven’t learned about yet.
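As a taste of the first two, here is a minimal repository-plus-interface sketch (all names hypothetical) that moves data access out of the controller:

```php
// Hypothetical sketch of the repository pattern in a Laravel 4-style app.

// The interface the rest of the app depends on.
interface PostRepositoryInterface
{
    public function all();
}

// An Eloquent-backed implementation.
class EloquentPostRepository implements PostRepositoryInterface
{
    public function all()
    {
        return Post::all();
    }
}

// The controller depends on the interface, not on Eloquent directly,
// so the storage layer can be swapped (e.g., for a cache or a test double).
class PostsController extends BaseController
{
    protected $posts;

    public function __construct(PostRepositoryInterface $posts)
    {
        $this->posts = $posts;
    }

    public function index()
    {
        return View::make('posts.index', ['posts' => $this->posts->all()]);
    }
}

// Bound in a service provider or app/start file:
// App::bind('PostRepositoryInterface', 'EloquentPostRepository');
```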

A word of caution

A lot of beginner Laravel sources tend to oversimplify MVC concepts. Some even have you put code directly into your routes.php file, which, although it won’t break anything in Laravel, runs counter to the Separation of Concerns principle. When I first started, I thought I had to stick to models, views, or controllers. I didn’t know that there were more options.

If you’re an MVC noob like I was when I started with Laravel, and you take anything away from this article, let it be this: don’t constrain your code to just models, views, and controllers; learn about the other options that exist for organizing your code. Even if you don’t adopt those practices at first, you need to know about them so you can make more educated decisions about your app structure.

If I could go back to when I first started with Laravel, would I force myself to adopt these other coding patterns and ideas? Maybe. For us, getting our app out to users for testing was our first priority, and the faster we could achieve that, the better. But if I had known about my options, I might have at least coded in a way that would have allowed me to adopt these patterns more easily down the road.


Getting Set Up For Laravel

There are a ton of articles about how to set up Laravel and the author’s tools of choice, and I debated whether or not the Laravel community needs another. But seeing as I’ll be writing a lot of articles about Laravel and there may be people who, like me, have never used an MVC framework or Git before, I figured I’ll just write up a quick post. First up…

Setting up Git

If you were like me when you started coding, whenever you created a new version of a file, you’d save the old file with a date or previous version number tacked onto the file name. This quickly got out of hand and managing different versions of files became a huge headache for our alpha. This is where Git can help.


It was February 2nd, and I was heading to a Super Bowl/birthday party. I had quit my job only a month before to work full-time on my startup, SimpliFit (then called TrackFaster), and was coding the cobbled-together backend in PHP for our third Alpha version (more on Alpha v0.3 and lessons learned from that some other time).

At the party, a mutual friend introduced me to Jonathan Stassen, and I gave him the usual pitch. He just so happened to also be working at a startup and was learning a PHP framework called Laravel.
