Continuous Integration with GitHub, SFDX and CircleCI… Easier than you think!

This post is a follow-up/companion to the talk I did at Dreamforce 2018. If you didn’t get to see it in person, you can check out the slides here; I will update this post when the recording becomes available, but for now, read on.

In the Salesforce ecosystem, the traditional way of moving code from development environments (sandboxes, etc) to production has been either change sets or the Force.com migration tool (ANT). Neither method is perfect, but until recently they were all we had. Change sets are an easy but time-intensive process; they can be created in the user interface and are well within the reach of most people. The Force.com migration tool is arguably more powerful (a CLI-based tool, able to be scripted, etc) but a lot more difficult to use. Neither tool is particularly well suited to an agile environment that requires continuous integration or delivery.

So what do I mean when I say continuous integration? I am referring to both the development practice and the tooling required to facilitate it.

A good explanation of CI, taken from Microsoft’s Azure docs is as follows: “Continuous Integration (CI) is the process of automating the build and testing of code every time a team member commits changes to version control. CI encourages developers to share their code and unit tests by merging their changes into a shared version control repository after every small task completion. Committing code triggers an automated build system to grab the latest code from the shared repository and to build, test, and validate the full master branch (also known as the trunk or main).”

This has traditionally been difficult with Salesforce, because we lacked the tooling to do it effectively. In the old world, Salesforce development took an ‘org-centric’ view of the world, with your production org serving as the ‘source of truth’ and sandboxes containing work in progress. This ‘org-centric’ model has a number of problems (e.g. prod can be changed by anyone, dependent features being developed in separate environments, merge conflicts, etc).

Since the advent of Salesforce DX (SFDX) we have been handed the tools to move towards a ‘source-centric’ world, with our source/version control system (e.g. Git) becoming the source of truth, and our scratch orgs essentially becoming ‘runtimes’ rather than orgs in their own right. Scratch orgs are ephemeral things that can be created and destroyed at will, only needing to live as long as the development cycle for whatever feature you are working on, or as long as it takes for code to be pushed to them and tested (if used in a CI pipeline).

Because of SFDX, we now get access to all of the power of modern source/version control systems such as Git. Git providers like GitHub give us powerful user interfaces, the ability to perform code reviews (pull requests) with ease, and the ability to track every single change, including who changed a file, what was changed and when. As code is versioned, we also gain the ability to revert to previous versions of the source, making it much easier to recover if something goes wrong. Alongside this, we know that our code is stored safely outside of Salesforce, and we can set up access control to prevent code being overwritten by unauthorised users.

Another huge advantage we gain from source/version control is the ability to create branches of our source code. For example, you may have your production-ready application in the 'master' branch, with the version you are currently working on in the 'develop' branch (think of your UAT/pre-prod environment). Whenever you work on a new feature, you can branch off 'master' into its own new branch (e.g. 'feature/new-feature') and do the work there. Once you are happy with it, a pull request can be made to merge it into 'develop' for testing. Once this has been completed and all of the code in 'develop' is ready for release, it can then be merged into 'master' for your release.
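In Git terms, that workflow is only a handful of commands (branch names as per the convention above):

    # start a new feature from the latest master
    git checkout master
    git pull
    git checkout -b feature/new-feature

    # commit your work and push the branch up to GitHub
    git add .
    git commit -m "Add new feature"
    git push -u origin feature/new-feature

    # then raise a pull request to merge feature/new-feature into develop,
    # and later develop into master for the release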

Source/version control is only half of the equation. It is all well and good to have your code in Git, and this in itself is valuable, but the real power comes from automation and continuous integration. With CI set up, every time we make a commit to our feature branch ('feature/new-feature') the CI system pulls it from Git, pushes it to a scratch org and runs all of our tests. This lets us know very quickly a) if our code even deploys and b) if we’ve broken any tests. We also use a scratch org ‘locally’ for running and testing our code on our local machine (of course the scratch org actually runs in the cloud), in a similar way to how we would have used a developer sandbox in the past.
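Under the hood, that CI step boils down to a handful of SFDX CLI commands, roughly like this (a sketch only; the alias and definition file are the SFDX project defaults, and the real commands live in the repository’s build scripts):

    # spin up a disposable scratch org and make it the default
    sfdx force:org:create -f config/project-scratch-def.json -a ci-scratch -s

    # push the source from the repository into the scratch org
    sfdx force:source:push

    # run all the Apex tests; a failure here fails the build
    sfdx force:apex:test:run --wait 10 --resultformat human

    # tear the scratch org down again
    sfdx force:org:delete -u ci-scratch -p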

Once our code is ready for the next environment (e.g. UAT), we can have our CI setup automatically push our code to our UAT sandbox whenever a commit is made to 'develop'. Finally, once we are happy and ready to move to our production org, we can have our CI deploy to production upon a merge into the 'master' branch.

Example of CI Development Process

So let’s talk about how we actually achieve this with Salesforce. In this example I will be using GitHub as my source/version control system (alternatives include BitBucket and GitLab) and CircleCI as my CI automation tool (alternatives include TravisCI and Jenkins). The ‘glue’ that ties all of this together is SFDX.

I’ve created a GitHub repository with everything you need to get started; I would suggest you clone it from here, as the instructions below reference scripts from it. If you want to find out what the scripts are doing, simply open them in a text editor. These instructions require a *nix-like environment (e.g. macOS, Linux, Bash on Windows) and have been tested on macOS only.

NOTE: The first time this build runs it will deploy whatever is in your force-app/ folder to production. As with anything you do, be sure to try this in a Developer Edition org or a sandbox before using it with your production org!

In this example there is only a simple ‘Hello World’ Apex class, test and Lightning component. You should remove these before you begin and replace them with your own source code. Salesforce provides steps on how to migrate your existing code to SFDX here. After you’ve cloned the repository to your machine, you should follow the instructions here (inside the folder you cloned to).

Now that we have our code ready to use with CI, let’s get going.

Let’s tackle authentication first. To authenticate to our production org and to create scratch orgs, we use the JWT OAuth flow; there is a summary of the relevant commands after the steps below.

  1. You first need to create a certificate and key to authenticate with. To do this you can run the script in build/generate-keys.sh
    • Follow the prompts when creating the certificate files
    • Take note of the Base64 output (big long chunk of text), as you will need this to set up CircleCI later

      Output from the key generation script
  2. You will need to create a connected app in your production org (and any sandboxes you wish to use CI with)
    • First, from Setup, enter App in the Quick Find box, then select App Manager. Click New Connected App.
    • Give your application a name such as ‘CircleCI’
    • Make sure you check Enable OAuth Settings in the connected app
    • Set the OAuth callback to http://localhost:1717/OauthRedirect
    • Check Use Digital Signatures and upload your certificate file (server.crt), which will be in the build/ folder. Once you have done this, delete the file
    • Select the required OAuth scopes
    • Make sure that the refresh_token scope is enabled – otherwise you’ll get this error: user hasn't approved this consumer
    • Ensure that Admin approved users are pre-authorized is selected under Permitted Users
    • Ensure that the System Administrator profile is selected under the Profiles related list
    • Take note of the Consumer Key as you will need it to set up CircleCI
Connected App Settings
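For reference, here is roughly what the key generation and the JWT login boil down to (a sketch; the real versions live in build/generate-keys.sh and the .circleci/ scripts, and the environment variable names match the CircleCI setup below):

    # what generate-keys.sh automates: a private key and self-signed certificate
    openssl genrsa -out server.key 2048
    openssl req -new -x509 -key server.key -out server.crt -days 365
    base64 server.key    # the 'big long chunk of text' you note down

    # what the CI scripts run to authenticate to the org
    sfdx force:auth:jwt:grant \
      --clientid $SFDC_PROD_CLIENTID \
      --jwtkeyfile server.key \
      --username $SFDC_PROD_USER \
      --instanceurl https://login.salesforce.com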

Now that we have authentication set up, we can configure CircleCI. I have provided a basic config.yml file; this is already within the .circleci/ directory, along with some shell scripts for Circle to use for deployment and validation. Circle has extensive documentation on these config files here, and stay tuned for a future post covering this area in more detail.
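As a taste, a stripped-down config.yml for this kind of build might look something like this (a sketch only, not the file from the repository; the image and script name are illustrative):

    version: 2
    jobs:
      build:
        docker:
          - image: circleci/node:8   # any image that can install the sfdx-cli npm package
        steps:
          - checkout
          - run:
              name: Install SFDX CLI
              command: sudo npm install -g sfdx-cli
          - run:
              name: Authenticate, deploy and test
              command: bash .circleci/deploy.sh   # illustrative script name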

  1. You can now set up your CircleCI build
    • Ensure you have connected your GitHub account to CircleCI. To do this, go to https://www.circleci.com, click 'Signup' and then 'Signup with GitHub'
      Adding a project in CircleCI

    • Once logged in, click on Add Projects, choose the GitHub repository you wish to use, click Set Up Project, then click Start building. There is an example config.yml in this repository already; you can edit this to suit your needs.
    • Cancel the first build, as it will fail without any environment variables set
    • Click the gear icon next to the repository name on the left hand side of the screen
    • In the settings screen, choose Environment Variables. You will need to add three variables by clicking Add Variable:
      • SFDC_SERVER_KEY is the Base64 output generated in Step 1
      • SFDC_PROD_CLIENTID is the Consumer Key from Step 2
      • SFDC_PROD_USER is the username to use with CircleCI (This should be an Integration user, with the System Administrator profile)
Setting Environment Variables in CircleCI
    • You can now re-run the first build.

Once you have all of this configured and working, you can use the CI build process from here on out, and hopefully never have to worry about a damn change-set again!

Now, every time you push to any branch other than 'master', a scratch org will be created, your code deployed to it and all tests run. If you then merge into 'master', a production build will be run, validating and deploying your code.

Remember, this is just an introduction. In future posts I will explain in further detail the scripts and config files used in this repository, so you can customise them to suit your exact requirements.

A RetroPie (or similar) controller for £5?!

I recently found myself in Poundland seeing what a humble pound coin could get me, aside from the usual cables, chargers and similar accessories I buy… seriously, they work fine and are only £1/£2, not to mention their chargers are far better than the cheap ones you’d find on eBay! Check out this video from bigclivedotcom on the subject.

As I was browsing, I happened upon the £5(!) electronics/games section. There were a few XBOX360 games and such, but what caught my eye was this;

I forgot to take a photo at the shop

It is a Gioteck ‘Turbo Controller’ for the Nintendo Classic Mini. It looks basically like a NES controller with turbo buttons. Considering my RetroPie setup at home, but having no idea of the protocol/connector it used, I decided it was worth the sacrifice of £5 to find out if I could make it work. You can also get these controllers from the likes of Argos/eBay (for £5.99, the horror!).

I got it home, opened it up and saw the Nintendo ‘nunchuck‘-style plug on the (surprisingly long) cable. This was a good start, and I figured it probably uses the same protocol as the Nunchuck, the Wii Classic Controller and similar devices that use the same plug. Both of these use the I2C protocol, and there are various libraries out there to allow them to be used with Arduinos and compatible microcontrollers.

I was hoping there would be something similar for the Raspberry Pi, given it has an I2C bus built in, but unfortunately the only information I could find was on drivers for a Wii controller (with a Nunchuck or Classic Controller attached) connected to the Raspberry Pi over Bluetooth, which was no use to me as I don’t own a Wii controller.

So I decided to write my own ‘driver’ for it (more of a daemon actually!) and here is how I did it;

The first thing I had to do was crack it open to see if I could find the pinout. Mercifully, it was printed right on the board, along with several test points I plan to investigate later. I2C devices generally use four wires: VIN (power, 3.3v), GND (ground), SDA (data) and SCL/CLK (clock).

In this case, VIN is red, GND is black, SDA is green and CLK is white.

Controller PCB with connections labeled
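If you want to check your wiring, i2c-tools can scan the bus for you; Wii-extension-style peripherals like this one conventionally show up at address 0x52 (enable I2C via raspi-config first):

    sudo apt-get install i2c-tools
    sudo i2cdetect -y 1    # scan bus 1 and look for a device at 0x52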

Given that this experiment was so cheap, I simply cut off the nunchuck-style plug to expose the wires and then attached my own pin sockets/plug for easy connection to a Raspberry Pi or other devices.

Controller with new Raspberry Pi compatible plug

Adding the plug made it very easy to connect and disconnect it from the various Raspberry Pis I used for testing, namely the Raspberry Pi 3 I use to run RetroPie in my lounge room, and a Raspberry Pi Zero W that I used for headless testing/development.

Connected to the Raspberry Pi

If you didn’t want to cut it up, you could grab a ‘Nunchucky‘ from Adafruit and solder wires and an appropriate plug to that, or scavenge sockets from a broken system.

Once I had the new plug on it, I connected it to an Arduino Nano to do some testing. I initially tried the WiiClassicController library to see if it used the same protocol as the Nunchuck/Classic Controller and, luckily for me, it did. So now I had to work out a way to get that data into a usable form on the Raspberry Pi using its I2C bus.

Ideally you would write a kernel module in C for this, but given my very limited knowledge of C and my desire to get it running quickly, I had to pick something else. I am most comfortable with Java, so my first attempt was to write a simple app that used the Pi4J and Robot libraries to take the data from the I2C bus and turn it into keyboard commands. This was very quick and easy to write, but unfortunately it was a failure, as Robot on Linux requires X11 to be running, and RetroPie does not use X11.

I looked around, and a good way to achieve keyboard emulation at a lower level was with the ioctl call, and there happens to be a wrapper for it in NodeJS. I am not brilliant with JS, but I have written Node apps before and figured it was going to be easier than learning C (which I do want to do at some stage!).

My first attempt was using the virtual-input library, but nothing I did would make it work with the Raspberry Pi. I could get it to send keystrokes fine from an Ubuntu VM, but never on the Pi. I then saw that it was used in another project, node-virtual-gamepad, which is a really cool project in its own right. I tried that, and it worked fine on the Pi.

I then had a look through the source to see if I could extract its virtual keyboard code for use in my own project, and after much wrangling, I got it to work! I used evtest to confirm the key codes as they were sent by the virtual keyboard.

evtest running
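If you want to do the same, evtest just needs pointing at the right input device (the event number of the virtual keyboard will vary from system to system):

    sudo apt-get install evtest
    sudo evtest /dev/input/event0    # substitute the event number of the virtual keyboard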

The next thing to do was integrate the keyboard code with the I2C library to come up with some sort of daemon that would interpret the commands sent from the controller over I2C into keypresses on the virtual keyboard, thus controlling the game.

There was also code for emulating joysticks/gamepads, which I do plan to build into the daemon, so that you can choose to emulate a keyboard or a gamepad depending on your needs. But the first order of business was to get it working as a virtual keyboard.

Once I had both portions working, the I2C reading and the virtual keyboard, I was able to combine them to build the daemon that runs in the background and interprets the data from the controller into keyboard presses to control the Raspberry Pi.

Testing it out with a bit of Mario

The code is available on my GitHub here, along with instructions on how to set it up and use it. If you want more detail on how I built it, read on.

Once I had both the virtual keyboard and I2C code working, combining them was relatively straightforward, but there were a few gotchas.

  1. As I learned from the Arduino library, the gamepad sends data in ‘packets’ of 6 bytes
  2. When no buttons are pressed, the packet always begins with a 0x0 (0) and looks like this (decimal);
    [ 0, 0, 128, 128, 255, 255 ]
  3. The gamepad sends a ‘heartbeat’ packet of six 0xFF (255) byte values every ~8 seconds, plus a randomly timed packet that begins with 0x1 (1); these look like this (decimal) and are filtered out in the sketch after this list;
    [ 1, 0, 164, 32, 1, 1 ]
    [ 255, 255, 255, 255, 255, 255 ]
  4. In the Linux event subsystem, when a key is pressed a 1 is sent and it remains pressed until a 0 is sent for the same key; you can send multiple 1s and 0s at once
  5. All 8 buttons are handled by the last two bytes in the array (indexes 4 and 5), and some buttons pressed together on the same byte send a new code. I had to test and map these out.
  6. I needed to ensure that two buttons can be pressed at the same time in order for the controller to be useful
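Points 2 and 3 boil down to a small packet filter. In Node, it looks roughly like this (a sketch; the real logic lives in the repository):

    // decide whether a 6-byte packet from the controller carries button state
    function isButtonPacket(packet) {
        // heartbeat: six 0xFF bytes, sent every ~8 seconds - ignore
        if (packet.every(function (b) { return b === 0xFF; })) return false;
        // the randomly timed packets beginning with 0x1 - ignore
        if (packet[0] === 0x01) return false;
        // genuine button-state packets always begin with 0x0
        return packet[0] === 0x00;
    }

    console.log(isButtonPacket([0, 0, 128, 128, 255, 255]));     // true (idle state)
    console.log(isButtonPacket([1, 0, 164, 32, 1, 1]));          // false
    console.log(isButtonPacket([255, 255, 255, 255, 255, 255])); // false (heartbeat)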

Below is a table mapping each button to the byte values I am using to detect keypresses;

Button Position Hex Dec
D-pad Up Byte 5 0xFE 254
D-pad Down Byte 4 0xBF 191
D-pad Left Byte 5 0xFD 253
D-pad Right Byte 4 0x7F 127
Start Byte 4 0xFB 251
Select Byte 4 0xEF 239
A Byte 5 0xEF 239
B Byte 5 0xBF 191

Given that some buttons share the same byte (such as A & B), they give a different value if pressed at the same time. Below is a table of the ‘combination’ bytes and positions;

Combination Position Hex Dec
A & D-pad Up Byte 5 0xEE 238
B & D-pad Up Byte 5 0xBE 190
Select & Start Byte 4 0xEB 235
A & D-pad Left Byte 5 0xED 237
B & D-pad Left Byte 5 0xBD 189
D-pad Up & D-pad Left Byte 5 0xFC 252
D-pad Down & D-pad Right Byte 4 0x3F 63
D-pad Down & Start Byte 4 0xBB 187
D-pad Down & Select Byte 4 0xAF 175
D-pad Right & Select Byte 4 0x6F 111
D-pad Right & Start Byte 4 0x7B 123
A & B Byte 5 0xAF 175

Once I had this information, the code itself was fairly simple.

It polls the controller every 10ms (this can be changed) for the 6-byte array. From that, I build an object containing each button and its state (0 or 1). I then compare this against the previous iteration to detect a change in the state of each button; if a button has changed, I set the key high or low using the virtual keyboard library. At the end of the iteration, I copy the current button states into the ‘old’ iteration variable and start again. Only if a key has changed from one iteration to the next do I send a key event to change its state in the event subsystem.
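As a simplified sketch of that loop (reading via the i2c-bus library; decodeButtons and sendKey are hypothetical stand-ins for the table-driven decoding above and the extracted virtual keyboard code, and the real daemon also has to initialise the controller before the first read):

    var i2c = require('i2c-bus');
    var bus = i2c.openSync(1);    // the Raspberry Pi's I2C bus 1
    var ADDR = 0x52;              // the usual Wii-extension address

    var oldState = {};

    setInterval(function () {
        var packet = Buffer.alloc(6);
        bus.i2cReadSync(ADDR, 6, packet);
        if (packet[0] !== 0x00) return;    // heartbeat/noise, skip this iteration

        var state = decodeButtons(packet); // hypothetical: builds { a: 0/1, b: 0/1, ... }
        Object.keys(state).forEach(function (name) {
            if (state[name] !== oldState[name]) {
                sendKey(name, state[name]); // hypothetical: 1 = press, 0 = release
            }
        });
        oldState = state;                  // current states become the 'old' iteration
    }, 10);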

The daemon is designed to be run in the background upon boot of the system to register events from the controller and pass them to the virtual keyboard. I also noted that the controller can be connected and disconnected while the daemon is running with no ill effects.

Let me know if you found this useful or interesting, or if you have any suggestions on improving it!

SchemaPuker v0.2 Released!

Try the new version right now at https://schemapuker.herokuapp.com/ 

I have been getting a lot of feedback about SchemaPuker since its launch, and many, many people have tried it out.
The response has been far more than I expected, with many tweets and even a couple of blog posts about the tool;

Lucidchart + SchemaPuker: The Winning Combination for a Salesforce Consultant
Phil’s Salesforce Tip of the Week #220

I am so glad people are finding the tool useful, I’ve had a few feature requests and bug reports, which is why I have now released a new version, with the following changes;

  • You can now select if you want all fields displayed, or only relationship fields
  • Much better error handling!
    • Before, if something went wrong, you’d either get an ugly error page, or nothing at all; now you will get some (hopefully) useful details if something goes wrong
  • Huge speed increase, up to 5.9x faster in my super scientific benchmark*
  • All relationships should now be visible; some users were reporting that the lines connecting them didn’t show in Lucidchart
    • I threw my entire dev org at it and was able to see all the relationship lines automatically; if you are still experiencing this issue, please let me know!
  • Minor text fixes

I have had suggestions for more new features, which I do plan to include in future releases, so please keep them coming!

If you have any suggestions, comments, bugs or need help you can send me a tweet, leave a comment, or send me a message!

* Super scientific benchmark method: timing the old and new methods several times and working out the average difference

SchemaPuker: ERDs made easy

SchemaPuker can be accessed here: https://schemapuker.herokuapp.com/

Read on for more information about SchemaPuker!

Often, we need to produce diagrams of our organisation’s data model (aka. ERDs). This will be especially true for those of us who are consultants.

Perhaps you are doing a discovery or analysis and need a copy of the current data model, or maybe you need a ‘current state’ and a ‘to be’ for comparison, or you are designing new functionality that connects with an existing data model, or documenting functionality after completion.

Now, Salesforce does have a tool to visualise the data model, called Schema Builder; however, it cannot export the model, nor can it be customised without actually changing the data model itself.

To solve this problem, I came up with… SchemaPuker! (Thanks to David Carroll for the name, and to David Everitt for the idea in the first place!) For more about how it came to be, and the name, click here.

But for now: SchemaPuker is a fairly simple tool. It allows you to authenticate to Salesforce, get a list of your objects and export them as a PostgreSQL schema file. This file can be imported into Lucidchart (and other tools) in order to generate an editable ERD.
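To give you an idea of what that means, the export is just plain DDL. A hypothetical two-object export might look something like this (illustrative only, not SchemaPuker’s exact output), with the REFERENCES clause being what Lucidchart turns into a relationship line:

    CREATE TABLE "Account" (
        "Id" VARCHAR(18) PRIMARY KEY,
        "Name" VARCHAR(255)
    );

    CREATE TABLE "Contact" (
        "Id" VARCHAR(18) PRIMARY KEY,
        "LastName" VARCHAR(80),
        "AccountId" VARCHAR(18) REFERENCES "Account" ("Id")
    );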

The tool itself is very simple to use. First, navigate to https://schemapuker.herokuapp.com, choose whether you are using a Production/Developer org or a Sandbox and click ‘Login’. You will then be asked to enter your Salesforce credentials and to authorise SchemaPuker to access your org.

Screen Shot 2016-09-01 at 16.36.36

Once authorised, you will be given a list of the objects inside your Salesforce org. You then select the objects you wish to include in your ERD by holding down Command (or Ctrl on Windows/Linux) and clicking, or by typing the API names in the ‘Selected Objects’ box.

sp2

Once you click submit, you are given the PostgreSQL schema. You can either copy/paste this into Lucidchart, or click the ‘Download’ button below the output.

sp3

Next, log in to Lucidchart and create a new drawing, click ‘More Shapes’ at the bottom and then tick ‘Entity Relationship’ and press ‘Save’

lucid1

Now, you can either import the downloaded file from SchemaPuker by pressing ‘Choose File’, or paste the output into the box below. You can ignore steps one and two in the import window.

lucid2

You will now see your Salesforce objects in the sidebar, just under the ‘Entity Relationship’ panel. You can drag the objects onto the canvas, and the relationships between the objects will be automatically created.

lucid3

You can also add new shapes from the ‘Entity Relationship’ panel to extend your ERD as required.

That’s it! Please try it out and let me know how you go!

Please Note: This is still very much beta, and is ‘minimum viable product’. However I am working to improve it on a regular basis, and would love to hear your thoughts.
It is limited to ~30 objects per export and may crash in fun and exciting ways. The app does *not* store any data, nor does it make *any* changes to your salesforce org.

Kittenforce! aka. telling your users when your instance is down for maintenance

The other day, Scott (check out his blog here) and I were at work chatting about the security Trailhead superbadge (specifically, My Domain). When you have a custom domain for your Salesforce instance, you can customise your login page (or replace it entirely).

I then decided that kittens would make the login page far better, and hence;

After this, I went to login to a sandbox to do some actual work, only to be greeted with the ‘Please check your username and password. If you still can’t log in, contact your Salesforce administrator.’ message.

I was fairly sure I hadn’t forgotten my password, so I tried it again… nope, same thing.

What I had forgotten, was the fact that the daily deployment to that environment was happening, and as such all users except for the DevOps team were frozen out.

Which got me thinking… if I can put kittens on the login page, then why not some useful information too?

So, that evening I built this;

The concept is fairly simple: when you put an environment into ‘maintenance’ mode (e.g. during a deployment), it freezes all users, excluding a defined list (e.g. the DevOps team, system admins), and changes the login page to show a message informing users of this.

When you are finished and disable maintenance mode, it will unfreeze all users and change the login page message back.

It uses a custom object to store a list of users who were frozen before the environment entered maintenance mode, to ensure they stay frozen once the environment is changed back to normal mode.
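I won’t walk through the package code here, but the heart of it is that a user’s frozen state lives on the UserLogin object, which Apex can query and update. A minimal sketch of the ‘freeze’ half (the exclusion list and the bookkeeping custom object are simplified away):

    // Freeze everyone except an allowed list of user Ids (e.g. the DevOps team)
    Set<Id> allowed = new Set<Id>{ /* Ids of users allowed to stay active */ };

    List<UserLogin> logins = [
        SELECT Id, UserId, IsFrozen
        FROM UserLogin
        WHERE UserId NOT IN :allowed AND IsFrozen = false
    ];

    for (UserLogin login : logins) {
        login.IsFrozen = true; // the real package first records who was already frozen
    }
    update logins;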

The actual page itself is hosted from a Force.com site and is configured via a custom setting and custom metadata, which among other things allows the default pages to be overridden with your own.

If you would like to try this in your org, click here for the unmanaged package

For installation instructions, see this post.

I would love to hear any feedback you have, feel free to comment below.