Synchronising GitHub and an Internal Git Server

Note: I found this mini How-To while having a clean-up of my GitHub repositories. I figured it would be worth sharing on my blog. Hopefully it is of use to someone. If you want to play around with the steps, but don’t want to use one of your existing projects, you can use this repository.


The Problem

  1. I have my repository hosted on GitHub
  2. I have an internal Git server used for deployments
  3. I want to keep these synchronised using my normal workflow

Getting Started

Both methods I’ll describe need a “bare” version of the GitHub repository on your internal server. This worked best for me:

[code lang=bash]
cd ~/projects/repo-sync-test/
scp -r .git user@internal-server:/path/to/sync.git
[/code]

Here, I’m changing to my local working directory, then using scp to copy the .git folder to the internal server over ssh.
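
Alternatively, if the internal server can reach GitHub directly, you can skip the scp and create the bare copy in place. A sketch, using the example repository’s HTTPS URL:

[code lang=bash]
# Run on the internal server: clone straight from GitHub into a bare repository
git clone --bare https://github.com/chrismcabz/repo-syncing-test.git /path/to/sync.git
[/code]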

More information and examples can be found in the online Git Book:

4.2 Git on the Server – Getting Git on a Server

Once the internal server version of the repository is ready, we can begin!

The Easy, Safe, But Manual Method:

[code lang=text]
+--------+      +----------+     /------>
| GitHub |      | internal | -- deploy -->
+--------+      +----------+     \------>
    ^                 ^
    |                 |
    |    +-----+      |
    \----| ME! |------/
         +-----+
[/code]

This is the method I have used before, and it is the least complex. It needs the least setup, but doesn’t sync the two repositories automatically. Essentially, we are going to add a second Git remote to the local copy, and push to both servers as part of our normal workflow.

In your own local copy of the repository, checked out from GitHub, add a new remote a bit like this:

[code lang=bash]
git remote add internal user@internal-server:/path/to/sync.git
[/code]

This guide on help.github.com has a bit more information about adding Remotes.

You can change the remote name of “internal” to whatever you want. You could also rename the remote which points to GitHub (“origin”) to something else, so it’s clearer where it is pushing to:

[code lang=bash]
git remote rename origin github
[/code]

With your remotes ready, to keep the servers in sync you push to both of them, one after the other:

[code lang=bash]
git push github master
git push internal master
[/code]

  • Pros: Really simple
  • Cons: It’s a little more typing when pushing changes
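
As an aside, Git can also attach more than one push URL to a single remote, so a single push updates both servers. A sketch, using placeholder URLs in the same shape as above:

[code lang=bash]
# Register both servers as push targets on the "github" remote
git remote set-url --add --push github git@github.com:user/repo-sync-test.git
git remote set-url --add --push github user@internal-server:/path/to/sync.git

# One push now updates GitHub and the internal server together
git push github master
[/code]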

The Automated Way:

[code lang=text]
+--------+          +----------+     /------>
| GitHub | =======> | internal | -- deploy -->
+--------+          +----------+     \------>
    ^
    |
    |     +-----+
    \-----| ME! |
          +-----+
[/code]

The previous method is simple and reliable, but it doesn’t really scale that well. Wouldn’t it be nice if the internal server did the extra work?

The main thing to be aware of with this method is that you wouldn’t be able to push directly to your internal server – if you did, then the changes would be overwritten by the process I’ll describe.
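
If you want to guard against mistaken pushes, one option is a pre-receive hook on the internal repository that rejects them outright. A minimal sketch (the hook location is standard Git; the message is my own):

[code lang=bash]
#!/bin/sh
# Save as /path/to/sync.git/hooks/pre-receive and make it executable.
# Reject every push: this repository is a read-only mirror of GitHub.
echo "This repository syncs from GitHub; push there instead." >&2
exit 1
[/code]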

Anyway:

One problem I had when setting this up initially is that the local repositories on my PC are cloned from GitHub over SSH, which would require a lot more setup to allow the server to fetch from GitHub without any interaction. So what I did was remove the existing remote, and add a new one pointing to the HTTPS URL:

[code lang=bash]
# on the internal server
cd /path/to/repository.git
git remote rm origin
git remote add origin https://github.com/chrismcabz/repo-syncing-test.git
git fetch origin
[/code]

You might not have to do this, but I did, so best to mention it!

At this point, you can test everything is working OK. Create or modify a file in your local copy, and push it to GitHub. On your internal server, do a git fetch origin to sync the change down to the server repository.
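
The round trip looks something like this (the file name is purely illustrative):

[code lang=bash]
# On your PC: commit a change and push it up to GitHub
echo "sync test" >> README.md
git add README.md
git commit -m "Test the sync"
git push github master

# On the internal server: bring the change down
cd /path/to/sync.git
git fetch origin
[/code]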

Now, if you were to try a normal git merge origin at this point, it would fail, because a "bare" repository has no working tree to merge into, and a fresh clone of the server repository would still only reflect the previous commit. Instead, to see our changes reflected, we can use git reset (I’ve included example output messages):

[code lang=bash]
git reset refs/remotes/origin/master

Unstaged changes after reset:
M LICENSE
M README.md
M testfile1.txt
M testfile2.txt
M testfile3.txt
[/code]

Now if we were to clone the internal server’s repository, it would be fully up to date with the repository on GitHub. Great! But so far it’s still a manual process, so let’s add a cron task to remove the need for human intervention.

In my case, adding a new file to /etc/cron.d/ with the contents below was enough:

[code lang=bash]
*/30 * * * * user cd /path/to/sync.git && git fetch origin && git reset refs/remotes/origin/master > /dev/null
[/code]

What this does is tell cron that every 30 minutes it should run our command as the user user (swap in whichever account should own the sync). Stepping through the command, we’re asking it to:

  1. cd to our repository
  2. git fetch from GitHub
  3. git reset like we did in our test above, while sending the messages to /dev/null

That should be all we need to do! Our internal server will keep itself up-to-date with our GitHub repository automatically.

  • Pros: It’s automated; only need to push changes to one server.
  • Cons: If someone mistakenly pushes to the internal server, their changes will be overwritten

.NET Officially Coming to Mac + Linux in 2015

Straight from the blog of Scott Hanselman:

  • We are serious about open source and cross platform.
    • .NET Core 5 is the modern, componentized framework that ships via NuGet. That means you can ship a private version of the .NET Core Framework with your app. Other apps’ versions can’t change your app’s behavior.
    • We are building a .NET Core CLR for Windows, Mac and Linux and it will be both open source and it will be supported by Microsoft. It’ll all happen at https://github.com/dotnet.
    • We are open sourcing the RyuJit and the .NET GC and making them both cross-platform.
  • ASP.NET 5 will work everywhere.
    • ASP.NET 5 will be available for Windows, Mac, and Linux. Mac and Linux support will come soon and it’s all going to happen in the open on GitHub at https://github.com/aspnet.
    • ASP.NET 5 will include a web server for Mac and Linux called kestrel built on libuv. It’s similar to the one that comes with node, and you could front it with Nginx for production, for example.
  • Developers should have a great experience.
    • There is a new FREE SKU for Visual Studio for open source developers and students called Visual Studio Community. It supports extensions and lots more all in one download. This is not Express. This is basically Pro.

There’s more over on his blog post, but as a developer, I think this is a very big deal. My first words, when I read about it were pretty much “holy shit.”

Setting Up Chef

I just finished setting up Chef, to have a play around with this DevOps stuff I keep hearing about. While Chef is quite well documented, I found myself struggling in places where things weren’t quite clear enough. So naturally, I’m posting how I got myself up and running.

[Note: I haven’t actually done anything with this setup yet, other than get it working.]

Step One: Get A Server

There are two parts to a Chef install: client and server. You can run both on one machine, but given how much Chef slows down my Joyent VM, I’d suggest keeping it off of your day-to-day workstation.

I used my Joyent credit to set up a new Ubuntu 12.04 64-bit server. Chef Server only supports 64-bit Ubuntu or RedHat/CentOS. Once the server was provisioned, I followed this 5-minute guide to lock down the server enough for my needs (this being just an experiment and all…)

Step Two: Set the Server FQDN

Once the server is prepared, make sure it has a resolvable, fully qualified domain name before going any further. While the Chef docs do mention this, they do so after the rest of the setup instructions. This was one area I was banging my head against for ages, wondering why the built-in Nginx server wasn’t working.

Setting the hostname on my Joyent VM was a case of running:

[code language="bash"]
$ sudo hostname 'chef.example.com'
$ echo "chef.example.com" | sudo tee /etc/hostname
[/code]

As I wasn’t on the same network as my Chef server, I added a DNS A record to match the server FQDN.
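
If adding a DNS record isn’t an option, an entry in /etc/hosts on your workstation is enough for testing. A sketch, using a placeholder address:

[code language="bash"]
# /etc/hosts on the workstation (203.0.113.10 is a placeholder IP)
203.0.113.10    chef.example.com
[/code]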

Step Three: Install Chef Server

This bit was really easy, probably the easiest part of the whole setup. In short: download the latest Chef Server package for your platform, install the package, run the reconfigure tool. In my case, this was:

[code language="bash"]
$ wget https://opscode-omnibus-packages.s3.amazonaws.com/ubuntu/12.04/x86_64/chef-server_11.0.10-1.ubuntu.12.04_amd64.deb
$ sudo dpkg -i chef-server_11.0.10-1.ubuntu.12.04_amd64.deb
$ sudo chef-server-ctl reconfigure
[/code]

The Chef installer will whirr away, using Chef to set up your new installation automatically. How cool is that?

Step Four: Copy Server Certificates to Your Workstation

This wasn’t mentioned anywhere I could see, but I figured it out from some snippets written around the web. To successfully setup the Chef client, you need some security certificates from your new server. I used SCP from my local PC:

[code language="bash"]
$ scp user@chef.example.com:/etc/chef-server/admin.pem ~/tmp/
$ scp user@chef.example.com:/etc/chef-server/chef-validator.pem ~/tmp/
[/code]

If you find you don’t have permission to copy directly from their default location, SSH to the server and sudo copy them to somewhere you can.
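
In that case, something along these lines should do it (same paths as above):

[code language="bash"]
# On the Chef server: copy the certificates somewhere your login can read them
sudo cp /etc/chef-server/admin.pem /etc/chef-server/chef-validator.pem ~/
sudo chown $USER ~/admin.pem ~/chef-validator.pem
[/code]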

Step Five: Install the Chef Client

Now we should be armed with everything we need to install the client tools. I’m using the Debian-derived Crunchbang, but any *NIX-based OS should be roughly the same as below. If you’re on Windows, I’m afraid you’re on your own.

Run the “Omniinstaller” for Chef:

[code language="bash"]
$ curl -L https://www.opscode.com/chef/install.sh | sudo bash
[/code]

Create a .chef folder in your home directory, and add the certificates copied from the server:

[code language="bash"]
$ mkdir ~/.chef
$ cp ~/tmp/*.pem ~/.chef
[/code]

Configure Knife (the main Chef CLI utility):

[code language="bash"]
$ knife configure --initial
WARNING: No knife configuration file found
Where should I put the config file? [/home/chris/.chef/knife.rb] /home/chris/.chef/knife.rb
Please enter the chef server URL: [https://localhost:443] https://chef.example.com:443
Please enter a name for the new user: [chris]
Please enter the existing admin name: [admin]
Please enter the location of the existing admin's private key: [/etc/chef-server/admin.pem] /home/chris/.chef/admin.pem
Please enter the validation clientname: [chef-validator]
Please enter the location of the validation key: [/etc/chef-server/chef-validator.pem] /home/chris/.chef/chef-validator.pem
Please enter the path to a chef repository (or leave blank):
Creating initial API user...
Please enter a password for the new user:
Created user[chris]
Configuration file written to /home/chris/.chef/knife.rb
[/code]

Test Knife by listing all users:

[code language="bash"]
$ knife user list
admin
chris
[/code]

Wrap Up

That’s it! You now have a working Chef installation. Or at least, I do. Steps two and four are the steps I had to hunt out and piece together myself to get Chef up and running. Everything else is more or less as documented.

All that’s left to do now is figure out how to use Chef!

Run Coder for Raspberry Pi on Your Linux PC

That cool little “Coder for Raspberry Pi” project from Google which I linked to earlier doesn’t just run on Raspberry Pi. You can run it on any old Linux PC (Mac works too, but the instructions are slightly different).

I set it up in less than 2 minutes using these commands (note that I’m running Debian Sid):

[code lang="bash"]
sudo useradd -M pi
sudo apt-get install redis-server
cd ~/projects
git clone https://github.com/googlecreativelab/coder.git
cd coder/coder-base
npm install
npm start
[/code]

Node.js is also a requirement, so if you don’t have it, you’ll need to install it at step 2 as well.
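
On Debian and its derivatives, something like the following should cover it (package names vary by release, so this is a sketch):

[code lang="bash"]
sudo apt-get install nodejs npm
[/code]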

Once everything is up and running, point your browser at https://localhost:8081/. You’ll need to specify a password the first time you run Coder, after which you’ll be able to try the environment out. It’s pretty neat, and the sample clone of Asteroids is quite addictive!

Developers and “Ring Rust”

Skills are much like muscles: if you don’t use them for a while they start to atrophy. They say you never forget how to ride a bike, but there are many skills where you will forget things if you don’t do them frequently. The collection of skills needed to be a developer is no exception to the rule.

I’m somewhat speaking from experience here; my current role and workload have removed me from day-to-day development work for about a full year now. I still need to dive into the code base every day to research issues or change requests, but actually writing something is quite rare these days. I’m aware of the skills problem, and I’ll describe below how I’m trying to address it, but nevertheless I’ve been self-conscious enough about it that I’ve recently found myself resisting taking on development tasks. I know it’ll take me a lot longer to get up to speed and complete a task than one of the developers who are working on the application every day, and the time-scales involved are usually very tight. It’s a vicious circle: I’m rusty because I’m not doing development, but I’m avoiding development because I’ve been away from it for too long. In the corporate world it’s very easy to get rail-roaded into a niche, and incredibly hard to get out of it.

Time away for a developer is exacerbated by the speed at which technology and techniques move forward in our industry. What was cutting edge a year ago is old hat today, and may even be something you’re encouraged not to do any more. If you haven’t been practising and keeping up with developments, you may not be aware of this and can get yourself into all sorts of bother.

So what can you do?

Read. Lots.

Subscribing to a load of developer sites and blogs in Feedly is one source, but a more convenient way I’ve found to stay on top of things is Flipboard:

  • Follow other developers on Twitter (actually, you don’t have to, but it’s nice to), and create/add them to a list, such as “Developers & News”.
  • Within Flipboard, add your Twitter account if you haven’t already.
  • Still within Flipboard, go to your Twitter stream. Tap your name at the top and select “Your Lists.”
  • Open the relevant list, then tap the subscribe button.

Your list will be added to your Flipboard sources and you’ll have an always-up-to-date magazine of what’s happening. The reason I suggest Flipboard is that it grabs the link in a tweet, pulls in the article, and will try to reformat it into something you can easily flip through. It makes reading on a tablet so much more enjoyable. Some of the links you get will not be relevant, but a large amount of it will be gold. I try to set aside 30 minutes a day to go through at least the headlines. If work is exceptionally busy I’ll aim for twice a week. Saving to a “Read it Later” service like Pocket is useful for storing the most interesting articles.

What about books? Yes, by all means, read plenty of technical books. They usually go into far more depth than even the best online article. With tablets, eReaders, and eBooks, the days of thick tomes taking up lots of space are behind us, and no longer a major concern (at least for me). There is, however, one major issue with books: they take a long time to write, and are often out of date quickly. The technology might have moved on by the time the book is published. Schemes such as the Pragmatic Programmer’s “Beta Book” scheme help a lot here, releasing unfinished versions of the book quickly and often, to iron out problems before publishing. Of course, you also need to be aware of the topic to be able to pick out a book about it!

Be Curious. Experiment.

Reading all the material in the world will not help you anywhere near as much as actually doing something. The absolute best thing you could do would be to develop side projects in your spare time. Admittedly, if you’re busy, time can be at a premium! Probably a good 99% of side projects I start lie unfinished or abandoned, simply for lack of time. So instead, I perform small experiments.

Curious about something? Do something small to see how it works, or “what happens if…”. Personal, recent, examples would be:

  • Looking into static site generators, and as a result, learning about Jekyll, Github pages for hosting… and as a result of trying out Jekyll templates I brushed up on Responsive Web Design, looked into Zepto, and fell in love with Less.
  • Trying out automating development workflows – installed Node.js (which then allowed me to run this), setup some basic Grunt.js tasks, Imagemagick batch processing, and some more Less.
  • Running Linux as my primary OS, and no Windows partition to fall back on – so in at the deep-end if something goes wrong… but it’s helped me brush up on my MySQL and Apache admin skills again, as well as generally working with the command-line again. The other week I fixed someone’s VPS for them via SSH  – something I would have struggled to do only a few weeks ago. In case you’re interested: the disk was filling up due to an out of control virtual host error log, which I had to first diagnose, and then reconfigure logrotate to keep the site in check.

An earlier example, from before I was entirely away from development: I wanted to see what was different in CodeIgniter 2, so I made a very small app. My curiosity then extended into “how does Heroku work?” – so I deployed to Heroku. I couldn’t pay for a database I knew how to work with, so I tried out a little bit of MongoDB. Then it was the Graph API from Facebook… so again, I extended the application, this time with the Facebook SDK.

Little experiments can lead to a lot of learning. I would never claim to be an expert in any of the technologies I mention, but neither am I ignorant.

Shaking it Out

I’d still need a major project to focus on to really shake off the “ring rust” and get back up to full development potential, but I’m pretty confident it wouldn’t take as long as it would have if I hadn’t been trying to keep my skills as fresh as I can.

Does Anyone Write Pseudo-Code Any More?

Back in the mists of time, when I was in University 1, one of the very first principles we were taught was writing pseudo-code.

For those of you unfamiliar with the term, pseudo-code is the practice of writing simple, "half code" down, usually on paper, as a guide to help you work through a problem before even touching an IDE. The goal is to be as high-level as possible, and mostly language independent. It is a combination of plain English and the most basic of code: mostly simple conditionals, loops, etc. Occasionally you would write a function reference if you absolutely needed to. The example below is similar to how I remember being taught pseudo-code, but others can be found on the relevant Wikipedia page.
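
[code lang=text]
read list of numbers from the input
set total to zero
for each number in the list
    if number is greater than zero then
        add number to total
print total
[/code]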

I remember pseudo-code being incredibly useful at the time; problems became much simpler to think through, even as a complete novice programmer 2. I wrote pseudo-code for every new problem I had to code a solution to. Somewhere along the line though, I fell out of the habit. I would start in the IDE with a vague idea, then proceed to hack and refine (refactor) as I went along. Chipping away at a problem seems to give a greater sense of forward-momentum, so maybe that is why?

Occasionally I will still bust out the pencil and paper if I am really stuck, or thinking about the problem away from the computer, but these times are rare nowadays. It got me thinking that I don’t recall seeing anyone write pseudo-code in my entire professional career. Is it something we just learn at university and do not carry on into “the real world” as we become more experienced? Is it even taught any more?

Do you still write pseudo-code? Did you ever?

  1. Back when Turbo Pascal was being taught, and Java was yet to enter the curriculum. 
  2. I started my degree in computing with no prior academic experience with computers.  The little “programming” skills I had were writing simple “GOTO” loops on my Commodore 64, many years prior. 

Coda 2 Coming May 24th

Coda 2 coming May 24th – it’s about time! Coda is one of the reasons I keep coming back to the Mac platform. It’s one of those apps that is a joy to use. Espresso overtook it for a while, but this new version looks like a very worthy upgrade – check out the Coda Tour video.