If you are a Bibliocloud client, you have always been able to click a button to download your metadata directly from Bibliocloud into one beautiful PDF layout. You can do that for one, or a thousand, products in one go. It’s a hugely time-saving feature, and one directly related to selling more books.
AIs (advance information sheets) remain at the core of the selling-in process, taken into all kinds of sales meetings between publisher sales teams and their customers: at sales conferences, in head office presentations, to wholesalers, at Meet the Buyer events, and at store level. We see no sign of them being superseded by technology any time soon!
Over the years, we have often been asked to implement our clients' own designs in our code as part of their onboarding. We take a client's design and write the code and tests that allow Bibliocloud to produce AIs in that style on demand. We've thus had the chance to see all manner of expert designs, and to learn what works and what doesn't. We wanted to take all those learnings and produce our own super-AI – and make it available to all our clients for free, as a built-in benefit of their licence fee.
This has been an important project for us and many of our clients, so I thought you might enjoy the background story.
Back in April 2017 we wrote up a detailed brief of what we needed our new AI sheet to do and, along with many examples of our existing AIs, sent it all over to super-designer Tom Spindlow. Our Andy had worked with Tom on many successful projects before and knew him to be an excellent designer.
Tom soon sent back his first impressions – keen to reinforce the point that this wasn’t a design, but a sketch, intended to ensure that all the required elements were present and weighted according to their importance.
We did some back and forth, demoting certain elements and promoting others, until Tom had enough insight to be able to provide a second cut.
From there, Tom worked his magic and produced some beautiful, clean designs: one serif, one sans. The idea was that the designs should be modular and extensible: as new, as-yet-unknown requirements emerged, we would be able to incorporate them into the basic layout.
At this point we shared the designs with some key clients who we knew were looking for new ideas for their AIs, and refined the design further based on their feedback.
At last, the programming could begin! Andy implemented a forward-looking design pattern which would give us the option to adapt this feature later, while writing only the code that was needed right away. We pragmatically adjusted the spec to deliver the project in the most time- and cost-effective way, balancing the desire for design perfection against the need to deliver value to our customers as rapidly as possible.
In order to check that Bibliocloud produces AIs as expected, we don’t just click on a few links to see what happens.
Our development process
When we write code, it's on our laptops, on a copy of the main codebase called a branch. Imagine a tree: the trunk of the tree is the official codebase (called master), and our branch of code is an offshoot, which we can edit without sullying the official version.
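In git terms, the branching step looks something like the sketch below. The repository, branch and file names here are illustrative, not Bibliocloud's; a throwaway repo is set up first so the sketch stands alone.

```shell
# Set up a throwaway repo so the sketch is self-contained:
repo=$(mktemp -d) && cd "$repo"
git init -q .
git symbolic-ref HEAD refs/heads/master   # name the trunk "master"
git -c user.name=Demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "initial commit"

# The workflow itself: branch off master, edit, commit.
# The trunk is untouched until the branch is merged back in.
git checkout -q -b ai-sheet-redesign
echo "new layout" > ai_sheet.txt
git add ai_sheet.txt
git -c user.name=Demo -c user.email=demo@example.com \
    commit -q -m "Add new AI sheet layout"

git rev-parse --abbrev-ref HEAD           # prints: ai-sheet-redesign
```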
Before we share that code with anyone else, we write automated tests to make sure that the code does what we expect it to do and that it's formatted according to our style guide. We run these on our own computers. A green dot means a test has passed; red means it's failed, and shows us where to look for the problem. To save time, we can run just the tests for the bit of code we've edited, as the whole test run takes about twenty minutes.
We then share that code with colleagues on a code storage service called Bitbucket, using something called a pull request to invite comment and sign-off. It's called that because we request that our new code is pulled into the master version of the Bibliocloud codebase.
When the code has been buffed to perfection, it’s approved by colleagues and merged into the master branch. It’s now ready to be deployed to the real website.
If the code has been in development for a long time, there will be many tens, even hundreds, of cycles of pull requests. On this project, there were 13 main pieces of work, each with multiple pull requests and commits to the feature branch.
Depending on the scale and nature of the change, we will either send that code to the Bibliocloud website, or to a test or demo website for review.
Before the code is allowed onto the live website, it runs through a continuous integration service called Codeship, which runs our whole suite of tests again. So if our change has inadvertently broken something in a far-flung corner of the application, this process will catch the problem before the code is released.
If the code is a proposed structural change, we will often configure it so that users can flip between the old and the new version, until we have solicited feedback and everyone is up to speed.
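The post doesn't describe Bibliocloud's actual switching mechanism, but the idea can be sketched as a simple per-account feature flag in plain Ruby; all the names here are illustrative.

```ruby
# Illustrative stand-in for an account record (not Bibliocloud's real model).
Account = Struct.new(:prefers_new_ai_layout, keyword_init: true)

# Pick the old or the new template per account, so users can flip
# between versions while feedback is gathered.
def ai_template_for(account)
  account.prefers_new_ai_layout ? :ai_sheet_2018 : :ai_sheet_classic
end

early_adopter = Account.new(prefers_new_ai_layout: true)
ai_template_for(early_adopter)   # => :ai_sheet_2018
```

Once everyone is up to speed, the flag and the old template can simply be deleted.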
Special sorts of tests for PDFs
Normally our tests look a bit like this:
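A sketch of this first kind of test, checking what the data is: the real suite uses a test framework, but the shape of the check is the same in plain Ruby, and the class and values below are illustrative.

```ruby
# Illustrative stand-in for a product record (not Bibliocloud's real model).
Product = Struct.new(:title, :pub_date, keyword_init: true)

# Given certain circumstances, is the data what we expect?
product = Product.new(title: "The Example Book", pub_date: "2018-06-07")

raise "wrong title"    unless product.title    == "The Example Book"
raise "wrong pub date" unless product.pub_date == "2018-06-07"
puts "."   # the green dot of a passing test
```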
or a bit like this:
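And a sketch of the second, browser-flavoured kind: in the real suite the "page" would be a rendered web page driven by a browser-testing library, but here it is just a string so the sketch stands alone, with illustrative content.

```ruby
# Stand-in for a rendered page (a real test would drive a browser).
page = "<h1>The Example Book</h1><p>Publication date: 7 June 2018</p>"

# How does the page look in certain circumstances?
raise "title not shown"    unless page.include?("The Example Book")
raise "pub date not shown" unless page.include?("7 June 2018")
puts "."   # the green dot of a passing test
```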
These tests check what the data is, or how the browser looks, in certain circumstances. But PDFs are neither data nor browsers, so they needed something extra. As well as testing the classes which generate the data, Andy came up with a new sort of integration test to check that the generated PDF is exactly as expected.
First, he generated the AI programmatically, and we stored it in our codebase:
Then he wrote tests which generate that same AI using the current version of the code, and literally overlay one on the other to see if there are any differences. Any differences are reported, the test fails, and we can amend the code to make sure that our changes haven't altered the expected behaviour of the system.
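The two steps can be sketched like this in plain Ruby, with a stub standing in for the real PDF generator, and a byte-for-byte comparison standing in for the visual overlay; every name here is illustrative.

```ruby
require "tmpdir"

# Stand-in for the real PDF generator: returns deterministic bytes so
# the sketch runs without a PDF library.
def render_ai(title)
  "%PDF-1.4 AI sheet for #{title}"
end

fixtures = Dir.mktmpdir
reference_path = File.join(fixtures, "ai_reference.pdf")

# Step 1: generate the AI once and store it as the approved reference.
File.binwrite(reference_path, render_ai("The Example Book"))

# Step 2, run on every test cycle: regenerate with the current code and
# compare against the reference. (The real test overlays the two PDFs
# and reports visual differences; equality is the simplest stand-in.)
current = render_ai("The Example Book")
if current == File.binread(reference_path)
  puts "."   # green dot: the output still matches the approved design
else
  raise "AI output has drifted from the approved design"
end
```

The key property is that the reference copy is fixed in the codebase, so any later code change that alters the output, however subtly, makes the comparison fail.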
Once the feature was complete and all the necessary tests written, we pushed the code to the main site. We will be inviting all clients who don't have a bespoke AI template to review their data in our new format, and we hope they soon begin to enjoy the benefits of this rigorously produced design by selling many more books as a result!