Using SQLite to generate documentation and software configurations

Written by Paul Bradley


Table of Contents
  1. SQLite as a documentation creation tool
  2. Document Types
  3. Importing the data into SQLite
  4. The create table SQL
  5. Creating documentation code
  6. Creating software configurations

SQLite as a documentation creation tool

Why would you want to use SQLite to maintain documentation? Or use it to manage software configuration files?

In this article I explain the benefits and demonstrate how to use SQLite to generate documentation.

I maintain several documents for a large health project. The documents display the same data in different ways depending on the intended audience of the document.

The first is a list of publishing standards, and the second is a role-based access control matrix. The data set grows and evolves as the project responds to new use cases. Having an automated process for generating the documents from the data is crucial.

Rather than attempting to maintain the two documents separately and potentially getting them out of synchronisation, I decided to use SQLite to hold a central copy of the data and use SQL (Structured Query Language) queries to produce the documentation formatting codes required.

SQLite uses a single terminal command to access all its features. This command, sqlite3, makes it easy to script the steps in your document creation process. The ability to chain SQLite commands together makes the process flexible and repeatable. Also, the output from these commands can be piped into external files. Most documentation build processes can include these external files, ensuring the latest data is presented within the document.

Leveraging the power of the SQL syntax, you can also control the ordering of the data and the splitting of the large results across multiple pages.

Also, as the whole process is scripted, in my case using make, the commands that generate the documentation can be managed using source code control. Using a version control system, like Git, allows you to track the history of changes made to your documentation data.

Document Types

The data set I maintain is a list of medical document types used in a regional Health Information Exchange. The data set contains three fields: document name, clinical code, and coding scheme. I maintain the list in a simple comma-separated values (CSV) data file, like:

Discharge Letter,823701000000103,SNOMED-CT
Drug/substance use,1064501000000103,SNOMED-CT
End of Life Care Document,861411000000103,SNOMED-CT

Importing the data into SQLite

The CSV file is my master data source. If I need to change an entry, I change it within the CSV file and then re-process the data. If I add a new document type to the health exchange, I will add the entry to the CSV file and then run make load to load the data into SQLite for further processing.

I use GNU Make to run the commands I need. It’s not necessary to use make. If you’re more familiar with bash scripts, or batch files on Windows, then use those. The make system, for me, is an easy way to group a collection of commands together.

The make system reads a makefile within your project folder. The makefile contains the commands you want to run, in the order you want to run them. The makefile can have multiple targets, denoted by a keyword followed by a colon character. When you run the make command followed by a target name, only the commands within that target are executed.
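As a minimal, hypothetical sketch of that structure (the target and file names here are illustrative, not from the project):

```make
# Each target is a keyword followed by a colon; its commands are
# indented below it and run only when that target is invoked.
hello:
	@echo "running the hello target"

clean:
	rm -f build/*.tmp
```

Running make hello executes only the commands under the hello target; make clean executes only the clean-up commands.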

I use two targets within my makefile, making it easy for me to load the data and produce the documentation. The code below shows the SQLite commands I use to import the CSV file into a database table.

load:
    sqlite3 -cmd ".read create_tables.sql" \
            -cmd ".mode csv" \
            -cmd ".import document_types.csv document_types" \
            document_types.db < count.sql

The -cmd parameter of the sqlite3 command-line utility allows you to specify which commands need executing. We need to use multiple -cmd sequences to execute the commands we need in order. In the example above, we start by reading and running the SQL statements within the create_tables.sql file, which drops and re-creates the document types table within the database file document_types.db.

We then switch the mode of operation to CSV before importing the CSV file. The .import command has two parameters. The first parameter is the name of the CSV file containing your data. The second parameter instructs SQLite as to which table the data should be imported into. Finally, we read and execute the count.sql file, which counts the number of records in the table and prints it to the terminal screen.
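If you prefer scripting the load step without the sqlite3 command-line utility, the same flow can be sketched with Python's built-in sqlite3 and csv modules. This is a sketch, not the article's process: it uses an in-memory database and inlines the three sample rows from the CSV shown earlier.

```python
import csv
import sqlite3

# Recreate the table and import the CSV rows, mirroring the CLI-based load target.
conn = sqlite3.connect(":memory:")  # use "document_types.db" for a real file
conn.execute("DROP TABLE IF EXISTS document_types")
conn.execute('CREATE TABLE document_types ("name" TEXT, "code" TEXT, "scheme" TEXT)')

rows = csv.reader([
    "Discharge Letter,823701000000103,SNOMED-CT",
    "Drug/substance use,1064501000000103,SNOMED-CT",
    "End of Life Care Document,861411000000103,SNOMED-CT",
])
conn.executemany("INSERT INTO document_types VALUES (?, ?, ?)", rows)

# Equivalent of count.sql: report how many records were imported.
count = conn.execute("SELECT COUNT(*) FROM document_types").fetchone()[0]
print(f"{count} : Records Imported")
```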

The create table SQL

The SQL code below shows the contents of the create_tables.sql file. It starts by dropping the table document_types to clear out the old data, and then re-creates the table by supplying the column definitions.

DROP TABLE IF EXISTS document_types;

CREATE TABLE document_types (
    "name"      TEXT,
    "code"      TEXT,
    "scheme"    TEXT
);

The contents of the file count.sql are shown below.

SELECT COUNT(*) || ' : Records Imported'
  FROM document_types;

Now that we have our data in an SQLite database file, we can start producing the documentation code.

Creating documentation code

It shouldn’t matter which documentation system you use; LaTeX, Asciidoctor, and DocBook all support including external files. The key is getting SQLite to produce output in the format that your documentation system requires.

Personally, I use LaTeX and the code to display the data within a table is shown below:

\section*{Document Types}
\begin{center}
    \begin{tabular}{p{8cm} p{3cm} p{2.7cm} }
        \bfseries{Value} & \bfseries{Node} & \bfseries{Schema} \\
        \toprule
            \input{doc-types-table.tex}
        \bottomrule
    \end{tabular}
\end{center}

The line of interest in the above code snippet is \input{doc-types-table.tex}. This line instructs the LaTeX system to import the commands contained within a file called doc-types-table.tex. We need to get SQLite to generate that file with the data and the correct formatting codes to render it as part of the table.

The LaTeX format for the table row is shown below. The ampersand character separates each column within the table. The end of each row is depicted by the two backslash characters. The \midrule command produces a thin line between the table rows.

Discharge Letter & 823701000000103 & SNOMED-CT \\ \midrule

Therefore, we need a SQL query that outputs the data in the above format. The SQL I use is shown below. It uses the SQLite || operator to concatenate (join) the data fields together, separated by ampersand characters. The statement also sorts the data into name order and limits the number of records returned to 30, as only 30 entries will fit on a given page.

SELECT name || ' & ' || code || ' & ' || scheme || ' \\ \midrule ' AS LaTeX
  FROM document_types
 ORDER BY name
 LIMIT 0, 30;
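Subsequent pages follow the same pattern by moving the offset, the first number in the LIMIT clause. As a sketch using Python's built-in sqlite3 module with an in-memory table (the generated document names below are placeholders, not project data):

```python
import sqlite3

# Build a throwaway table with 45 placeholder rows to demonstrate paging.
conn = sqlite3.connect(":memory:")
conn.execute('CREATE TABLE document_types ("name" TEXT, "code" TEXT, "scheme" TEXT)')
conn.executemany(
    "INSERT INTO document_types VALUES (?, ?, ?)",
    [(f"Document {i:02d}", str(1000 + i), "SNOMED-CT") for i in range(45)],
)

PAGE_SIZE = 30
sql = """SELECT name || ' & ' || code || ' & ' || scheme || ' \\\\ \\midrule ' AS LaTeX
           FROM document_types
          ORDER BY name
          LIMIT ?, ?"""

# Page 1 is LIMIT 0, 30; page 2 is LIMIT 30, 30; and so on.
page1 = conn.execute(sql, (0, PAGE_SIZE)).fetchall()
page2 = conn.execute(sql, (PAGE_SIZE, PAGE_SIZE)).fetchall()
print(len(page1), len(page2))  # 30 rows, then the remaining 15
```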

If the above SQL is saved into a file called publishing_standards_page.sql, then we can update our makefile to include a docs: target, so we can issue the command make docs every time we want to re-generate the documentation.

The SQLite command is slightly different as we switch to list mode and redirect the output to the file doc-types-table.tex within our LaTeX project folder.

docs:
    sqlite3 -cmd ".mode list" \
        document_types.db \
        < publishing_standards_page.sql \
        > /path/to/latex/doc-types-table.tex

Once you get your head around the idea of your database generating your documentation, you can automate the creation of many different documents for different audiences from the same data set. For example, we also have a new role request document, which users must complete before we define new roles. This document is a PDF that again lists the document types, but adds tick boxes so people can indicate which documents the new role should have access to view.

LaTeX is pretty good here, as it can create forms within a PDF. The SQL below is saved into a file called roles_request_form.sql and generates the LaTeX code required to produce a PDF form. Each checkbox within a form needs a unique name; therefore, I’ve used SQLite’s built-in ROWID column, which returns each record’s unique internal row number.

SELECT name || ' & \CheckBox[name=' || ROWID ||
    ', width=1em, bordercolor={lightgray}]{Yes} \\ \midrule ' AS LaTeX
  FROM document_types
 ORDER BY name
 LIMIT 0, 30;

The updated makefile is shown below.

docs:
    sqlite3 -cmd ".mode list" \
        document_types.db \
        < publishing_standards_page.sql \
        > /path/to/latex/doc-types-table.tex
    sqlite3 -cmd ".mode list" \
        document_types.db \
        < roles_request_form.sql \
        > /path/to/latex/doc-roles-form.tex

Creating software configurations

The same process can also be used to generate your software configuration files. The health exchange uses XML files to apply restrictive policies to groups of users. For example, a clinical user can view all document types except two specific documents. The database is used to generate the XML configuration files the software needs to configure these policies.

SELECT
  '<AttributeValue DataType="http://www.w3.org/2001/XMLSchema#string">' ||
  code  || '</AttributeValue>' AS ClinicalRole
 FROM document_types
WHERE code IN ('149741000000107', '723394009')
GROUP BY code
ORDER BY code;

The example SQL above extracts the document clinical codes for the two documents clinical staff aren’t allowed to view.

config:
    sqlite3 -cmd ".mode list" \
        document_types.db \
        < clinical_role_restrictions.sql \
        > /path/to/software/configs/roles.xml
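The restriction query can be exercised end-to-end as a sketch with Python's built-in sqlite3 module. Only the two clinical codes come from the WHERE clause above; the document names below are invented placeholders for illustration.

```python
import sqlite3

# Placeholder data: only the two restricted codes come from the article;
# the document names are hypothetical.
conn = sqlite3.connect(":memory:")
conn.execute('CREATE TABLE document_types ("name" TEXT, "code" TEXT, "scheme" TEXT)')
conn.executemany(
    "INSERT INTO document_types VALUES (?, ?, ?)",
    [
        ("Discharge Letter", "823701000000103", "SNOMED-CT"),
        ("Placeholder document A", "149741000000107", "SNOMED-CT"),
        ("Placeholder document B", "723394009", "SNOMED-CT"),
    ],
)

sql = """
SELECT '<AttributeValue DataType="http://www.w3.org/2001/XMLSchema#string">' ||
       code || '</AttributeValue>' AS ClinicalRole
  FROM document_types
 WHERE code IN ('149741000000107', '723394009')
 GROUP BY code
 ORDER BY code
"""

# Each row is one XML element ready to be written into the config file.
restricted = [row[0] for row in conn.execute(sql)]
print("\n".join(restricted))
```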