The hardware
An internet search for DIY flight pedals returns plenty of images, but few documented designs. I copied the simple mechanical arrangement commonly used, and set out to replicate it with a combination of aluminium tubing and 3D printed parts. 3D printing is excellent for small detailed components, but lacks the strength required for larger components, and takes too long to print them in any case. Hence I find that an approach using off-the-shelf dimensioned metal or timber from the local hardware store, in combination with 3D printed parts, works really well.
The photos below show the physical construction:
The 3D modelling was done with solvespace for some components, and openscad for others. I favour open source cad tools for the same reason I prefer open source software tools: I want to own my creative efforts indefinitely, and only open source tooling can ensure this.
The 3D designs are relatively straightforward. If you have access to a 3D printer it's well worth lurking in places like hackaday to discover tips and tricks - I use a couple here.
The mounts that attach to the baseboard embed M4 nuts inside them. The idea here is that you design the model with an internal void for the nut, and arrange for the 3D printer to pause for the nuts to be inserted, after which printing is resumed and the nut ends up completely encased. A photo of the paused print:
The ball joints connect the pedals to the lever arm via threaded rod. These are typically manufactured in metal, and are not inexpensive. The 3D printed design I used came straight from thingiverse. It's clever in that the ball is printed within the socket that wraps it, with a gap that is tuned to be as tight as one's printer can support. In practice the ball ends up slightly fused to the socket, but one simply snaps it loose once printing is completed. I'm surprised how well these joints work - they are smooth enough in operation, and there is little slack. The non-smooth plastic-on-plastic connection will almost certainly wear out with enough use, but in that case it would be a quick job to reprint replacements. In summary - good enough for this purpose!
The electronics
The electronics are so simple they don't warrant a schematic diagram: an off-the-shelf blue pill development board, with a 10k linear potentiometer wired as a voltage divider onto input pin A0. Power is supplied via the USB port.
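For the record, the divider math is trivial: with the pot's wiper at fraction x of its travel (0 ≤ x ≤ 1), pin A0 sees a voltage of x × Vcc, so pedal position maps linearly onto the ADC reading.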
The software
The rust programming language intrigues me. I love the idea of a statically typed, modern programming language that can fit into the niches currently dominated by C/C++. High performance code is one such niche, but another is the ability to run "bare metal" on cheap microcontrollers.
Hence while a common approach to building a custom USB HID controller like this would be to use an arduino with a usb joystick library with code written in C++, I wanted to use rust, and chose the ubiquitous and cheap "blue pill" board. Getting an embedded development environment up and running can be a huge time suck - luckily this board is well supported in the embedded rust world, with the excellent blue-pill-quickstart github template project. The project includes enough information to get up and running, and provides links to more detailed documentation. But at the end of it you've only made it as far as the "hello world" of embedded systems: a blinking led.
The requirements are simple; we need:
- Code to configure and manage the USB port so that it acts as a (single axis) HID joystick.
- Code to measure the voltage from the potentiometer using the microcontroller's ADC (analog to digital converter), and supply it to the joystick interface.
But doing this from first principles would be a mammoth task. The reference manual for the microcontroller is 1100 pages, with the ADC documentation on pages 215-254, and the USB port on pages 622-652. And even if you studied that, you'd still need to read and understand large chunks of the USB specification to use it. The lesson here is that, even if the embedded computer is tiny, it's still highly complex and configurable, and just as on larger computers, we will need to leverage the libraries and code of others if we want to get things working in a reasonable amount of time.
The good news is that the embedded rust community is vibrant and has released a large collection of tools and libraries. However, it's a small community, and there are only limited examples and tutorials to act as guides. And this is one place where rust's programming power and expressiveness makes life harder for learners. In order to work with a wide range of microcontroller architectures, CPUs, and physical boards, the open source libraries are often quite abstract, and need to be composed from collections of abstract APIs and concrete implementations.
After research and trial and error I established that the following rust crates would be needed:
- usb-device - abstract API for USB interfaces
- embedded-hal - Hardware Abstraction Layer for ADCs (and many other types of hardware)
- stm32f1xx-hal - implementations of the above abstractions for the CPU on the blue pill
- usbd-hid - protocol for HID USB devices
with the overall project cargo dependencies being:
[dependencies]
stm32f1xx-hal = { version = "0.5.2", features = ["rt", "stm32-usbd", "stm32f103" ] }
cortex-m = "0.6"
cortex-m-rt = { version = "0.6.8", features = ["device"] }
panic-semihosting = "0.5.2"
embedded-hal = "0.2.4"
usb-device = "0.2.5"
usbd-serial = "0.1.0"
usbd-hid = "0.4.4"
My development strategy was somewhat simplistic - I started with the working blinking led demo, and mutated it by adding features one at a time, often cutting and pasting example code from elsewhere, and then incrementally changing it to what I needed. Whilst unsophisticated, this approach was quicker and easier than gaining a deep understanding of each of the libraries I used. A blocker, though, was that I was unable to find an example of the usbd-hid library being used to implement a joystick. This meant that I had to gain sufficient understanding of the USB HID specification to write the descriptor for a joystick.
Ultimately the entire code is currently only 160 lines of rust. But it was quite an effort to get there!
More Details
This project is not a design that I expect others to duplicate - there are several changes I'd make if I were to produce a second device. But in the interest of homebrew collaboration, the code and 3D models are published in the project's github repo.
Future plans
I was expecting to have to implement some calibration in the software, as the potentiometer only turns through a fraction of the total rotational range. And centering the pedals doesn't result in an exactly mid range value from the ADC. But it turns out that the standard PC joystick driver calibration deals with both these issues. I'd still like to sort this out in the embedded code, and not rely on the driver.
Also, I'd like to make an enclosure for the electronics with some additional desktop slider controls for glider trim, flaps, and airbrakes. But the setup works well enough to get some simulated flight time - and right now that's my priority.
This is the third post in a series where we use ADL to build a multi-language system with consistent types. Previously, we specified the API in ADL and implemented a conforming server for it in haskell.
Here we will implement a statically typed client for the API in typescript.
I think that typescript is presently a sweet spot for web development: it has a decent static type system; it integrates trivially with the rest of the javascript ecosystem; and it has achieved mainstream acceptance. Using ADL to ensure consistent types between the server and a typescript web application greatly boosts developer productivity, especially over time as the API grows.
Our tools
In this post we will focus on a client library for the API, so our external dependencies will be limited. Later, we will create a full web application.
We'll keep our code small and leverage the typescript ecosystem, making use of just a few external dependencies. At runtime:
- node-fetch is used so that we can make API calls from the node VM (as well as the browser).
- base64-js is required by the ADL typescript runtime.
The code structure
For reference, the project code structure is as below. There are also the usual files to support typescript and yarn/npm.
| File | Description |
|---|---|
| `messageboard-api/adl/*` | the ADL definitions |
| `messageboard-api/scripts/generate-adl.sh` | script to generate code from ADL |
| `messageboard-api/typescript/src/adl/*` | typescript code generated from the ADL |
| `messageboard-api/typescript/src/service/service.ts` | the service implementation |
| `messageboard-api/typescript/src/service/http.ts` | abstraction for http communications |
An http abstraction
We want to be able to make typed API requests from both the browser and from a nodejs VM. However, the underlying machinery for making http requests differs between those environments. Hence we will build our typed API atop a trivial http abstraction:
export interface HttpFetch {
fetch(request: HttpRequest): Promise<HttpResponse>;
}
export interface HttpHeaders {
[index: string]: string;
}
export interface HttpRequest {
url: string;
headers: HttpHeaders;
method: "get" | "put" | "post";
body?: string;
}
export interface HttpResponse {
status: number;
statusText: string;
ok: boolean;
text(): Promise<string>;
json(): Promise<{} | null>;
}
The two implementations of this are node-http.ts and browser-http.ts.
Request types and the Api interface
Referring back to the original API design, there are two distinct types of requests: those that are public and don't require an auth token, and the authenticated requests that do. A public request in the ADL of type HttpPost<I,O> will be mapped to a ReqFn<I,O> in typescript:
export type ReqFn<I, O> = (req: I) => Promise<O>;
whereas an authenticated request in the ADL of type HttpPost<I,O> will be mapped to an AuthReqFn<I,O> in typescript:
export type AuthReqFn<I, O> = (authToken: string, req: I) => Promise<O>;
Given these types, our service matching the ADL type Api will meet this typescript interface:
import * as API from "../adl/api";
interface Api {
login: ReqFn<API.LoginReq, API.LoginResp>;
ping: ReqFn<Empty, Empty>;
newMessage: AuthReqFn<API.NewMessageReq,Empty>;
recentMessages: AuthReqFn<API.RecentMessagesReq,API.Message[]>;
createUser: AuthReqFn<API.CreateUserReq,API.CreateUserResp>;
};
The typescript code generated from the ADL contains sufficient metadata to derive either the ReqFn<> or AuthReqFn<> without hand-written code. As a concrete example, consider the recentMessages ADL endpoint definition:
HttpPost<RecentMessagesReq,Vector<Message>> recentMessages = {
"path" : "/recent-messages",
"security" : "token"
};
The typescript function that implements this will have type:
AuthReqFn<API.RecentMessagesReq,API.Message[]>
and needs to:
- Serialise the value of type `RecentMessagesReq` to json
- Make an http post request to the `/recent-messages` path, with the json body and the provided auth token in the `Authorization` header
- Wait for the response
- Deserialise the json response to a value of type `Message[]` and return it as the result of the promise.
We need equivalent logic for every authenticated request. The public requests are almost the same, leaving out the auth token and header.
In our typescript API client, we put the code for this abstracted request logic in the ServiceBase class:
import { HttpFetch, HttpRequest } from "./http";
import * as ADL from "../adl/runtime/adl";
import { HttpPost } from "../adl/types";
export class ServiceBase {
constructor(
private readonly http: HttpFetch,
private readonly baseUrl: string,
private readonly resolver: ADL.DeclResolver,
) {
}
mkPostFn<I, O>(rtype: HttpPost<I, O>): ReqFn<I, O> {...}
mkAuthPostFn<I, O>(rtype: HttpPost<I, O>): AuthReqFn<I, O> {...}
};
This class constructor needs the request abstraction http, the baseUrl to which requests will be made, and also the ADL resolver. A DeclResolver provides access to metadata for all ADL declarations. The class provides two member functions for constructing ReqFn or AuthReqFn values from ADL API endpoint definitions. The implementation of these two functions is straightforward.
The implementation
Given these functions in the ServiceBase class, the implementation of our client is straightforward. The entire code is:
import { HttpFetch } from "./http";
import * as ADL from "../adl/runtime/adl";
import * as API from "../adl/api";
import { AuthReqFn, ReqFn, ServiceBase } from "./service-base";
import { Jwt, Empty } from "../adl/types";
const api = API.makeApi({});
// Implements typed access to the authenticated API endpoints
export class Service extends ServiceBase {
constructor(
http: HttpFetch,
baseUrl: string,
resolver: ADL.DeclResolver,
) {
super(http, baseUrl, resolver);
}
login: ReqFn<API.LoginReq, API.LoginResp> = this.mkPostFn(api.login);
ping: ReqFn<Empty, Empty> = this.mkPostFn(api.ping);
newMessage: AuthReqFn<API.NewMessageReq,Empty> = this.mkAuthPostFn(api.newMessage);
recentMessages: AuthReqFn<API.RecentMessagesReq,API.Message[]> = this.mkAuthPostFn(api.recentMessages);
createUser: AuthReqFn<API.CreateUserReq,API.CreateUserResp> = this.mkAuthPostFn(api.createUser);
};
If a new endpoint is added to the API, then just a single line needs to be added to this implementation. And the end-to-end usage of ADL ensures that all of the types are consistent and compile-time checked, from the server through to the client.
Testing
First start the server, as per the previous post:
$ cd messageboard-api/haskell
$ stack run messageboard-server server-config.yaml
spock is running on port 8080
Then we can write a simple typescript script to exercise our API from nodejs:
After some imports:
import {Service} from './service/service';
import {NodeHttp} from './service/node-http';
import {RESOLVER} from './adl/resolver';
import * as API from "./adl/api";
we instantiate the service client:
const http = new NodeHttp();
const service = new Service(http, "http://localhost:8080", RESOLVER);
and call the public ping endpoint:
await service.ping({});
Logging in is also a public method, but on success returns a token so that we can subsequently call authenticated methods:
const resp = await service.login({
email: "admin@test.com",
password: "xyzzy",
});
assert(resp.kind == 'success');
const adminToken = resp.value;
Hence, as admin, we can post some messages:
await service.newMessage(adminToken, {body: "Hello message board!"});
await service.newMessage(adminToken, {body: "It's quiet around here!"});
The service-tests.ts script exercises the API more fully. You can run it directly using the ts-node command.
Summing Up
We now have end-to-end type safety between our server and our client, despite the fact they are written in different languages. This is a big step forward in developer productivity. For example, one can extend or refactor the API using the same approach one would in any strongly statically typed environment: change it, and then be guided by the compiler errors to find and fix affected server and browser code.
I'm unsure what posts will follow in this series... I may look at:
- implementing the server in rust or typescript
- using ADL to define a persistence layer behind the server
- using this typescript API client in a react application
Feel free to post questions and comments as issues on the project repo.
This is the second post in a series where we use ADL to build a multi-language system with consistent types. In the first post we wrote the specification for the API. In this post we will implement a server for the API in haskell. This post presents key snippets of the server code - follow the links to the source code repo to see these in context.
Our tools
We'll keep our code small and leverage the haskell ecosystem by making use of the following libraries:
- The Spock web framework
- Data.Password for secure password management
- Web.JWT for Json Web Token functions
The code structure
For reference, the project code structure is as below. There are also the usual files to support stack and cabal.
| File | Description |
|---|---|
| `messageboard-api/adl/*` | the ADL definitions |
| `messageboard-api/scripts/generate-adl.sh` | script to generate code from ADL |
| `messageboard-api/haskell/src/ADL/*` | haskell code generated from the ADL |
| `messageboard-api/haskell/src/Main.hs` | startup and config parsing |
| `messageboard-api/haskell/src/Server.hs` | the server implementation |
| `messageboard-api/haskell/src/Utils.hs` | server helper functions |
| `messageboard-api/haskell/server-config.yaml` | a server config file for testing |
Configuration and scaffolding
There's not much to do here. Our server main loads configuration, creates initial state, and launches spock. As described previously, by defining our configuration in ADL:
struct ServerConfig {
/// The port which accepts http connections
Int32 port = 8080;
/// The secret used to sign the server's json web tokens
String jwtSecret;
};
we can use the ADL generated haskell code to validate and parse a YAML config file into a well typed haskell value.
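The generated haskell for this struct looks roughly like the following (a sketch - the field names assume an "sc_" HaskellFieldPrefix annotation, and the actual generated module also includes an AdlValue instance):

data ServerConfig = ServerConfig
  { sc_port :: Data.Int.Int32
  , sc_jwtSecret :: T.Text
  }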
Loading the configuration is really the only point of interest in the scaffolding. After that, we just have to create our initial application state, and then launch spock:
main :: IO ()
main = do
args <- getArgs
case args of
[configPath] -> do
eConfig <- adlFromYamlFile configPath
case eConfig of
(Left emsg) -> exitWithError (T.unpack emsg)
(Right config) -> startServer config
_ -> exitWithError "Usage: server <config.yaml>"
startServer :: ServerConfig -> IO ()
startServer sc = do
state <- initAppState sc
spockCfg <- defaultSpockCfg EmptySession PCNoDatabase state
runSpock (fromIntegral (sc_port sc)) (spock spockCfg serverApp)
(see Main.hs)
Our server structure
We are using the ADL API definition discussed in the previous post. For the purpose of this example, we will keep the application state in server memory and use haskell STM to manage concurrent access. (In a future post I'll show how we can implement a persistence layer that leverages ADL to define the persisted data model). Our application needs to maintain a list of the users allowed to log in, and the messages that have been sent. Here's the core state declaration:
data User = User {
u_email :: T.Text,
u_hashedPassword :: T.Text,
u_isAdmin :: Bool
}
data MyAppState = MyAppState {
mas_serverConfig :: ServerConfig,
mas_users:: TVar [User], -- the users that can login
mas_messages:: TVar [API.Message] -- the messages that have been posted
}
(see Server.hs)
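initAppState isn't shown in the post; a plausible sketch, using newTVarIO from Control.Concurrent.STM (the initial test user used in the demos below is elided here), would be:

initAppState :: ServerConfig -> IO MyAppState
initAppState sc = do
  users <- newTVarIO []
  messages <- newTVarIO []
  return (MyAppState sc users messages)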
Our spock endpoint handlers will have a somewhat intimidating return type:
type MyHandler o = ActionCtxT MyContext (WebStateM () MySession MyAppState) o
I recommend reading the spock documentation to understand this in detail, but in the context of this post, it's enough to know that MyHandler is a Monad within which one can
- use `liftIO` to run `IO` actions
- use `getState` to access the `MyAppState` value
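For example, a trivial handler exercising both capabilities might look like this (a sketch - the real handlers live in Server.hs):

handlePing :: Empty -> MyHandler Empty
handlePing req = do
  state <- getState
  liftIO (putStrLn ("ping on port " ++ show (sc_port (mas_serverConfig state))))
  return req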
Let's delve into the details of the login API endpoint. It has the following ADL definition:
HttpPost<LoginReq,LoginResp> login = {
"path" : "/login",
"security" : "public"
};
struct LoginReq {
Email email;
String password;
};
union LoginResp {
Jwt success;
Void failure;
};
which, thanks to the ADL compiler, results in haskell definitions for LoginReq, LoginResp, and the http request metadata.
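For instance, the generated haskell for the LoginResp union is roughly (a sketch - the constructor naming is assumed, and the derived serialization instances are omitted):

data LoginResp
  = LoginResp_success Jwt
  | LoginResp_failure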
So our login handler will have the following signature:
handleLogin :: API.LoginReq -> MyHandler API.LoginResp
We will write a helper function adlPost that, given the appropriate HttpPost<I,O> metadata, connects our handler to the spock server. By "connects" I mean that it will:
- route post requests with the declared path
- check authentication
- deserialize and validate the post request body into the appropriate `I` value
- call our handler implementation
- serialize the `O` value, and send it as the post response body.
The adlPost helper function will have the following signature:
adlPost :: (AdlValue i, AdlValue o)
=> HttpPost i o
-> (i -> MyHandler o)
-> SpockCtxM ctx conn sess MyAppState ()
(The actual implementation will have a slightly more general type to avoid dependence on MyAppState - see below).
This helper function makes implementing the spock API very easy. Our spock server is implemented simply by connecting each handler:
serverApp :: SpockM () MySession MyAppState ()
serverApp = do
let api = API.mkApi
adlPost (API.api_login api) handleLogin
adlPost (API.api_newMessage api) handleNewMessage
adlPost (API.api_recentMessages api) handleRecentMessages
adlPost (API.api_createUser api) handleCreateUser
adlPost (API.api_ping api) handlePing
(see Server.hs)
with each handler having the expected, strongly typed signature:
handleLogin :: API.LoginReq -> MyHandler API.LoginResp
handleNewMessage :: API.NewMessageReq -> MyHandler Empty
handleRecentMessages :: API.RecentMessagesReq -> MyHandler [API.Message]
handleCreateUser :: API.CreateUserReq -> MyHandler API.CreateUserResp
Implementing adlPost
As described above, the adlPost function will deal with the endpoint routing, authentication, validation and serialization, ie pretty much all of the boilerplate code typically required for an endpoint. Whilst it has quite a lot to do, it's relatively concise - let's show the code in full:
-- | Add a spock route implementing an http post request, with the specification for
-- the request supplied as a value of type HttpPost.
--
-- Assuming a request body of type i, and a response body of type o, the resulting
-- handler implements JWT based authorization checks, and request and response parsing
-- and serialization.
adlPost :: (AdlValue i, AdlValue o, HasJwtSecret st)
=> HttpPost i o
-> (i -> ActionCtxT (Maybe JWTClaimsSet) (WebStateM conn sess st) o)
-> SpockCtxM ctx conn sess st ()
adlPost postmeta handler = prehook checkAuth $ post path runRequest
where
path = fromString (T.unpack (hp_path postmeta))
checkAuth = do
jwtSecret <- getJwtSecret <$> getState
case hp_security postmeta of
HS_public -> return Nothing
HS_token -> Just <$> getVerifiedJwtClaims jwtSecret
HS_adminToken -> do
claims <- getVerifiedJwtClaims jwtSecret
when (not (isAdmin claims)) $ do
error401 "needs admin"
return (Just claims)
runRequest = do
mjv <- jsonBody
case mjv of
Nothing -> error400 "json body not well formed"
(Just jv) -> do
let pv = runJsonParser jsonParser [] jv
case decodeAdlParseResult " from post body " pv of
Left e -> error400 e
Right i -> do
o <- handler i
json (adlToJson o)
(see Utils.hs)
It takes two parameters: postmeta is metadata describing the post request, and handler is the application handler function. The request and response bodies (types i and o) must be ADL values (which they will be, given that the postmeta value was generated by the ADL compiler). Our type signature is generalized from that shown previously, in that it can work with any spock state (type st) provided that we have a means of extracting a jwt secret from that state. This secret is needed to validate JWTs and hence check authorization.
It returns a monadic value of type SpockCtxM, which we used above to actually create the spock handler.
adlPost works in two phases - it runs checkAuth as a spock prehook, and then runs the request as a spock post action.
checkAuth performs case analysis to ensure that the incoming request meets the security requirements for the endpoint as per the api spec. If the endpoint is public there is no check to perform. If the endpoint requires a token, we verify that the request has a correctly signed Json Web Token. If the endpoint requires an admin token, we also verify that the valid JWT has an isAdmin claim. The prehook returns the JWT, which hence becomes the spock request context. This context is accessible in request handlers.
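The isAdmin helper isn't shown here; a plausible sketch, assuming a recent jwt package (>= 0.8, where unregisteredClaims returns a ClaimsMap wrapping a Data.Map) and OverloadedStrings:

import qualified Data.Aeson as JSON
import qualified Data.Map as Map
import qualified Web.JWT as JWT

-- Check for an "admin": true claim, matching the tokens shown in the testing section
isAdmin :: JWT.JWTClaimsSet -> Bool
isAdmin cs =
  case Map.lookup "admin" (JWT.unClaimsMap (JWT.unregisteredClaims cs)) of
    Just (JSON.Bool b) -> b
    _ -> False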
Assuming that we pass the authorization checks, runRequest
- extracts the post request body as json
- parses the json into a value of type `i`
- calls the application handler
- serializes the result of type `o` into json
- sends that response back to the API client (with a response code of 200)
If either of the first two steps fails, a bad request (400) response code will result.
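The error400 and error401 helpers used above live in Utils.hs; plausible definitions (a sketch, built on spock's setStatus and text primitives):

import Network.HTTP.Types.Status (status400, status401)

error400 :: MonadIO m => T.Text -> ActionCtxT ctx m a
error400 msg = setStatus status400 >> text msg

error401 :: MonadIO m => T.Text -> ActionCtxT ctx m a
error401 msg = setStatus status401 >> text msg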
Whew! Quite a lot of explanatory text for a small function. But it's a tribute to haskell's expressiveness that we can write a function sufficiently abstract that it implements the API boilerplate for our whole API.
Implementing the application logic
Whilst the main goal for this post was to demonstrate ADL API definitions, let's complete the server by fleshing out the API application logic. We've got 4 methods to implement:
handleLogin :: API.LoginReq -> MyHandler API.LoginResp
The login endpoint needs to
- verify that a user with the given email address exists
- verify that the password supplied matches the stored scrypt hash
- construct a JWT for the user that embeds the email address and admin status
The JWT (JSON Web Token) is returned to the client, and is subsequently provided to the server as proof that a login has succeeded.
See Server.handleLogin for the implementation code.
handleNewMessage :: API.NewMessageReq -> MyHandler Empty
The new message endpoint simply accepts message text from the client, and appends it and some metadata to the message list in the server state. The implementation accesses the spock request context to recover the JWT (already validated by adlPost), in order to determine the email of the user posting the message.
See Server.handleNewMessage for the implementation code.
handleRecentMessages :: API.RecentMessagesReq -> MyHandler [API.Message]
This endpoint is trivial - the handler just needs to extract the requested number of messages from the application state, and return them to the client.
See Server.handleRecentMessages for the implementation code.
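As a sketch of how simple this is (using readTVarIO; the ADL-generated accessor name rmr_maxMessages is assumed - it depends on the HaskellFieldPrefix annotation used):

handleRecentMessages :: API.RecentMessagesReq -> MyHandler [API.Message]
handleRecentMessages req = do
  state <- getState
  msgs <- liftIO (readTVarIO (mas_messages state))
  return (take (fromIntegral (rmr_maxMessages req)) msgs)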
handleCreateUser :: API.CreateUserReq -> MyHandler API.CreateUserResp
In our application, only admin users are authorized to create new users, but that is specified in the API definition, and hence is checked before the handler is called. The handler must:
- verify that there is not an existing user with the requested email address, and if this is the case, indicate it to the client.
- hash the provided password, and add the new user to the application state.
See Server.handleCreateUser for the implementation code.
Testing
If you've checked out the project source code, you can build and run the server with stack:
$ cd messageboard-api/haskell
$ stack run messageboard-server server-config.yaml
spock is running on port 8080
Whilst we plan to build a strongly typed client for the API, we can test it now via curl. For demo purposes the initial app state includes a test user. Let's try issuing a post login request with an empty body:
$ curl http://localhost:8080/login -d '{}'
Unable to parse a value of type api.LoginReq from post body : expected field email at $
OK - the 400 error tells us what is wrong with our request. Let's fill it in correctly with the test user's details (as per the ADL LoginReq type):
$ curl http://localhost:8080/login -d '{
"email": "admin@test.com",
"password": "xyzzy"
}'
{"success":"eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJlbWFpbCI6ImFkbWluQHRlc3QuY29tIiwiYWRtaW4iOnRydWV9.1mZfzhRO_hubbFI2LNBj7wnYUwThTMlSfVaawenX33Y"}$
Success. We now have a JWT for future requests as the initial test user. Put it in a shell variable, and let's see if there are any messages:
$ JWT=...token...
$ curl http://localhost:8080/recent-messages -H "Authorization:Bearer $JWT" -d '{
"maxMessages": 10
}'
[]
No. So let's post a few:
$ curl http://localhost:8080/new-message -H "Authorization:Bearer $JWT" -d '{
"body": "First post!"
}'
{}
$ curl http://localhost:8080/new-message -H "Authorization:Bearer $JWT" -d '{
"body": "and a followup"
}'
{}
... and check that we can fetch them (using jq to tidy up the formatting):
$ curl -s http://localhost:8080/recent-messages -H "Authorization:Bearer $JWT" -d '{
"maxMessages": 10
}' | jq .
[
{
"body": "and a followup",
"postedAt": "2020-05-04T09:32:11.258139377",
"postedBy": "admin@test.com",
"id": "2"
},
{
"body": "First post!",
"postedAt": "2020-05-04T09:31:04.024827574",
"postedBy": "admin@test.com",
"id": "1"
}
]
Finally, let's create a new user, and exercise the API as that user:
$ curl -s http://localhost:8080/create-user -H "Authorization:Bearer $JWT" -d '{
"email": "user@test.com",
"password": "notmuchofapassword",
"isAdmin": false
}'
{"success":"2"}
$ curl http://localhost:8080/login -d '{
"email": "user@test.com",
"password": "notmuchofapassword"
}'
{"success":"eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJlbWFpbCI6InVzZXJAdGVzdC5jb20iLCJhZG1pbiI6ZmFsc2V9.48FYSck2FwaBwQgwhBIiQVH7ks5rmcvcPmSwoEpBZ6E"}
$ JWT2=...token...
$ curl http://localhost:8080/new-message -H "Authorization:Bearer $JWT2" -d '{
"body": "Greetings!"
}'
{}
$ curl -s http://localhost:8080/recent-messages -H "Authorization:Bearer $JWT2" -d '{
"maxMessages": 10
}' | jq .
[
{
"body": "Greetings!",
"postedAt": "2020-05-04T09:45:16.443301183",
"postedBy": "user@test.com",
"id": "3"
},
{
"body": "and a followup",
"postedAt": "2020-05-04T09:32:11.258139377",
"postedBy": "admin@test.com",
"id": "2"
},
{
"body": "First post!",
"postedAt": "2020-05-04T09:31:04.024827574",
"postedBy": "admin@test.com",
"id": "1"
}
]
Summing up
With only a small amount of code, we have implemented our API in haskell, and abstracted out all of the boilerplate code associated with:
- de/serialization
- validation
- authorization
leaving us to implement the application logic in a strongly typed framework. Hopefully the utility of using ADL to specify the API and associated data types is apparent. ADL's value increases with a more realistic project where:
- multiple languages are involved
- the API grows, with more endpoints and more complex data types
- the API evolves over time
In my next post, I will demonstrate how we can build a typescript client for this API.
Feel free to post questions and comments as issues on the project repo.
This post is the first of a series where I will demonstrate using the ADL system to specify an HTTP based API, and implement conforming servers and clients in different programming languages.
In this post, I will explore how ADL can be used to specify APIs, and do this for a simple application. The API will be small enough for demonstration purposes, but will include login, authorization, and basic application functions.
Future posts will implement servers for this API in haskell and rust, and an API client in typescript (for use in the browser). The ADL type definitions will "glue" the multi-language system together, ensuring consistent static types between languages. Hence ADL's mantra:
Consistent types everywhere!
Why not use ...?
In this post we are using ADL to define an API as one would with other API definition languages such as openapi, grpc and similar tools. ADL has some key benefits compared with such tools, including:
- parameterized types (aka generics)
- custom type mappings
- general purpose annotations
More importantly, ADL differs in that it is intended as a general purpose tool for data modelling. Here we are using it to specify an API, but it is also appropriate for other purposes (eg specifying relational data models, automatically generated forms, type checked configuration files, etc).
Our application and its API
Our sample application is somewhat of a cliche: a multi-user message board. It will have the following features:
- Users must login to access the application
- Once logged in, users can view recent messages and post new messages
- Certain users will have "admin" privileges and they are able to create new users.
Our API will be implemented conventionally: as JSON messages passed over HTTP. Given a specification of the API in ADL, the ADL compiler will be used to translate that specification into types and values in our programming languages of choice (here: haskell, rust and typescript). Then, in each of those programming languages we will write generic library code to interpret that specification and implement the boilerplate associated with serialization, validation, and authorization. We will be left to implement just the application logic itself.
ADL doesn't have any baked in knowledge of the HTTP protocol. So we must start by declaring a data type that captures our specification for an HTTP request. In our simplified API, all requests will be HTTP post requests. If one desired a more "RESTy" api then there would be similar definitions for the other HTTP methods.
// A post request with request body of type I, and response
// body of type O
struct HttpPost<I,O> {
String path;
HttpSecurity security;
TypeToken<I> reqType = null;
TypeToken<O> respType = null;
};
union HttpSecurity {
// The endpoint is publicly accessible
Void public;
// A JWT is required in a bearer authorization header
Void token;
// A JWT with an admin claim is required in a bearer authorization header
Void adminToken;
};
Let's pull this definition apart. For each API request we can make we need to specify:
- the type of the request body sent to the server: `I`
- the type of the response returned to the client: `O`
- the http path for this request
- the authorization rules for this endpoint.
As per the subsequent HttpSecurity definition, in our simple security model API endpoints can be public, or require a token (proving that a user has logged in), or requiring an admin token (proving that a user has logged in and has admin rights).
The HttpPost structure captures all this information as a runtime value which we will interpret with library code to implement all of the boilerplate for our endpoints. Hence we will need access to a runtime representation of the I and O types, using the ADL TypeToken<> primitive.
This all probably seems a bit abstract, so let's now use HttpPost to define our first endpoint:
struct Api {
HttpPost<LoginReq,LoginResp> login = {
"path" : "/login",
"security" : "public"
};
...
};
struct LoginReq {
Email email;
String password;
};
union LoginResp {
Jwt success;
Void failure;
};
type Jwt = String;
type Email = String;
Our runtime inspectable API will be a value of type Api. This is a struct, with a field for each request endpoint. We use the ADL defaulting mechanism to specify the values associated with each endpoint.
As you can see above, the login endpoint will accept a Json serialized value of type LoginReq, and return a LoginResp sum type value, with a Json Web Token on success. It's a public endpoint, so doesn't require authentication to call.
Let's flesh out the remaining API methods to complete our API definition:
struct Api {
/// Login to obtain an authorization token
HttpPost<LoginReq,LoginResp> login = {
"path" : "/login",
"security" : "public"
};
/// Retrieve recent messages posted to the server
HttpPost<RecentMessagesReq,Vector<Message>> recentMessages = {
"path" : "/recent-messages",
"security" : "token"
};
/// Post a new message
HttpPost<NewMessageReq,Empty> newMessage = {
"path" : "/new-message",
"security" : "token"
};
/// Create a new user, recording their hashed password
HttpPost<CreateUserReq,CreateUserResp> createUser = {
"path" : "/create-user",
"security" : "adminToken"
};
/// Trivial public method to test server liveness
HttpPost<Empty,Empty> ping = {
"path" : "/ping",
"security" : "public"
};
};
...
struct NewMessageReq {
String body;
};
struct RecentMessagesReq {
Int32 maxMessages;
};
struct CreateUserReq {
Email email;
Password password;
Bool isAdmin;
};
union CreateUserResp {
UserId success;
Void duplicateEmail;
};
struct Message {
String id;
Email postedBy;
TimeStamp postedAt;
String body;
};
Hopefully these methods are fairly self-explanatory.
The timbod7/adl-demo github repository will host the code for this blog post series. It currently contains
- the ADL definitions
- the script to do the code generation
- the generated haskell and typescript
Feel free to ask questions in this repo's issue tracker.
Next...
The API is now defined; my next post will implement a compliant server in haskell. My previous post on using ADL from haskell may be useful background reading.
In ADL, optionality in the data model is part of a value's type. One uses either the Nullable<T> primitive or the Maybe<T> type from the adl standard library. For example:
struct Person {
String name;
Nullable<String> phoneNumber;
};
In our model, every person has a name, but having a phone number is optional. But according to the ADL serialization specification, both fields must still be present in the serialized value. Hence {"name":"Tim"} is invalid. If Tim doesn't have a phone, you'd need to serialize as {"name":"Tim", "phoneNumber": null}.
If you want a field to be defaulted in the serialized form, you must provide a default value in the ADL type, ie:
struct Person {
String name;
Nullable<String> phoneNumber = null;
};
With this type, {"name":"Tim"} would be a valid value. (Note that defaults can be fully structured values, not just primitives)
This distinction is important, as it's often useful to have default values that are not optional. Consider when we need to extend Person with gender information. If we do it in this way:
struct Person {
String name;
Nullable<String> phoneNumber = null;
Gender gender = "unspecified";
};
union Gender {
Void female;
Void male;
Void unspecified;
};
then every pre-existing serialized Person value will still be valid, and will assume a gender value of unspecified.
Another use for defaults without optionality is where we have large data types with many field values, most of which are defaulted. As a concrete example, consider a configuration for an application web server:
struct MyAppServerConfig {
DbConnectionConfig dbConnection;
Word16 httpPort = 8080;
LogLevel logLevel = "error";
};
struct DbConnectionConfig {
String host;
Word16 port = 5432;
String dbName = "myapp";
String username;
String password;
Word16 connectionPoolMinSize = 4;
Word16 connectionPoolMaxSize = 16;
};
In this case one only needs to provide values for the db host, username and password and can rely on the defaults for the other fields:
{
"dbConnection" : {
"host": "localhost",
"username": "test",
"password": "test"
}
}
Note that defaults are not only used in deserialization. In the ADL language backends, only the non-defaulted fields need to be specified when constructing an in-memory ADL value.
On Maybe<T> vs Nullable<T>
As mentioned above, ADL has two parameterized types representing optionality: the Nullable<T> primitive or the Maybe<T> type from the adl standard library.
Originally ADL didn't have the Nullable primitive, relying on Maybe<T> from the ADL standard library, with the expected definition as an ADL union (ie a sum type). A consequence of Maybe<T> being defined in ADL that way is that the serialised json is as it would be for any other union: "nothing" or {"just": t}. I was fine with this, but some users strongly prefer to see null or t in the json. So the Nullable<T> primitive was added, which serializes in the way that people expect.
Note that Nullable<T> is less expressive than Maybe<T> in that you can't usefully nest it. Maybe<Maybe<T>> is semantically useful, whereas Nullable<Nullable<T>> is not, as the serialized representation can't represent all of the type's values.
Hence Nullable<T> should only be used when T does not permit a serialized null. (TODO: make this a type check in the ADL compiler).
I've moved this blog to a new site.
This is now hosted at github pages, and uses Chris Penner's slick website generator. slick is a
a static site generator written and configured using Haskell... (it) provides a small set of tools and combinators for building static websites on top of the Shake build system
It's worked out well for me so far.
Over the past few years I've used ADL to define:
- http apis (in lieu of openapi/swagger)
- database schemas (in lieu of sql)
- configuration files
- user interface forms
and then as the base for code generation in haskell, java, rust, c++ and typescript.
But, because ADL has a variety of uses, the path to getting started can be unclear. As a small stand-alone example, this post shows how ADL can be used to specify the syntax of a yaml configuration file, and automate its parsing into haskell.
To follow along with this project, you'll need the ADL compiler installed and on your shell PATH.
We'll assume that our project is some sort of server which will load a yaml configuration at startup. Jumping right in, we can specify the config schema in a file adl/config.adl:
module config {
struct ServerConfig {
Int32 port;
Protocol protocol = "http";
LogLevel logLevel = "info";
};
union Protocol {
Void http;
SslConfiguration https;
};
struct SslConfiguration {
FilePath certificate;
FilePath certificateKey;
};
type FilePath = String;
union LogLevel {
Void error;
Void warn;
Void info;
Void debug;
Void trace;
};
};
Being minimal, our ServerConfig has a port, some protocol information, and a logging level. The port has no default value, so is required in the configuration. The other fields are optional, with the given defaults being used in their absence. Note the protocol field is a union (aka a sum type). If it is http then no other information is required. However, if the protocol is https then paths for ssl certificate details are required. The full syntax and meaning of ADL is in the language documentation.
We've specified the data type for the server configuration, and we could now run the compiler to generate the corresponding haskell types and support code. The compiler does its best to generate idiomatic code in the target languages, but additional language specific information can improve the generated code. ADL annotations are used for this. Such annotations can be included in-line in the adl source code, though this gets a little noisy when annotations are included for multiple targets - it gets hard to see the core type definitions themselves in a sea of annotations.
Hence ADL has a standard pattern for language specific annotations: such annotations for an ADL file x.adl are kept in the file x.adl-lang. Thus the adl compiler, when reading config.adl to generate haskell code, will look for and include the adl file config.adl-hs for haskell related annotations.
In this example, config.adl-hs is straightforward:
module config {
import adlc.config.haskell.*;
annotation ServerConfig HaskellFieldPrefix "sc_";
annotation Protocol HaskellFieldPrefix "p_";
annotation SslConfiguration HaskellFieldPrefix "ssl_";
annotation LogLevel HaskellFieldPrefix "log_";
};
Recent language extensions notwithstanding, haskell's record system is somewhat primitive (try a google search for "haskell record problem"). A key issue is that record field names need to be unique in their containing module. To ensure this, by default, the haskell ADL code generator prefixes each field with its type name. Hence the ServerConfig declaration would generate:
data ServerConfig = ServerConfig
{ serverConfig_port :: Data.Int.Int32
, serverConfig_protocol :: Protocol
, serverConfig_logLevel :: LogLevel
}
Whilst this guarantees that the generated code will compile, those field names are unwieldy. Hence the HaskellFieldPrefix annotation allows a custom (or no) prefix to be used. With the above config.adl-hs annotations, we get a more friendly:
data ServerConfig = ServerConfig
{ sc_port :: Data.Int.Int32
, sc_protocol :: Protocol
, sc_logLevel :: LogLevel
}
With the ADL written it's time to run the ADL compiler to generate the haskell code:
adlc haskell \
--outputdir src \
--package ADL \
--rtpackage ADL.Core \
--include-rt \
--searchdir adl \
adl/*.adl
The --include-rt and --rtpackage arguments tell the code generator to include the runtime support files, making the generated code self contained. See the haskell backend documentation for details.
I generally check the generated code into the source repository. Whilst this approach has some drawbacks, it has benefits too:
- you don't need the ADL compiler installed to build the package
- you can build with your off-the shelf standard build system (cabal, cargo, tsc etc)
The main downside is that changing the source ADL requires explicitly rerunning the ADL compiler. In most projects I have a scripts/generate-adl.sh script to automate this step. Of course, if your build system is up to it, you may wish to generate the ADL derived code on demand.
We can now write some haskell code!
ADL's core serialization schema is json (an alternate binary scheme is planned). In the generated haskell, every ADL value is an instance of the AdlValue type class, and then the library has helper functions to automate deserialization:
adlFromByteString :: AdlValue a => LBS.ByteString -> ParseResult a
adlFromJsonFile :: AdlValue a => FilePath -> IO (ParseResult a)
decodeAdlParseResult :: AdlValue a => T.Text -> ParseResult a -> Either T.Text a
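For example, loading a json config might look like this (a sketch, assuming the ServerConfig type defined above):

loadConfig :: FilePath -> IO (Either T.Text ServerConfig)
loadConfig path = do
  pr <- adlFromJsonFile path
  return (decodeAdlParseResult (" from " <> T.pack path) pr)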
If one wished to have a configuration file in json format, adlFromJsonFile and decodeAdlParseResult are sufficient to read and parse such a file. But json is less than ideal for human written configuration, due to its lack of support for comments, and its rigid syntax. The ADL core doesn't have yaml support, but conveniently the haskell Data.Yaml package can parse yaml into json values, which the ADL core can then parse into ADL values. This is the approach we will take, and we write a yaml specific function to load an arbitrary ADL value:
import qualified Data.ByteString.Lazy as LBS
import qualified Data.Text as T
import qualified Data.Yaml as Y
import ADL.Core(runJsonParser, decodeAdlParseResult, AdlValue(..), ParseResult(..))
adlFromYamlFile :: AdlValue a => FilePath -> IO (Either T.Text a)
adlFromYamlFile file = (decodeAdlParseResult from . adlFromYamlByteString) <$> (LBS.readFile file)
where
adlFromYamlByteString :: (AdlValue a) => LBS.ByteString -> (ParseResult a)
adlFromYamlByteString lbs = case Y.decodeEither' (LBS.toStrict lbs) of
(Left e) -> ParseFailure ("Invalid yaml:" <> T.pack (Y.prettyPrintParseException e)) []
(Right jv) -> runJsonParser jsonParser [] jv
from = " from " <> T.pack file
Hopefully this is fairly self explanatory. It:
- reads the input file contents as a bytestring
- parses the yaml into an in-memory json value
- parses the in memory json value into an adl value
whilst turning parse failures at either level into user friendly error messages.
With this helper function, the scaffolding for our server process is straightforward. We read an environment variable for the configuration file path, use the adlFromYamlFile written previously, and launch our (dummy) server code.
main :: IO ()
main = do
let configEnvVar = "CONFIG_PATH"
mEnvPath <- lookupEnv configEnvVar
case mEnvPath of
Nothing -> exitWithError (configEnvVar <> " not set in environment")
(Just envPath) -> do
eConfig <- adlFromYamlFile envPath
case eConfig of
(Left emsg) -> exitWithError (T.unpack emsg)
(Right config) -> startServer config
exitWithError :: String -> IO ()
exitWithError emsg = do
hPutStrLn stderr emsg
exitFailure
startServer :: ServerConfig -> IO ()
startServer sc = do
case sc_protocol sc of
P_http -> putStrLn ("Starting http server on port " ++ (show (sc_port sc)))
P_https{} -> putStrLn ("Starting https server on port " ++ (show (sc_port sc)))
threadDelay 1000000000
The simplest configuration yaml specifies just the port, relying on the ADL defaults for other fields:
port: 8080
An example that overrides the protocol, and hence must provide additional information:
port: 8443
protocol:
https:
certificate: /tmp/certificate.crt
certificateKey: /tmp/certificate.key
The ADL json/yaml serialization schema is straightforward. One point of note is that ADL unions (like Protocol in the example) are serialized as single element objects. See the serialisation documentation for details.
The parser provides helpful error messages. In the above example config, if you leave out the last line and fail to set the SSL key, the error is:
Unable to parse a value of type config.ServerConfig from demo-server-example3.yaml:
expected field certificateKey at protocol.https
Hopefully this post has given a simple but useful demonstration of ADL usage from haskell. It's really only a starting point - the ADL system's value increases dramatically when used to ensure consist types between systems written in multiple languages.
The complete code for this demonstration, include build and dependency configuration can be found in its github repo.
When designing systems, I place great value in applying the "make illegal states unrepresentable" principle[^1]. Using ADTs to more accurately model data is an excellent step in this direction. However, it's a burden to do in languages like java that lack support for sum types.
Even for regular product types (ie records of fields) java can be tedious. Defining a record of a few fields should really only take a corresponding few lines of code. Yet for a useful value type in java one will generally need to write: constructors, accessors, a comparison function, a hash implementation, serialisation logic etc. It's common in the java world to use IDEs to automatically generate this kind of boilerplate, but subtle bugs can creep in over time as the once generated code isn't manually updated to reflect subsequent changes in the data model.
Hence, at Helix we now often use my ADL language to define data types, and generate the corresponding java code from them. As a tiny example, these adl definitions (see complete file here):
struct Rectangle
{
Double width;
Double height;
};
union Picture
{
Circle circle;
Rectangle rectangle;
Vector<Picture> composed;
Translated<Picture> translated;
};
result in the corresponding Rectangle.java and Picture.java. These two definitions alone correspond to 280 lines of java code (that you really don't want to write and maintain). As can be seen in the Translated<> type, parametric polymorphism is supported.
I find that being able to define data types concisely encourages me to build more accurate data models, resulting in systems that are more robust and better reflect the problem domain. And ADL's multi language support (java, haskell, typescript) allows us to easily serialize and transfer the corresponding data values between our java services, and our typescript web and mobile UIs.
At the time of writing the voteflux software is incomplete, and there is not yet a rigorous specification for how the voting system will work. The voteflux website explains the system at a high level, but leaves questions unanswered. Discussions in the group's slack forums fill in some details, and the party's founders have answered some questions of my own.
In an effort to improve my own understanding of the voteflux ideas, and provide a basis for discussion with others, I've attempted to write an executable specification for the system in Haskell. All of the key logic is in Flux.hs. This was a worthwhile exercise - having to write concrete types and corresponding code made me consider many questions which weren't apparent when thinking less rigorously. Going forward, I intend to build some simulations based upon this code.
Note that this code has no endorsement from the voteflux party - it represents my own efforts to understand the proposed system. But I like their plans, and hope they do well in the election.
Most of the information below is now out of date. The stack build tool has made everything much simpler. Getting started is just a case of installing with
... and then leaving the management of ghc installations up to stack.
Haskell on Yosemite (OSX 10.10)
Nearly all my development has been done under linux. Only occasionally have I worked under osx. This is all to change - osx is to be my primary development platform. In the past, my experiences with ghc on osx have been a little fraught. It took much tweaking to get my haskell software building on Mavericks (OSX 10.9). Problems I had included:
- issues with ghc 7.6 and the xcode c preprocessor
- manual management of the c dependencies of various packages, and then getting cabal to find them
- getting gtk to build
etc, etc.
I'm pleased to discover that things have improved immensely. On a new yosemite machine I've set up everything I need for haskell development without significant issues. A combination of 3 things work together:
- The "ghcformacosx" minimal distribution
- The brew OSX package manager
- Cabal sandboxes
What follows is an overview of the steps I took to get up and running in haskell on osx 10.10.
1. Install the xcode command line tools
Everything (including ghc) seems to depend on these.
2. Install Brew
This is quick and easy, following the instructions on the brew homepage.
3. Install ghcformacosx
"ghcformacosx" is a "drag and drop" installation of ghc 7.8.4 and cabal 1.22.0.0. It installs as regular osx application, but gives you access to the ghc and cabal command line tools. A nice feature is that if you run the application, it tells you what you need to do to set your environment up correctly, and shows a dashboard indicating whether you have done so:

Once this is done you need to bring the local package database up to date:
4. Use brew to install some key tools and libraries
One of my libraries has pcre-light as a transitive dependency. It needs a corresponding c library. Also cairo is the fastest rendering backend for my haskell charting library, and gtk is necessary if you want to show charts in windows. Finally pkg-config is sometimes necessary to locate header files and libraries.
brew install pkg-config
brew install pcre
# gtk and cairo need xquartz
brew tap Caskroom/cask
brew install Caskroom/cask/xquartz
# later steps in the build processes need to find libraries
# like xcb-shm via package config. Tell pkg-config
# where they are.
export PKG_CONFIG_PATH=/opt/X11/lib/pkgconfig
brew install cairo
brew install gtk

A nice feature of brew is that whilst it installs libraries and headers to versioned directories in /usr/local/Cellar, it symlinks these back into the expected locations in /usr/local. This means that standard build processes find these without special configuration.
5. Setup some favorite command line tools
I use pandoc and ghc-mod a lot, and still need darcs sometimes. Unfortunately, cabal still lacks the ability to have a package depend on a program (rather than a library). Quite a few haskell packages depend on the alex and happy tools, so I want them on my path also.
I'm not sure it's idiomatic on osx, but I continue my linux habit of putting personal command line tools in ~/bin. I like to build all of these tools in a single cabal sandbox, and then link them into ~/bin. Hence, assuming ~/bin is on my path:
cd ~/bin
mkdir hackage
(cd hackage && cabal sandbox init)
(cd hackage && cabal install alex happy)
ln -s hackage/.cabal-sandbox/bin/alex
ln -s hackage/.cabal-sandbox/bin/happy
(cd hackage && cabal install pandoc darcs ghc-mod)
ln -s hackage/.cabal-sandbox/bin/pandoc
ln -s hackage/.cabal-sandbox/bin/darcs
ln -s hackage/.cabal-sandbox/bin/ghc-mod

(In the sequence above I had to make sure that alex and happy were linked onto the PATH before building ghc-mod)
6. Build gtk2hs in its own sandbox
The hard work is already done by brew. We can build gtk2hs following the standard build instructions:
export PKG_CONFIG_PATH=/opt/X11/lib/pkgconfig
export PATH=.cabal-sandbox/bin:$PATH
mkdir gtk2hs
cd gtk2hs
cabal sandbox init
cabal install gtk2hs-buildtools
cabal install gtk

Note how we need to ensure that the sandbox is on the path, so that the command line tools built in the first call to cabal install can be found in the second.
Summary
All in all, this process was much smoother than before. Both ghcformacosx and brew are excellent pieces of work - kudos to their developers. ghc is, of course, as awesome as ever. When used with sandboxes cabal works well (despite the "cabal hell" reputation). However, having to manually resolve dependencies on build tools is tedious, I'd really like to see this cabal issue resolved.
Update [2015-03-01]
One issue cropped up after this post. It turns out that ghc-mod has some constraints on the combinations of ghc and cabal versions, and unfortunately the combination provided in ghcformacosx is not supported. I worked around this by installing an older version of cabal in ~/bin:
With the Chart library's lens-based API, a plot series is constructed like this:
sinusoid2 = plot_points_title .~ "fn(x)"
$ plot_points_values .~ mydata
$ plot_points_style . point_color .~ opaque red
$ def
This is much simpler and cleaner than the corresponding code using native record accessors, but it still has a certain amount of syntactic overhead.
I've added a simple state monad to the library to further clean up the syntax. The state of the monad is the value being constructed, allowing the use of the monadic lens operators. The above code sample becomes:
sinusoid2 = execEC $ do
plot_points_title .= "fn(x)"
plot_points_values .= mydata
plot_points_style . point_color .= opaque red
This may seem only a minor syntactic improvement, but it adds up over a typical chart definition.
A few other changes further reduce the clutter in charting code:
- A new Easy module that includes helper functions and key dependencies
- Simpler "toFile" functions in the rendering backends
- Automatic sequencing of colours for successive plots
All this means that a simple plot can now be a one liner:
import Graphics.Rendering.Chart.Easy
import Graphics.Rendering.Chart.Backend.Cairo
mydata :: [(Double,Double)]
mydata = ...
main = toFile def "test.png" $ plot $ points "lines" mydata
But this extends naturally to more complex charts. The code differences between the new stateful API versus the existing API can be seen in this example.
The stateful API is available in Chart v1.3. It is a thin layer over the existing API - both will continue to be available in the future.
With any group of beginners, and especially children, simple tooling is really important. Being able to run examples within minutes of turning on the computer makes a big difference. But running even the simplest of traditional toolchains requires at least a rudimentary understanding of:
- a text editor
- the file system
- a command line
- an interpreter/compiler
And there are platform issues here also - even when the language is platform independent, the other items will vary. It would be very easy to get bogged down in all this well before actually writing a program that does something interesting...
Hence I was excited several weeks ago when Chris announced the reimplementation of his codeworld environment. In a nutshell, it's a web site where:
1. you edit haskell code in your browser
2. it gets compiled to JavaScript on the remote server using ghcjs
3. the JavaScript runs back in the browser
and it comes with a beginner-friendly prelude focussed on creating pictures, animations, and simple games (no monads required!).
This was just in time for school holidays here in Sydney - my own children got to be my "guinea pig" students. Nick (aged 14) is in year 9 at school, whereas Sam (aged 12) is in year 7. At school they have covered simple algebra, number planes, and other math ripe to be used for something more fun than drill exercises! They have a younger brother Henry (aged 10), who has been observing with interest.
Our goal is to learn to draw pictures, then move on to animations, and, further down the track (if we get there) write some games. After a couple of 2 hour sessions, it has gone remarkably well.
So what have we done? Here's a short outline of our two sessions so far:
Session 1 (2.5 hours):
We discussed the nature of computers, programming languages, compilers.
We launched the codeworld environment, and played with the demos. We tried changing them, mostly by adjusting various constants, and found they broke in often entertaining ways.
We typed in a trivial 2 line program to draw a circle, and made it work. We observed how problems were reported in the log window.
We talked about what a function is, and looked at a few of the builtin functions:
solidCircle :: Number -> Picture
color :: Color -> Picture -> Picture
(&) :: Picture -> Picture -> Picture
... and looked at how they can be composed using haskell syntax.
Then we played!
After this, we introduced some extra functions:
solidRectangle :: Number -> Number -> Picture
translate :: Number -> Number -> Picture -> Picture
rotate :: Number -> Picture -> Picture
scale :: Number -> Number -> Picture -> Picture
which let us draw much more interesting stuff. The rest of this session was spent seeing what cool stuff we could draw with these 7 functions.
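To give a flavour, here's the sort of thing that's expressible with just those functions (a reconstruction in the same spirit, not the boys' actual code; the colour names are assumed from the standard palette):

face = translate (-1.5) 1 eye
     & translate 1.5 1 eye
     & translate 0 (-1.8) (color black (solidRectangle 3 0.3))
     & color yellow (solidCircle 4)
  where
    eye = color black (solidCircle 0.5)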
Nick programmed some abstract art:

Sam coded up a sheep:

That ended the session, though the boys found some unsupervised time on the computer the next day, when Nick built a castle:

and Sam did some virtual surfing:

Session 2 (2 hours):
In the second session, we started by talking about organising code for clarity and reuse.
The transformation functions introduced in the previous session caused some confusion when used in combination. We talked about how each primitive worked, and how they combined - we investigated the difference between rotating then translating versus translating then rotating.
The boys were keen to move on to animations. I thought we'd leave this for a few sessions, but their enthusiasm overruled. This required us to look at how to write our own functions for the first time. (In codeworld an animation is a function from time to a picture.) This is quite a big step, as we also needed at least a basic idea of scoping.
Nevertheless we battled on, and got some movement on the screen. It was soon discovered that rotations are the most interesting transform to animate, as you don't lose your picture elements off the screen as time goes to infinity!
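For example, a minimal spinning animation looks something like this (a reconstruction, not their actual code - in codeworld an animation is just a function from elapsed seconds to a Picture):

-- a bar rotating at 60 degrees per second
spinner t = rotate (60 * t) (solidRectangle 6 0.5)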
Nick and Sam needed more assistance here, but still managed to get some ideas working. I've only got single frames of their results. Sam produced his space race:

and Nick made a working clock (which tells the right time if you push the run button at 12 o'clock!):

In the next session we are going to have to look at numerical functions in a bit more detail in order to produce more types of animations. Time for some graph paper perhaps...
Summary
For a beta (alpha?) piece of software, relying on some fairly advanced and new technology, Codeworld works remarkably well. And Chris has plans for it - there's a long list of proposed enhancements in the github issue tracker, and a mailing list has just been created.
Right now the main issue is documentation. It works well with an already haskell-literate tutor. Others may want to wait for the documentation, course guides, etc to be written.
If you are a haskell enthusiast, give it a try!
Hence, I spent a little time learning about the cabal API, and wrote a short script that:
- reads several cabal files specified on the command line
- merges these into one overall set of dependencies
- displays the dependencies in such a way that inconsistent version constraints are obvious
Here's some example output:
$ runghc ~/repos/merge-cabal-deps/mergeCabalDeps.hs `find . -name '*.cabal'`
* loaded Chart-gtk-1.1
* loaded Chart-1.1
* loaded Chart-tests-1.1
* loaded Chart-cairo-1.1
* loaded Chart-diagrams-1.1
Chart:
>=1.1 && <1.2 (Chart-cairo,Chart-diagrams,Chart-gtk,Chart-tests)
Chart-cairo:
>=1.1 && <1.2 (Chart-gtk,Chart-tests)
Chart-diagrams:
>=1.1 && <1.2 (Chart-tests)
Chart-gtk:
>=1.1 && <1.2 (Chart-tests)
SVGFonts:
>=1.4 && <1.5 (Chart-diagrams)
array:
-any (Chart,Chart-cairo,Chart-gtk,Chart-tests)
base:
>=3 && <5 (Chart,Chart-cairo,Chart-diagrams,Chart-gtk,Chart-tests)
blaze-svg:
>=0.3.3 (Chart-diagrams,Chart-tests)
bytestring:
>=0.9 && <1.0 (Chart-diagrams,Chart-tests)
cairo:
>=0.9.11 (Chart-cairo,Chart-gtk,Chart-tests)
colour:
>=2.2.0 (Chart-diagrams)
>=2.2.1 && <2.4 (Chart,Chart-cairo,Chart-gtk,Chart-tests)
containers:
>=0.4 && <0.6 (Chart-diagrams,Chart-tests)
data-default-class:
<0.1 (Chart,Chart-cairo,Chart-diagrams,Chart-tests)
diagrams-cairo:
>=0.7 && <0.8 (Chart-tests)
diagrams-core:
>=0.7 && <0.8 (Chart-diagrams,Chart-tests)
diagrams-lib:
>=0.7 && <0.8 (Chart-diagrams,Chart-tests)
...
$
As should be evident, all of the imported cabal packages are referenced with consistent version constraints except for colour (which is lacking an upper bound in Chart-diagrams).
The script is pretty straightforward:
import Control.Monad
import Data.List(intercalate)
import System.Environment(getArgs)
import qualified Data.Map as Map
import qualified Data.Set as Set
import Distribution.Package
import Distribution.Version
import Distribution.Verbosity
import Distribution.Text(display)
import Distribution.PackageDescription
import Distribution.PackageDescription.Parse
import Distribution.PackageDescription.Configuration
type VersionRangeS = String
type DependencyMap = Map.Map PackageName (Map.Map VersionRangeS (Set.Set PackageName))
getDependencyMap :: PackageDescription -> DependencyMap
getDependencyMap pd = foldr f Map.empty (buildDepends pd)
where
f :: Dependency -> DependencyMap -> DependencyMap
f (Dependency p vr) = Map.insert p (Map.singleton (display vr) (Set.singleton (pkgName (package pd))))
printMergedDependencies :: [PackageDescription] -> IO ()
printMergedDependencies pds = do
forM_ (Map.toList dmap) $ \(pn,versions) -> do
putStrLn (display pn ++ ":")
forM_ (Map.toList versions) $ \(version,pnset) -> do
putStrLn (" " ++ version ++ " (" ++ intercalate "," (map display (Set.toList pnset)) ++ ")")
where
dmap :: DependencyMap
dmap = Map.unionsWith (Map.unionWith Set.union) (map getDependencyMap pds)
scanPackages :: [FilePath] -> IO ()
scanPackages fpaths = do
pds <- mapM loadPackageDescription fpaths
printMergedDependencies pds
where
loadPackageDescription path = do
pd <- fmap flattenPackageDescription (readPackageDescription silent path)
putStrLn ("* loaded " ++ display (package pd))
return pd
main = getArgs >>= scanPackages
I'd be interested in other tools used for managing suites of cabal configurations.
Monoids are a pretty simple concept in haskell. Some years ago I learnt of them through the excellent Typeclassopedia, looked at the examples, and understood them quickly (which is more than can be said for many of the new ideas that one learns in haskell). However that was it. Having learnt the idea, I realised that monoids are everywhere in programming, but I'd not found much use for the Monoid typeclass abstraction itself. Recently, I've found they can be a useful tool for data analysis...
Monoids
First a quick recap. A monoid is a type with a binary operation, and an identity element:
class Monoid a where
mempty :: a
mappend :: a -> a -> a
It must satisfy a simple set of laws, specifically that the binary operation must be associative, and the identity element must actually be the identity for the given operation:
mappend a (mappend b c) = mappend (mappend a b) c
mappend mempty x = x
mappend x mempty = x
As is hinted by the names of the typeclass functions, lists are an obvious Monoid instance:
instance Monoid [a] where
mempty = []
mappend = (++)
However, many types can be Monoids. In fact, often a type can be a monoid in multiple ways. Numbers are monoids under both addition and multiplication, with 0 and 1 as their respective identity elements. In the haskell standard libraries, rather than choose one kind of monoid for numbers, newtype declarations are used to give instances for both:
newtype Sum a = Sum { getSum :: a }
deriving (Eq, Ord, Read, Show, Bounded)
instance Num a => Monoid (Sum a) where
mempty = Sum 0
Sum x `mappend` Sum y = Sum (x + y)
newtype Product a = Product { getProduct :: a }
deriving (Eq, Ord, Read, Show, Bounded)
instance Num a => Monoid (Product a) where
mempty = Product 1
Product x `mappend` Product y = Product (x * y)
We've now established and codified the common structure for a few monoids, but it's not yet clear what it has gained us. The Sum and Product instances are unwieldy - you are unlikely to want to use Sum directly to add two numbers:
Prelude> :m Data.Monoid
Prelude Data.Monoid> 5+4
9
Prelude Data.Monoid> getSum (mappend (Sum 5) (Sum 4))
9
Before we progress, however, let's define a few more monoid instances, potentially useful for data analysis.
-- assumes: import Prelude hiding (min,max,sum,product); import qualified Prelude as P
data Min a = Min a | MinEmpty deriving (Show)
data Max a = Max a | MaxEmpty deriving (Show)
newtype Count = Count Int deriving (Show)
instance (Ord a) => Monoid (Min a) where
mempty = MinEmpty
mappend MinEmpty m = m
mappend m MinEmpty = m
mappend (Min a) (Min b) = (Min (P.min a b))
instance (Ord a) => Monoid (Max a) where
mempty = MaxEmpty
mappend MaxEmpty m = m
mappend m MaxEmpty = m
mappend (Max a) (Max b) = (Max (P.max a b))
instance Monoid Count where
mempty = Count 0
mappend (Count n1) (Count n2) = Count (n1+n2)
Also some helper functions to construct values of all these monoid types:
sum :: (Num a) => a -> Sum a
sum = Sum
product :: (Num a) => a -> Product a
product = Product
min :: (Ord a) => a -> Min a
min = Min
max :: (Ord a) => a -> Max a
max = Max
count :: a -> Count
count _ = Count 1
These functions are trivial, but they put a consistent interface on creating monoid values. They all have a signature (a -> m) where m is some monoid. For lack of a better name, I'll call functions with such signatures "monoid functions".
Foldable
It's time to introduce another typeclass, Foldable. This class abstracts the classic foldr and foldl functions away from lists, making them applicable to arbitrary structures. (There's a robust debate going on right now about the merits of replacing the list specific fold functions in the standard prelude with the more general versions from Foldable.) Foldable is a large typeclass - here's the key function of interest to us:
class Foldable t where
...
foldMap :: Monoid m => (a -> m) -> t a -> m
...
foldMap takes a monoid function and a Foldable structure, and reduces the structure down to a single value of the monoid. Lists are, of course, instances of Foldable, so we can demo our helper functions:
*Examples> let as = [45,23,78,10,11,1]
*Examples> foldMap count as
Count 6
*Examples> foldMap sum as
Sum {getSum = 168}
*Examples> foldMap max as
Max 78
Notice how the results are all still wrapped with the newtype constructors. We'll deal with this later.
Composition
As it turns out, tuples are already instances of Monoids:
instance (Monoid a,Monoid b) => Monoid (a,b) where
mempty = (mempty,mempty)
mappend (a1,b1) (a2,b2) = (mappend a1 a2,mappend b1 b2)
A pair is a monoid if its elements are monoids. There are similar instances for longer tuples. We need some helper monoid functions for tuples also:
a2 :: (a -> b) -> (a -> c) -> a -> (b,c)
a2 b c = (,) <$> b <*> c
a3 :: (a -> b) -> (a -> c) -> (a -> d) -> a -> (b,c,d)
a3 b c d = (,,) <$> b <*> c <*> d
These are implemented above using Applicative operators, though I've given them more restrictive types to make their intended use here clearer. Now I can compose monoid functions:
*Examples> let as = [45,23,78,10,11,1]
*Examples> :t (a2 min max)
(a2 min max) :: Ord a => a -> (Min a, Max a)
*Examples> foldMap (a2 min max) as
(Min 1,Max 78)
*Examples> :t (a3 count (a2 min max) (a2 sum product))
(a3 count (a2 min max) (a2 sum product))
:: (Num a, Ord a) =>
a -> (Count, (Min a, Max a), (Sum a, Product a))
*Examples> foldMap (a3 count (a2 min max) (a2 sum product)) as
(Count 6,(Min 1,Max 78),(Sum {getSum = 168},Product {getProduct = 8880300}))
It's worth noting here that the composite computations are done in a single traversal of the input list.
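One way to see why a single traversal suffices: for lists, foldMap is equivalent to a single fold (this matches the default definition in Data.Foldable):

-- specialised to lists: map each element into the monoid, combining as we go
foldMapList :: Monoid m => (a -> m) -> [a] -> m
foldMapList f = foldr (mappend . f) mempty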
More complex calculations
Happy with this, I decided to extend my set of basic computations with the arithmetic mean. There is a problem, however. The arithmetic mean doesn't "fit" as a monoid - there's no binary operation such that the mean of a combined data set can be calculated from the means of two subsets.
What to do? Well, the mean is the sum divided by the count, both of which are monoids:
newtype Mean a = Mean (Sum a,Count) deriving (Show)
instance (Num a) => Monoid (Mean a) where
mempty = Mean mempty
mappend (Mean m1) (Mean m2) = Mean (mappend m1 m2)
mean v = Mean (Sum v,Count 1)
So I can calculate the mean if I am prepared to do a calculation after the foldMap:
*Examples> let as = [45,23,78,10,11,1.5]
*Examples> foldMap mean as
Mean (Sum {getSum = 168.5},Count 6)
*Examples> let (Mean (Sum t,Count n)) = foldMap mean as in t / fromIntegral n
28.083333333333332
The Aggregation type class
For calculations like mean, I need something more than a monoid. I need a monoid for accumulating the values, and then, once the accumulation is complete, a postprocessing function to compute the final result. Hence a new typeclass to extend Monoid:
{-# LANGUAGE TypeFamilies #-}
class (Monoid a) => Aggregation a where
type AggResult a :: *
aggResult :: a -> AggResult a
This makes use of the type families ghc extension. We need this to express the fact that our postprocessing function aggResult has a different return type to the type of the monoid. In the above definition:
- aggResult is a function that gives you the value of the final result from the value of the monoid
- AggResult is a type function that gives you the type of the final result from the type of the monoid
We can write an instance of Aggregation for Mean:
instance (Fractional a) => Aggregation (Mean a) where
type AggResult (Mean a) = a
aggResult (Mean (Sum t,Count n)) = t/fromIntegral n
and test it out:
*Examples> let as = [45,23,78,10,11,1.5]
*Examples> aggResult (foldMap mean as)
28.083333333333332
*Examples>
Nice. Given that aggResult (foldMap ...) will be a common pattern, let's write a helper:
afoldMap :: (Foldable t, Aggregation a) => (v -> a) -> t v -> AggResult a
afoldMap f vs = aggResult (foldMap f vs)
In order to use the monoids we defined before (sum,product etc) we need to define Aggregation instances for them also. Even though they are trivial, it turns out to be useful, as we can make the aggResult function strip off the newtype constructors that were put there to enable the Monoid typeclass:
instance (Num a) => Aggregation (Sum a) where
type AggResult (Sum a) = a
aggResult (Sum a) = a
instance (Num a) => Aggregation (Product a) where
type AggResult (Product a) = a
aggResult (Product a) = a
instance (Ord a) => Aggregation (Min a) where
type AggResult (Min a) = a
aggResult (Min a) = a
instance (Ord a) => Aggregation (Max a) where
type AggResult (Max a) = a
aggResult (Max a) = a
instance Aggregation Count where
type AggResult Count = Int
aggResult (Count n) = n
instance (Aggregation a, Aggregation b) => Aggregation (a,b) where
type AggResult (a,b) = (AggResult a, AggResult b)
aggResult (a,b) = (aggResult a, aggResult b)
instance (Aggregation a, Aggregation b, Aggregation c) => Aggregation (a,b,c) where
type AggResult (a,b,c) = (AggResult a, AggResult b, AggResult c)
aggResult (a,b,c) = (aggResult a, aggResult b, aggResult c)
This is mostly boilerplate, though notice how the tuple instances delve into their components in order to postprocess the results. Now everything fits together cleanly:
*Examples> let as = [45,23,78,10,11,1.5]
*Examples> :t (a3 count (a2 min max) mean)
(a3 count (a2 min max) mean)
:: Ord a => a -> (Count, (Min a, Max a), Mean a)
*Examples> afoldMap (a3 count (a2 min max) mean) as
(6,(1.5,78.0),28.083333333333332)
*Examples>
The 4 computations have all been calculated in a single pass over the input list, and the results are free of the type constructors that are no longer required once the aggregation is complete.
Another example of an Aggregation where we need to postprocess the result is counting the number of unique items. For this we will keep a set of the items seen, and then return the size of this set at the end:
newtype CountUnique a = CountUnique (Set.Set a)
instance Ord a => Monoid (CountUnique a) where
mempty = CountUnique Set.empty
mappend (CountUnique s1) (CountUnique s2) = CountUnique (Set.union s1 s2)
instance Ord a => Aggregation (CountUnique a) where
type AggResult (CountUnique a) = Int
aggResult (CountUnique s1) = Set.size s1
countUnique :: Ord a => a -> CountUnique a
countUnique a = CountUnique (Set.singleton a)
... in use:
*Examples> let as = [5,7,8,7,11,10,11]
*Examples> afoldMap (a2 countUnique count) as
(5,7)
Higher order aggregation functions
All of the calculations seen so far have worked consistently across all values in the source data structure. We can make use of the mempty monoid value to filter our data set, and/or to aggregate in groups. Here's a couple of higher order monoid functions for this:
afilter :: Aggregation m => (a -> Bool) -> (a -> m) -> (a -> m)
afilter match mf = \a -> if match a then mf a else mempty
newtype MMap k v = MMap (Map.Map k v)
deriving Show
instance (Ord k, Monoid v) => Monoid (MMap k v) where
mempty = MMap (Map.empty)
mappend (MMap m1) (MMap m2) = MMap (Map.unionWith mappend m1 m2)
instance (Ord k, Aggregation v) => Aggregation (MMap k v) where
type AggResult (MMap k v) = Map.Map k (AggResult v)
aggResult (MMap m) = Map.map aggResult m
groupBy :: (Ord k, Aggregation m) => (a -> k) -> (a -> m) -> (a -> MMap k m)
groupBy keyf valuef = \a -> MMap (Map.singleton (keyf a) (valuef a))
afilter restricts the application of a monoid function to a subset of the input data - eg to calculate the sum of all the values, and the sum of values less than 20:
*Examples> let as = [5,10,20,45.4,35,1,3.4]
*Examples> afoldMap (a2 sum (afilter (<=20) sum)) as
(119.8,39.4)
groupBy takes a key function and a monoid function. It partitions the data set using the key function, and applies a monoid function to each subset, returning all of the results in a map. Non-numeric data works better as an example here. Let's take a set of words as input, and for each starting letter, calculate the number of words with that letter, the length of the shortest word, and the length of the longest word:
*Examples> let as = words "monoids are a pretty simple concept in haskell some years ago i learnt of them through the excellent typeclassopedia looked at the examples and understood them straight away which is more than can be said for many of the new ideas that one learns in haskell"
*Examples> :t groupBy head (a3 count (min.length) (max.length))
groupBy head (a3 count (min.length) (max.length))
:: Ord k => [k] -> MMap k (Count, Min Int, Max Int)
*Examples> afoldMap (groupBy head (a3 count (min.length) (max.length))) as
fromList [('a',(6,1,4)),('b',(1,2,2)),('c',(2,3,7)),('e',(2,8,9)),('f',(1,3,3)),('h',(2,7,7)),('i',(5,1,5)),('l',(3,6,6)),('m',(3,4,7)),('n',(1,3,3)),('o',(3,2,3)),('p',(1,6,6)),('s',(4,4,8)),('t',(9,3,15)),('u',(1,10,10)),('w',(1,5,5)),('y',(1,5,5))]
Many useful data analysis functions can be written through simple function application and composition using these primitive monoid functions, the product combinators a2 and a3 and these new filtering and grouping combinators.
Disk-based data
As pointed out before, regardless of the complexity of the computation, it's done with a single traversal of the input data. This means that we don't need to limit ourselves to lists and other in memory Foldable data structures. Here's a function similar to foldMap, but that works over the lines in a file:
foldFile :: Monoid m => FilePath -> (BS.ByteString -> Maybe a) -> (a -> m) -> IO m
foldFile fpath pf mf = do
    h <- openFile fpath ReadMode
    m <- loop h mempty
    hClose h
    return m
where
loop h m = do
eof <- hIsEOF h
if eof
then (return m)
else do
l <- BS.hGetLine h
case pf l of
Nothing -> loop h m
(Just a) -> let m' = mappend m (mf a)
in loop h m'
afoldFile :: Aggregation m => FilePath -> (BS.ByteString -> Maybe a) -> (a -> m) -> IO (AggResult m)
afoldFile fpath pf mf = fmap aggResult (foldFile fpath pf mf)
As well as the file path, foldFile takes two functions - one to parse each line of the file, the other the monoid function to do the aggregation. Lines that fail to parse are skipped. (I can hear questions in the background: "What about strictness and space leaks??" - I'll come back to that.) As an example usage of afoldFile, I'll analyse some stock data. Assume that I have it in a CSV file, and I've got a function to parse one CSV line into a sensible data value:
import qualified Data.ByteString.Char8 as BS
import Data.Time.Calendar
data Prices = Prices {
pName :: String, -- The stock code
pDate :: Day, -- The historic date
pOpen :: Double, -- The price at market open
pHigh :: Double, -- The highest price on the date
pLow :: Double, -- The lowest price on the date
pClose :: Double, -- The price at market close
pVolume :: Double -- How many shares were traded
} deriving (Show)
parsePrices :: BS.ByteString -> Maybe Prices
parsePrices = ...
Now I can use my monoid functions to analyse the file based data. How many google prices do I have, over what date range:
*Examples> let stats = afilter (("GOOG"==).pName) (a3 count (min.pDate) (max.pDate))
*Examples> :t stats
stats
:: Prices
-> (Count,
Min time-1.4:Data.Time.Calendar.Days.Day,
Max time-1.4:Data.Time.Calendar.Days.Day)
*Examples> afoldFile "prices.csv" parsePrices stats
(1257,2008-05-29,2013-05-24)
*Examples>
Perhaps I want to aggregate my data per month, getting traded price range and total volume. We need a helper function to work out the month of each date:
startOfMonth :: Day -> Day
startOfMonth t = let (y,m,_) = toGregorian t
in fromGregorian y m 1
And then we can use groupBy to collect data monthly:
*Examples> let stats = afilter (("GOOG"==).pName) (groupBy (startOfMonth.pDate) (a3 (min.pLow) (max.pHigh) (sum.pVolume)))
*Examples> :t stats
stats
:: Prices
-> MMap
time-1.4:Data.Time.Calendar.Days.Day
(Min Double, Max Double, Sum Double)
*Examples> results <- afoldFile "prices.csv" parsePrices stats
*Examples> mapM_ print (Map.toList results)
(2008-05-01,(573.2,589.92,8073107.0))
(2008-06-01,(515.09,588.04,9.3842716e7))
(2008-07-01,(465.6,555.68,1.04137619e8))
...
(2013-03-01,(793.3,844.0,4.2559856e7))
(2013-04-01,(761.26,827.64,5.3574633e7))
(2013-05-01,(816.36,920.6,4.1080028e7))
Conclusion
So, I hope I've shown that monoids are useful indeed. They can form the core of a framework for cleanly specifying quite complex data analysis tasks.
An additional typeclass which I called "Aggregation" extends Monoid and provides for a broader range of computations and also cleaner result types (thanks to type families). There was some discussion when I presented this talk as to whether a single method typeclass like Aggregation was a "true" abstraction, given it has no associated laws. This is a valid point, however using it simplifies the syntax and usage of monoidal calculations significantly, and for me, this makes it worth having.
There remains an elephant in the room, however, and this is space leakage. Lazy evaluation means that, as written, most of the calculations shown run in space proportional to the input data set. Appropriate strictness annotations and related modifications will fix this, but it turns out to be slightly irritating. This blog post is already long enough, so I'll address space leaks in a subsequent post...
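To give a flavour of what's involved, here's a first cut for lists (a sketch only - foldl' forces the accumulator to weak head normal form at each step, but for nested types like Mean you also need strict fields inside the monoid itself, which is where the irritation lies):

import Data.List (foldl')

-- a left-fold variant of afoldMap that forces the accumulated monoid
-- value at each step, instead of building a chain of thunks
afoldMap' :: Aggregation m => (v -> m) -> [v] -> AggResult m
afoldMap' f = aggResult . foldl' (\m v -> mappend m (f v)) mempty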
- Various efforts at applying Functional Reactive Programming (FRP) to GUIs. These are somewhat experimental, and tend to be proof of concepts implementing a small range of GUI features (several of these libraries are listed here).
- The full blown toolkits which provide a comprehensive imperative binding to mainstream toolkits. The two key contenders here are gtk2hs and wxHaskell.
Whilst enticing, the FRP approach doesn't currently look appropriate for building rich GUI applications. wxHaskell and gtk2hs at least provide the functionality required, but the low level imperative approach based in the IO monad is tedious to a fluent haskell developer. Here's a code snippet:
b <- buttonNew
image <- imageNewFromStock stockAdd IconSizeSmallToolbar
containerAdd b image
set b [buttonRelief := ReliefNone]
on b buttonActivated $ do
    ... button activated action ...
It's not hard to write this sort of code, but it is tedious, especially considering the amount that is required to build a whole application.
This post outlines my experiments to reduce the amount of imperative code required for GUIs, yet retaining compatibility with the imperative toolkits. Initially I've been focussed on "value editors" (VEs) aka "forms". These are GUI components to capture/edit values of ideally arbitrary complexity. I've two key goals, composability and abstraction.
Composability: I want to be able to compose my value editors effortlessly. Whilst the existing toolkits let you compose widgets using containers and glue code, it's verbose indeed.
Abstraction: I'd like to define my VEs independently from the underlying toolkit. But I'm looking for something more than a thin layer over the existing toolkits. I want to define my VEs in terms of the structure of the values involved, and worry about the formatting and layout later, if at all.
If we take this abstraction far enough, it should be possible to reuse our structural VEs definitions beyond gtk2hs and wxWindows. For example, a JSON generator+parser pair can be considered a VE - in the sense that to edit a value, one can generate the json text, edit the text, and then parse to recover the new value. Of course, it's likely to be a balancing act between abstraction and functionality - we'll have to see how this pans out.
An Abstract UI
OK, enough preamble, here's a GADT I've devised to capture VEs:
-- | A GADT describing abstracted, user interface components for manipulating
-- values of type a.
data VE a where
-- | A String field
Entry :: VE String
-- | An enumeration. A list of label string are supplied,
-- the VE value is the integer index of the selected label.
EnumVE :: [String] -> VE Int
-- | Annotate a VE with a text label
Label :: String -> VE a -> VE a
-- | A "product" VE that combines values from two other VEs
AndVE :: (VE a) -> (VE b) -> VE (a,b)
-- | A "sum" VE that captures the value from either of two other VEs
OrVE :: (VE a) -> (VE b) -> VE (Either a b)
-- | A VE for manipulating a list of values. The supplied function lets
-- the VE display the list items to the user (eg for selection).
ListVE :: (a->String) -> VE a -> VE [a]
-- | Convert a VE over a type a, to a VE over a type b, given
-- the necessary mappings. Either String captures the potential
-- failure in the mapping.
MapVE :: (a -> Either String b) -> (b -> a) -> VE a -> VE b
-- | Annotate a VE with a default value
DefaultVE :: a -> VE a -> VE a
-- A typeclass to build VEs
class HasVE a where
mkVE :: VE a
(.*.) = AndVE
(.+.) = OrVE
infixr 5 .*.
infixr 5 .+.
And here's an example usage for a simple data type:
data Gender = Male | Female deriving (Show,Enum)
data Person = Person {
st_name :: String,
st_age :: Int,
st_gender :: Gender
} deriving (Show)
instance HasVE Person
where
mkVE = MapVE toStruct fromStruct
( Label "Name" nonEmptyString
.*. Label "Age" mkVE
.*. Label "Gender" mkVE
)
where
toStruct (a,(b,c)) = Right (Person a b c)
fromStruct (Person a b c) = (a,(b,c))
nonEmptyString :: VE String
nonEmptyString = ...
instance HasVE Int ...
instance HasVE String ...
instance HasVE Gender ...
This captures in some sense the abstract semantics for an editor of Person values. We need to capture:
- a non-empty string for the name,
- an integer for the age
- a gender enumeration
and know how to pack/unpack these into a person value.
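Incidentally, the elided definitions above are straightforward to sketch (hypothetical code, not the post's actual implementations) - nonEmptyString wraps Entry in a validating MapVE, and the Gender instance reuses EnumVE via the derived Enum instance:

nonEmptyString :: VE String
nonEmptyString = MapVE checked id Entry
  where
    checked "" = Left "value must not be empty"
    checked s  = Right s

instance HasVE Gender where
  mkVE = MapVE (Right . toEnum) fromEnum (EnumVE ["Male","Female"])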
A GTK UI
But what can we do with this? We need to turn this abstract VE into a concrete UI. There's a library function to do this for an arbitrary VE:
data GTKWidget a = GTKWidget {
ui_widget :: Widget,
ui_set :: a -> IO (),
ui_get :: IO (ErrVal a),
ui_reset :: IO ()
}
uiGTK :: VE a -> IO (GTKWidget a)
The uiGTK function turns our abstract VE a into a GTK component for editing a value of type a. In addition to building the compound widget, it gives us functions to:
- put a value into the widget
- recover a value from the widget
- restore the widget to a default value
A higher level function constructs a modal dialog to get a value of type a from the user.
data ModalDialog a = ModalDialog {
md_dialog :: Dialog,
md_gw :: GTKWidget a,
md_run :: IO (Maybe a)
}
modalDialogNew :: String -> VE a -> IO (ModalDialog a)
Hence running this:
dialog <- modalDialogNew "Example 2" (mkVE :: VE Person)
ma <- md_run dialog
Results in this:

The automatically generated dialog is simple, but quite functional:
- invalid fields have a red background, dynamically updated with each keystroke
- fields have sensible defaults - often invalid, to force entry from the user
More complex UIs are of course possible. As should be clear from the VE GADT above, we support sum and product types, lists, etc, and can map these with arbitrary code. Hence we can construct GTK UIs for a very large range of haskell values. A slightly more complex example composes the previous VE:
data Team = Team {
t_leader :: Person,
t_followers :: [Person]
} deriving (Show)
instance HasVE Team ...
Resulting in:

Recursive types are supported, so it's possible to build GTK VEs for expression trees, etc.
JSON Serialisation
As I alluded to previously, given VE a, we can automatically generate a JSON generator and parser for values of type a:
data VEJSON a = VEJSON {
uj_tojson :: a -> DA.Value,
uj_fromjson :: DA.Value -> Maybe a
}
uiJSON :: VE a -> VEJSON a
Related Work
Well into working on these ideas, I was reminded of two somewhat similar haskell projects: Functional Forms and Tangible Values. Functional Forms aims to ease the creation of wxHaskell dialogs to edit values. The exact purpose of Tangible Values is a little unclear to me, but it appears to be about automatically generating UIs suitable for visualising function behaviour and exploring functional programming.
Future Work
Currently I have a library that implements the VE GADT to automatically build GTK editors and JSON serialisers. There's many ways to progress this work. Some of my ideas follow...
Whilst the generated GTK editor is a sensible default, there are only very limited ways in which the editor can be customised. I envisage a model where the uiGTK function takes an extra parameter akin to a style sheet, giving extra information to control the UI layout, formatting, etc.
I can envisage many other useful things that could automatically be derived from VE definitions:
- equivalent functionality for wxHaskell
- console GUIs
- Funky UIs implemented with primitives more interesting than the standard toolkit widgets: eg zoomable UIs, or UIs more suited to tablet based platforms.
- web GUIs. This could be done by automatically generating javascript and corresponding server side haskell code.
Finally, it might be worth investigating whether the GHC Generics mechanism might be used to automatically generate VE definitions.
So there's plenty of directions this work can go, but right now I want to put it to the test and build an application!
NOTE: If you have too old a version of libc, then you will get an error like "floating point exception" from the binaries in these bindists. You will need to either upgrade your libc (we're not sure what the minimum version required is), or use a binary package built for your distribution instead.
I sure don't want to upgrade libc, and to the best of my knowledge there's no binary package built for RHEL. So, I'll need to build it myself from source. But we need ghc to compile ghc, and to make it worse, we need a version >= 6.10, and the binaries for these won't work with libc 2.5 either. So, our approach needs to be:
- Compile and install 6.10.4 using 6.8.3
- Compile a binary distribution of 7.0.3 using 6.10.4
- Install the 7.0.3 binary distribution
- Compile and install the haskell platform 2011.2.0.1
But wait, as it turns out, the RHEL 5.6 C compiler (gcc 4.1.2) doesn't seem to be compatible with recent ghc builds either, giving errors like:
rts/dist/build/RtsStartup.dyn_o: relocation R_X86_64_PC32 against `StgRun' can
not be used when making a shared object; recompile with -fPIC
(there are some details on the building and troubleshooting ghc page) So, you need a more recent gcc also. I could have built this from source too, but luckily I had a working gcc 4.4.3 build already present. For reference, I needed to download:
- ghc-6.10.4-src.tar.bz2
- ghc-6.8.3-x86_64-unknown-linux.tar.bz2
- ghc-7.0.3-src.tar.bz2
- haskell-platform-2011.2.0.1.tar.gz
And here's the commands used:
# General setup
# Assumes downloaded files are in $BASE/downloads
BASE=/tmp/ghc-dev
GCC443DIR=/opt/gcc4.4.3/bin
mkdir -p $BASE/install
mkdir -p $BASE/build
# Start with a 6.8.3 binary
cd $BASE/build
tar -xjf $BASE/downloads/ghc-6.8.3-x86_64-unknown-linux.tar.bz2
export PATH=/usr/bin:/sbin:/bin
cd $BASE/build/ghc-6.8.3
./configure --prefix $BASE/install/ghc-6.8.3
make install
# Build 6.10.4 from src
cd $BASE/build
tar -xjf $BASE/downloads/ghc-6.10.4-src.tar.bz2
export PATH=$BASE/install/ghc-6.8.3/bin:/usr/sbin:/usr/bin:/sbin:/bin
cd $BASE/build/ghc-6.10.4
./configure --prefix $BASE/install/ghc-6.10.4
make
make install
# Build 7.0.3 from src, using 6.10.4 and gcc 4.4.3
# (gcc 4.1.2 from RHEL doesn't seem to work)
cd $BASE/build
tar -xjf $BASE/downloads/ghc-7.0.3-src.tar.bz2
export PATH=$BASE/install/ghc-6.10.4/bin:$GCC443DIR:/usr/sbin:/usr/bin:/sbin:/bin
cd $BASE/build/ghc-7.0.3
./configure
make
make binary-dist
# Unpack and install the 7.0.3 bin-dist
cd /tmp
rm -rf /tmp/ghc-7.0.3
tar -xjf $BASE/build/ghc-7.0.3/ghc-7.0.3-x86_64-unknown-linux.tar.bz2
cd /tmp/ghc-7.0.3
./configure --prefix $BASE/install/ghc-7.0.3
make install
# Unpack and install the haskell platform
cd $BASE/build
export PATH=$BASE/install/ghc-7.0.3/bin:$GCC443DIR:/usr/sbin:/usr/bin:/sbin:/bin
tar -xzf $BASE/downloads/haskell-platform-2011.2.0.1.tar.gz
cd $BASE/build/haskell-platform-2011.2.0.1
./configure --prefix $BASE/install/ghc-7.0.3
make
make install
Be prepared to chew up some CPU cycles! Pleasingly, once I sorted out the gcc version issue, all of the above worked without problems.
sudo apt-get install libsdl1.2-dev
sudo apt-get install libsdl-mixer1.2-dev
cabal-dev install hbeat
Or at least that's what I first thought. The program fired up ok, but failed to respond to mouse clicks as expected. It turns out that this was a pre-existing bug - if the screen redraws don't happen fast enough, hbeat gets further and further behind in its event processing, eventually ignoring everything. A small code fix (now published to hackage) causes out-of-date redraw requests to be dropped. But why was I seeing this problem now? It seems that since I wrote the software, openGL via SDL has got a lot slower. The compositing window manager (compiz) seems to be the culprit - it consumes significant cpu time whilst hbeat is running. Some references to this can be found here. I guess there's a downside to all those fancy compositing effects. It's a shame hbeat is now a fair bit glitchier than it was before. Maybe sometime I'll look at this, but for now at least it still works.
Step 1
Change the Build-Type field in the cabal file to be "Custom". This means cabal will look for a Setup.hs file to control the build.
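That is, the cabal file gains the line:

Build-Type:     Custom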
Step 2
Create a Setup.hs that autogenerates a haskell module containing the version number. Here's mine:
import Distribution.Simple(defaultMainWithHooks, UserHooks(..), simpleUserHooks )
import Distribution.Simple.Utils(rewriteFile)
import Distribution.Package(packageVersion)
import Distribution.Simple.BuildPaths(autogenModulesDir)
import System.FilePath((</>))
import Data.Version(showVersion)
generateVersionModule pkg lbi = do
let dir = autogenModulesDir lbi
let version = packageVersion pkg
rewriteFile (dir </> "Version.hs") $ unlines
["module Version where"
,"version :: String"
,"version = \"" ++ showVersion version ++ "\""
]
myBuildHook pkg lbi hooks flags = do
generateVersionModule pkg lbi
buildHook simpleUserHooks pkg lbi hooks flags
main = defaultMainWithHooks simpleUserHooks {
buildHook=myBuildHook
}
Step 3
Change your program to access the created Version module. It's actually generated in the ./dist/build/autogen directory, but this seems to be correctly on the source path by default.
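Using the module is then trivial (a minimal sketch - the program name is made up, and depending on your cabal version you may need to list Version in the other-modules field):

import Version (version)

main :: IO ()
main = putStrLn ("myprog version " ++ version)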