Thing Tracker Network

Thing Tracker Site Template

One aspect of the Thing Tracker Network that is perhaps not terribly clear is that it is not dependent on any particular client, library or vendor. This means that anyone could start participating in the network straight away.

As the TTN client I am working on is taking its time, I thought it worthwhile to outline how anyone can start integrating with the network today.

At its heart the TTN is simply a web API: meta-data about Things is stored in JSON-formatted documents and made available on the web. To this end it would be simple for someone to create such a document by hand and host it, either themselves or via a service such as Github, Google Drive, etc.
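To make that concrete, a minimal hand-written Tracker document might look like the following. The attribute names here are illustrative only – consult the TTN specification for the actual schema.

```json
{
  "name": "Example Tracker",
  "description": "Things published by example.org",
  "things": [
    {
      "id": "example-widget",
      "title": "Example Widget",
      "description": "A printable widget.",
      "url": "http://example.org/things/example-widget/",
      "thumbnailUrl": "thumbnails/example-widget.jpg"
    }
  ]
}
```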

Whilst JSON is quite an easy format to read, it isn’t really meant for humans. The TTN client, and similar services, will happily pick up these JSON documents and work with them (searching, indexing, presenting, etc.), but it would be nice to be able to view the information in a human-friendly way.

Therefore I have thrown together a simple site template which, when matched with a TTN Tracker JSON document, will provide a web site suitable for publishing. It is hosted on Github, and there is a demo site on Github Pages and also one on Google Drive (just to show an alternative option).


Screenshot of thing-tracker-site-template



The steps to set up a site are quite straightforward and are also listed in the Github project readme. I will list them here again as they are so few:

  • Clone or copy (zip) the thing-tracker-site-template project.
  • Modify the contents of tracker.json adding meta-data about your Things.
    • Most attributes are self-explanatory, and most are optional so if something is not needed it can usually be deleted. See the specification for details, and feel free to ask in the community for advice.
    • If JSON is new to you then have a look at the JSON website, or perhaps the lighter introduction here.
  • It is recommended to add thumbnail images in the thumbnails folder so they are loaded locally to the website. These can be referenced in the tracker directly, e.g.
  • Be careful to check that the tracker.json is valid – a wrong trailing comma can cause the site to fail to render.  The JSONLint validator may help.
  • Optionally feel free to modify the look and feel of the site. The site uses AngularJS and AngularUI Bootstrap, and the main css file (css\ttn-client.css) is relatively straightforward to modify.
  • Once satisfied upload the site to your favourite hosting provider.
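The trailing-comma warning above is easy to check for yourself: any JSON parser will reject an invalid file, so besides JSONLint you can run a quick sanity check with Node. This is just a sketch; read the tracker file in however suits your setup.

```javascript
// Quick validity check for a tracker document: JSON.parse throws on any
// syntax error (trailing commas included), with a hint of where it failed.
function checkTracker(text) {
  try {
    JSON.parse(text);
    return "tracker.json is valid JSON";
  } catch (err) {
    return "tracker.json is invalid: " + err.message;
  }
}

// A trailing comma after the last attribute is rejected:
console.log(checkTracker('{ "name": "My Tracker", }'));
// ...whereas the same document without it parses fine:
console.log(checkTracker('{ "name": "My Tracker" }'));
```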

Hosting on Github Pages

  • Create a public repository on Github.
    • “thingtracker” would be a good name as the resulting website would be hosted at https://[user name]
    • Remember also that custom domain names can be pointed at Github Pages.
  • Clone locally and create a branch named “gh-pages”.
  • Copy your thing-tracker-site-template into this folder.
  • Commit and push to github.
  • After a short while the site should be available via https://[user name][project name].

Hosting on Google Drive

  • Create a folder in Google Drive.
  • Copy your thing-tracker-site-template into this folder.
  • Edit the share permissions so it is public.
  • Make a note of the folder id from the address bar, e.g.
  • The site should then be available via [folder id]/


Obviously this approach has several shortcomings.

Editing the tracker file in JSON is not terribly user friendly

At first I thought that having to edit the tracker.json file directly would be a show-stopper, but I’ve come around to the idea that there are several benefits: it can be edited in any text editor; it can be edited directly online in Github; the syntax is easy to pick up, etc. Plus, entering all the data in a web form is also quite a hassle, and prone to error.

The disadvantages are that the format has to be correct in order for the site to render (trailing commas are a good example of where it’s easy to trip up), double quotation marks have to be escaped, and it is probably not very clear to non-techies how to approach editing the file.

Therefore the TTN client, and perhaps future iterations of the site-template, will offer a more accessible forms-based approach to entering and modifying the data.

Having all things listed in one tracker file is not terribly scalable

This is actually already solved but not yet documented or fully tested. Each Thing in the Tracker could instead consist of a “refUrl” attribute pointing to another JSON document which contains the Thing information. Take a look at the recently updated Specification page for more details.
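As a sketch of what that might look like, here is a Tracker whose Things are just references to their own documents. Attribute names are illustrative; the Specification page is the definitive source.

```json
{
  "name": "Example Tracker",
  "things": [
    { "id": "example-widget", "refUrl": "things/example-widget.json" },
    { "id": "another-widget", "refUrl": "things/another-widget.json" }
  ]
}
```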

The resulting website is very simplistic

There will be no design prizes awarded for the web site. However it was quick to set up, neutral in tone, and is similar in nature to the Githubiverse design. Anyone with basic skills in web design will be able to modify the CSS in order to spruce up the look and feel. Who knows, perhaps TTN Site “Skins” will start making the rounds?

The website is missing numerous useful features

Search being the foremost example, commenting perhaps next. Remember that this is simply a starting point in order to get data into the Network. Once services such as search engines become available it should be trivial to plug these in. Implementing a local keyword search would also be a reasonably easy task if it is desired.
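To illustrate how little a local keyword search would involve, here is a sketch of a client-side filter over the things array. The field names (`title`, `description`) are assumptions based on the examples above, not a guaranteed schema.

```javascript
// Minimal client-side keyword search over a tracker's things:
// case-insensitive match against title and description.
function searchThings(things, query) {
  const q = query.toLowerCase();
  return things.filter(t =>
    (t.title || "").toLowerCase().includes(q) ||
    (t.description || "").toLowerCase().includes(q)
  );
}

const things = [
  { id: "t1", title: "Printable Widget", description: "A widget." },
  { id: "t2", title: "Gear Set", description: "Widget gears." },
  { id: "t3", title: "Case", description: "An enclosure." }
];

console.log(searchThings(things, "widget").map(t => t.id)); // t1 and t2 match
```

In an AngularJS site like the template, much the same effect falls out of a filter expression bound to a search box.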

If anyone has more suggestions I would be happy to hear about them. Feel free to raise issues in the Github project or simply comment below.

I should mention that I consider this to be a partial replacement of Githubiverse, which suffered from two flaws. Being a pure client-side solution, there is no (secure) way to provide keys for the Github API, and the default number of calls is simply not usable. Secondly, it assumes that projects are stored in Github, which of course may not be the case. The TTN Site Template requires more work to set up and maintain (at the moment) but it is more flexible, not vendor-specific, and a step towards populating the Thing Tracker Network.

So I hope this article gives an idea of how accessible and open the Thing Tracker Network aims to be.  As ever, let me know your thoughts and opinions.


TTN Client – Brain Dump

A post by Jason Gullickson has prompted a long overdue status report/brain dump on TTN. Since the last update very little has happened and so I want to go over why that is, and also play around with some ideas of how to get going again.

Getting the TTN Client to beta quality (or even alpha-alpha quality) has proven to be a bit more of a challenge than I expected. Finding the time, and motivation, to tackle some of the challenges, on top of work and family commitments, has been hard.

A challenge I mentioned in my last post was how to reconcile a distributed model with the fact that many of the participants may be off-line for the majority of the time. This led me to develop a file-based model for the client which could also be used as a static website – allowing it to be transferred to an arbitrary hosting provider. That in turn leads to the question of how to handle multiple revisions of a particular design without having to transfer and host copious amounts of data. This led me to investigate using a git-based model, which would have the benefit of integrating more easily with Github, but sadly the state of Javascript git libraries is such that this is not a trivial thing to implement.

Another challenge is to handle gracefully the synchronisation and caching of information between TTN nodes. Another is how to secure the network, or at least provide safeguards against malicious nodes and actors. Another is how to provide search capabilities… and so the list grows.

The client currently allows the following:

  • When starting the first time the client creates a set of keys and connects to the DHT via predefined bootstrap nodes.
  • Multiple Trackers can be created and Things added to them. Basic metadata can be defined which follows the TTN Specification. The metadata can be edited, but the screens have no CSS applied at the moment so they look very basic.
  • The Tracker and Things are made available over a basic REST-like Web Service. Separating the method of publishing the TTN data from the DHT network has the benefit of making it available to a wider range of potential clients.
  • Other nodes can be added to the client by their ID. The trackers of the remote node are then loaded and can be browsed – although I think there is a bug whereby the client loses track of the node when looking up its tracker and can’t find its things – but I have to double check.
  • The remote tracker and things are cached locally. The client does not yet periodically check to see if there are any updates. Currently the cache has to be purged by hand if the client should retrieve the data again.
  • A basic messaging system exists, whereby a node will forward a message it receives to each of its siblings – this would allow a way for nodes to announce updates to the network. An alternative may be to use RSS to publish updates, and it probably makes sense to do both so that a wider range of clients can interact.
  • The client can also read a Tracker which might not originate from a TTN Node, i.e. a website or something. I’m not terribly happy about how I have implemented it though (it uses the concept of a dummy TTN Node to wrap the site) and I don’t think it works very well yet. The goal would be that the client could read TTN trackers from any number of sources, not just participants of the DHT network.
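The REST-like publishing mentioned above can be reduced to a small set of handlers. To be clear, this is only an illustration: the real client uses Restify, and the paths and payload shapes below are my own placeholders, not the client’s actual API.

```javascript
// Illustrative route handlers for publishing a tracker over HTTP.
// The data shapes here are hypothetical stand-ins for the TTN Specification.
const tracker = {
  name: "Example Tracker",
  things: [{ id: "example-widget", title: "Example Widget" }]
};

// GET /tracker -> the whole tracker document
function getTracker() {
  return { status: 200, body: tracker };
}

// GET /tracker/things/:id -> a single thing, or 404 if unknown
function getThing(id) {
  const thing = tracker.things.find(t => t.id === id);
  return thing
    ? { status: 200, body: thing }
    : { status: 404, body: { error: "unknown thing: " + id } };
}

console.log(getTracker().status);               // 200
console.log(getThing("example-widget").status); // 200
console.log(getThing("nope").status);           // 404
```

Wiring functions like these into Restify (or any HTTP framework) is then a thin layer, which is exactly why separating the data from the DHT widens the range of potential clients.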

Just writing these few lines reinforces to me just how much the client is trying to achieve. One of my thoughts on tackling this is modularisation. Currently the client has too many responsibilities, and is trying to cover too many use cases, I think. I’ll try and enumerate the basic requirements:

  • Store metadata about Things in the form of Trackers.
  • Provide a decentralised way of sharing this information.
  • Discover other Trackers and Things.
  • Present this information to the user.

The devil of course is in the detail. Expanding on these:

  • A Thing may have several revisions.
  • A remote Tracker may be off-line.
  • A remote Tracker may contain malicious content.
  • The client should run on various platforms, possibly running headless on a server, host, Raspberry Pi, etc.
  • The metadata should be accessible to non-TTN clients, i.e. via an API.
  • …etc…

So perhaps instead of the client attempting to do all of these things it should be split up into smaller modules, each tackling one concern: a module which does nothing else apart from manage the actual Trackers; another which provides an API for the Trackers; another which connects to the DHT; another which publishes an RSS feed; another which searches for other Trackers, etc.

I also suspect that I should try not to cover so many bases at first, and just produce a limited, but stable, solution that is perhaps only understandable by fellow programmers/hackers, and then expand on it to make it more suitable for a wider audience.

In any case I hope that I can get past this current blockage and start producing some tangible results soon.


TTN Relays

In part prompted by Marcus Wolschon’s blog post about leaving Thingiverse, I have been thinking a bit more about Githubiverse and its potential role in the Thing Tracker Client I am writing.

Githubiverse is a nice little hack which takes advantage of Github Pages to serve a basic landing page for 3D printing projects. It has the nice advantage that it is all client-side code and requires no web back-end (other than a Github repository of course), utilising the Github API to retrieve the data it needs. Unfortunately Github then decided (quite reasonably) to limit the number of anonymous requests that can be made by a client to 60 per hour, and considering each Githubiverse page refresh requires several API calls this is soon exhausted. Because all the code for Githubiverse is publicly viewable it cannot take advantage of any of the authorisation schemes available, such as OAuth.

I have used the page layout of Githubiverse as the basis for displaying things in the TTN client, and have for some time been considering how to add a relay (or mirror) function to the client which will allow a person’s tracker to be available even if their client is off-line (which I presume may be the case for many people – even I shut off my PC for hours of the day!). The important requirement for me is that any relay function should be independent of any particular vendor or product, i.e. everyone should be free to choose where they host any TTN relay node.

My initial thoughts revolved around hosting solutions, perhaps something simple in PHP so it can be used in a wide variety of hosting providers, but even this adds a burden on the end user which would make the TTN client unattractive to some. I think my current plan should provide enough flexibility to satisfy the requirements I have.

The client will create a static snapshot of the tracker which is ready to upload or copy to a web server. This snapshot has two roles, it serves up the information required by other TTN clients, and also provides a basic website where people can browse the tracker and download the content. The relay site will be easy to modify for those wishing to change the look and feel of it, and a future feature may be to make it easily skinnable.

TTN Relay
In terms of hosting the relay there are several reasonable options (Note: all the examples below have the same content, simply hosted in different locations):

  • Google Drive / Dropbox / etc.: By copying the relay to a local folder shared by Google Drive, Dropbox and similar services, it can then be accessed as a regular static website. (I don’t know if this is still possible with recent Dropbox accounts, but with older ones such as mine which have a Public folder it is).
  • Github / Git: Integration with Github should be relatively straightforward: pushing a local copy of the relay to a github-pages branch should provide a similar experience to the current Githubiverse solution. Interfacing with git should work over something like gift or maybe even git.js. And of course it doesn’t have to be restricted to Github; other git repositories could also be used, opening up a host of other options.
  • FTP / Rsync: A third candidate is to upload the relay to a hosting site, and a plethora of synchronisation options are available.

The client should ensure that all internal references are valid for the machine readable aspect of the relay (via valid relative refUrl attributes I think). It could also provide integration to the wider Tracker Network potentially providing search capabilities.

This is all in addition to having the option to host the information directly from the TTN Node itself should one wish to.  The relay concept simply opens up more opportunities for sharing the data – which is the core aim of this project.

Thoughts and opinions are always welcome, either in the comments or on Google+


Thing Tracker Network Client : Status Report

A little while ago I started work on a Thing Tracker Network (TTN) Client, with the intention of having at least a working beta before the end of the year.  This proved to be rather optimistic, and whilst the work is still ongoing, I thought it useful to write a post outlining the goals I have in mind, and the challenges I believe have to be overcome.  At this stage I believe the core design is reasonable, and so feel comfortable asking for opinions, feedback and help.

The client concept differs from many alternatives by it being a distributed system.  Whereas the majority (all?) of the existing 3D printing repositories require registration to a central service, this client, in conjunction with the TTN, requires none, and the various Things are stored in the network itself.  This brings several interesting challenges as I shall list later.

A few of the factors driving the design decisions are:

  • The client should be cross-platform, and easy to install and use for a wide range of people. (Whilst recognising that the majority of makers are technically very competent.)
  • Should not rely on central services.
  • Should offer some degree of security, i.e. the ability to verify and trust nodes and their content within the network.

After several discussions with others about the design, the (current) outline of the client is as follows.

  • A standalone application, built on Node-Webkit, which uses a Distributed Hash Table (based on KadoH) to connect to other nodes in the Thing Tracker Network.
  • Information is passed between nodes via RESTful Web Services (using Restify); this includes sharing information about Things and lists of Things (i.e. Trackers) – following the TTN Specification.
  • Sharing of actual content, i.e. the actual design files, is planned to run over BitTorrent, however, lacking suitable BT client libraries, in the short term it will be implemented as direct transfer over HTTPS.
TTN Client Architecture

Even this simple-sounding app brings with it challenges.


How to ensure that a node’s content is still available even if the originating node is offline?

It has to be assumed that only a small portion of nodes will be online 100% of the time. This is where using BitTorrent for the content distribution might have a real benefit, besides resource sharing etc, as the content would be available on another network.

The current design allows for content files to be stored anywhere, and so they could be made available in the cloud (Dropbox, personal web server, etc), but it would make for a better user experience when such remote storage was seamlessly integrated into the client in some way.

My current thinking is that each person can opt to have their node act as a relay for other people’s trackers. That is, I might decide to store and make available the tracker and content for, say, RichRap’s designs, and should he be offline when someone wishes to download one then the client could ask the network for alternative locations. This fits with my vision of an ecosystem building up around the TTN – where a variety of clients interact in different ways: relays, aggregators, filters, and so on. Someone may decide to donate resources to relay particular designs on behalf of the community; for example the RepRap project could run a client which acts as a relay for all RepRap designs. This would differ from existing offerings in that the original data remains linked to its originating node for purposes of ownership and attribution, plus the data would be replicable by other nodes – i.e. the content is not exclusive to one particular server, rather it just happens to be regularly available at that node.

This is one of the major benefits of this design if we can pull it off: We shouldn’t be dependent on the services of others. If a repository falls offline sometime in the future there should be enough redundancy in the network to ensure that designs remain available.


How to verify that a Thing originates from a given node?

The current design uses Public Keys and Signatures to build a web of trust within the network, to verify that a piece of information originates from the node it claims.  This is an area that will need a lot of work in order to make the system trustworthy.

How to store Things retrieved from possibly untrusted sources?

The app runs inside your network and has access to your local file systems, therefore it is very important that any interaction with external resources be controlled and secured as much as possible. For example, I expected my locally running Avast anti-virus to recognise when the client downloads an infected file, but preliminary tests seem to indicate this is not the case, at least with the original settings which do not scan every file by default. One might be able to use a portable virus scanner, such as ClamAV, but this may conflict with making the client easy to install, and is another component to have to manage.


How to announce new Things, or updates to existing Things, to the network?

This is an area I have yet to implement, but I suspect may turn out to be one of the more interesting parts of the client. Using a DHT to connect nodes may allow for interesting ways of passing information between the nodes. For announcing, some form of message passing approach may be best, or some form of publish-subscribe model. These updates could potentially be stored in the DHT itself.
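One hazard of naive message forwarding between peers is that a cycle of nodes will echo an announcement forever, so any flooding scheme needs duplicate suppression. The node and message shapes below are hypothetical; this only sketches the loop-protection idea.

```javascript
// Naive announcement flooding with duplicate suppression: a node forwards
// a message to its peers unless it has already seen the message id.
function makeNode(id) {
  return { id, peers: [], seen: new Set(), received: [] };
}

function deliver(node, msg) {
  if (node.seen.has(msg.id)) return;   // drop duplicates, stopping loops
  node.seen.add(msg.id);
  node.received.push(msg);
  node.peers.forEach(peer => deliver(peer, msg));
}

// A small cycle: a <-> b <-> c
const a = makeNode("a"), b = makeNode("b"), c = makeNode("c");
a.peers = [b]; b.peers = [a, c]; c.peers = [b];

deliver(a, { id: "announce-1", body: "Thing updated" });
console.log([a, b, c].every(n => n.received.length === 1)); // each node sees it once
```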

How to discover new nodes?

The primary way of including new nodes would be by manually adding the node ID, which is gained “out of band” via forums, blogs, etc. The client would then look up the information for the node, and optionally traverse the trackers (and nodes) that that node itself is following. I can quite imagine that if I were to start following RichRap’s node then I may be interested to follow those that he is following too. In this way the network effect should yield a large set of possible trackers to follow.

An alternative way might be to have a “listening” mode for the client, which listens on the DHT for new nodes and adds them to a list to be reviewed whenever you wish. Furthermore there could be a “promiscuous” mode which listens for new activity and automatically adds the trackers and things it picks up to the local tracker. This is how I imagine an indexing or search-engine client might operate – trawling the network for new information. Again, by promoting the TTN specification, rather than any particular solution, the possibilities as to the type and function of the participants of the network are boundless.


How best to store Thing and Tracker data locally?

If a thing is updated regularly then how do we efficiently record these changes without having many redundant copies cluttering the local hard drive? The immediate idea might be to use something like git to store local Things, but that again may conflict with the ease-of-use/installation driving factor.


How to handle the fact that Things may have multiple versions?

This adds another complication to the network, not only in terms of organisation and storage, as mentioned above, but also in terms of heredity and attribution. How can we effectively track the ancestors which make up a design?


How should the application look and feel? Could it be skinnable, open to extension, etc.?

By basing the design around the TTN, rather than a particular client, the hope is that others can take the client as a basis for their own designs, or build something from scratch. By making the information available over RESTful web services the barrier to creating a client is greatly reduced.

Interestingly the UI aspect of the client appears to take a large proportion of the time, not due to technical reasons, but because of trying to decide how to make the client accessible to non-technical people.

The application uses AngularJS for the front-end, and this certainly helps reduce the amount of boilerplate in getting the app up and running.

Work In Progress – Tracker Page

Use Case

The above gives a slight indication as to the aspects that I have been considering, and this is by no means an exhaustive list. I’d like now to run through a use case to show how I imagine the client to work in practice. It is quite broad, but also dives deep into the technical details now and again to exemplify aspects of it.


  • First of all someone downloads the latest version of the client and runs it on their local PC.
  • Because it is the first time it has run, the client recognises that there are no local keys for the node and so generates both public and private keys, and uses the hash of the public key as the node id. These values will remain for the life of the node (although we need a way of handling the case when a node is compromised and the keys have to be regenerated – you no doubt see how these seemingly small features give way to ever more issues to deal with!)
  • The user is prompted to optionally provide information about the node, and set particular parameters, such as where to store files locally, whether to automatically cache trackers, etc.
  • The client comes with several well-known bootstrap nodes configured, and these are contacted to announce that the new node is joining the network.


  • The client may optionally come with a set of nodes pre-installed, which are contacted and their trackers retrieved and displayed.
  • The user may choose to browse these trackers and their Things.
  • Should they choose to download the content locally, the target node is queried for the content and if available a zip file of the content is stored locally.
    • Optionally, if the node is verified (more later) then the zip file might automatically be unzipped in the client cache so that individual files can be accessed from within the client. Otherwise the user would have to confirm any action involving the remote content until it is deemed verified, e.g. after it has been scanned for viruses.
    • Here we see an opportunity to use the network to help verify the contents of a particular Thing. Some form of flagging scheme may be useful, whereby nodes can flag content as being malicious or objectionable, and if nodes I trust and have verified – my friends, so to speak – have marked a Thing as being suspicious I might take extra care when working with it, or ignore the Thing, Tracker, or even node, entirely.
  • The user may choose to manually add a node to their list to follow. The node id may have been retrieved via a blog post, email or other means, and is pasted into the client. The remote node has a status of “unverified” at this stage.
  • The remote node is contacted and a list of trackers is returned; these are displayed in the client.
    • If the node is not available then the client may ask the network if anyone else has a cached copy of the node’s trackers.
  • The remote node also returns information about itself, including its public key.
  • The user may then optionally choose to verify the node by providing the fingerprint of the public key belonging to the remote node. This would ideally be obtained out-of-band, via a different medium to the source of the node id, and in a way in which the user can be sure originates from the node owner. Note that at this stage no claims are made as to the security of this method. This is one area which would require input and review from those with much more experience in application security. Personal contact, email, and multiple verified sources may all help in instilling a level of trust that the fingerprint of the node key really originates from the node owner, and that to a certain degree the node can be verified. Furthermore, the option exists to build a web of trust, whereby a node verified by a node I have verified has more weight than an unverified node. This is something to be explored further.


The user then may choose to publish their own Thing in the network.

  • As this is a fresh install no Tracker yet exists, therefore the user would be prompted to create one.  The thinking being that people would be free to create several Trackers, each of which may track different types of projects.  For example one tracker for personal projects, one for RepRap related designs, another for the local hackerspace, etc.
  • To create a Thing, a form would prompt for information about the Thing, including the location of the files that make up the release.
    • The Thing could be saved in draft form so it can be worked on over a period of time.
  • When the release is ready to publish the client will zip the content into a file ready for others to download, add the Thing to the Tracker, and send an announcement to the network.
  • If the user then finds a typo, or other minor fault, in the release, they may choose to edit certain fields whilst the version remains unchanged.  I think that for certain changes it would be wise to ensure that the version is incremented.
Work In Progress – Thing Page


Hopefully this gives an indication of the factors involved in designing and producing such a client.  There’s a whole lot more in addition to the above.  The challenge now is simply finding the time, and motivation, to produce a client that can at least give a reasonable experience for those willing to try it out.  I’m a firm believer that if the Thing Tracker Network Client can be made to work then it could make a huge impact in the maker world, bringing control back to the individual.

I should say that the code so far is available on Github, but I really can’t recommend trying to use it (although I’ll help anyone who wants to have a go): the design is still in a state of flux, and the code quality leaves a lot to be desired as several different approaches and techniques have been tried – it really needs a good refactoring to make it a bit easier to delve into. However, with that warning, the project can be found here:

There is also a test node running on an Amazon EC2 instance with the node id “a0a371095c6193c4b15f366d31ac4911a072e634” if anyone is really adventurous! 🙂





Thing Tracker Network Update

Time flies! It’s been almost two months since I posted about the Thing Tracker Network (TTN). Recently progress has slowed down to a crawl, mainly due to other priorities (RepRap Magazine, work, family, imminent house move, etc, y’know the usual things), and also because I am taking a slight detour with regard to TTN which I want to briefly describe here.

After the initial excitement of announcing the proposal there have been a series of suggestions and improvements which I have attempted to merge into the specification. I think the current version is a reasonable basis for defining Things and the relationships between them, expanding to include the possibility of adding detailed bill-of-materials and tracker metadata, for example.

A quick aside: there are now several resources that make up the infrastructure of TTN which I will quickly list:
  • The website is the public face of the proposal and will be updated on an ad-hoc basis to reflect the current state of the proposal.
  • The Github repository holds the live specification along with examples. In the near future it will hopefully hold reference libraries, tools, and example applications. There is also a wiki and issue tracker to handle documentation and tasks.
  • A Google+ community to hold discussions, and an associated G+ page which is the source of “official” statements (as opposed to my personal thoughts as a fellow hacker).

Several people have mentioned in passing that the technology of the fabled Semantic Web, now known as Linked Data apparently, may be of interest. A little internet research led me to a useful resource in the Linked Data Book, and the more I read of it the more I realised that the TTN sounds like it is basically trying to be a Linked Data vocabulary. Consequently I think it is worth exploring the idea further, to see whether the goal of the TTN should be to define such a vocabulary along with tools and libraries to support it.

There are several considerations should a Linked Data approach be taken.

  • Many of the concerns that have cropped up in discussing TTN have already been discussed in the Semantic Web world, such as identity, security, and reliability.
  • A set of tools and libraries have already been developed which could potentially be utilised.
  • Linked Data seems ideally suited for describing the types of things the TTN is all about.
  • Using the language and specifications of the Semantic Web might interface the TTN to the wider internet of things, as espoused by Tim Berners-Lee.
  • The Linked Data specifications (e.g. RDFa, RDFS, OWL) are, however, more complex than those originally envisaged by TTN, and the learning curve to fully understand how the network works may be steeper.
  • Using a recognised and widely adopted standard would seem preferable to devising a proprietary, albeit open, standard.

From my reading so far, I believe the following would be possible using Linked Data:

  • Define a vocabulary using RDFS (Resource Description Framework Schema) which would be similar (if not identical) to the current TTN JSON schema.
  • Mark up a Thing with metadata attributes using one of the following ways:
    • RDFa (Resource Description Framework in attributes) to embed the TTN attributes within a web page describing a Thing.
    • JSON-LD (JavaScript Object Notation for Linked Data) to describe the Thing in a JSON document.
    • Traditional RDF (Resource Description Framework) in an XML document.
  • Use existing libraries, or build new ones, to interact and work with this data.

In fact I hope that the RDFS & JSON-LD combination might already be very close to the work we have done so far.
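To make the JSON-LD idea a little more concrete, here is a rough sketch of what a Thing might look like. Note that the `ttn:` namespace URL and the attribute names are purely illustrative, nothing here is part of any agreed specification:

```javascript
// A hypothetical sketch of a TTN Thing expressed as JSON-LD.
// The vocabulary namespace (ttn:) and the attribute names are assumptions
// made up for illustration, not part of any published specification.
const thing = {
  "@context": {
    "ttn": "http://thingtracker.net/ns#",     // hypothetical namespace
    "title": "ttn:title",
    "description": "ttn:description",
    "url": { "@id": "ttn:url", "@type": "@id" }
  },
  "@id": "http://example.com/things/cube",
  "@type": "ttn:Thing",
  "title": "Calibration Cube",
  "description": "A 20mm cube for calibrating a RepRap.",
  "url": "http://example.com/things/cube"
};

// A plain JSON consumer can treat it as an ordinary document...
console.log(thing.title); // "Calibration Cube"
// ...while a JSON-LD processor can expand "title" to the full IRI
// "http://thingtracker.net/ns#title" via the @context.
```

The appeal of this combination is that the same document serves both audiences: simple clients just read the JSON keys, while Linked Data tooling gets the full semantics for free.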

To date my playing and experimenting has been limited. I have started formulating an RDF Schema and have added some RDFa to the example Githubiverse site, which appears to be readable according to the few tools I have tried out.

RDFa Source

Snippet from source html. New RDFa attributes highlighted.

Parsed RDFa

RDFa parsed using Green Turtle RDFa extension for Chrome browser.

There is a whole raft of things I still need to learn (help & guidance very welcome!), and this is still only an exploratory dive into Linked Data, but so far I feel it is worth pursuing further.

3D Printroom

3D Printroom

A small distraction I thought up a few days ago involves how we organise our STL and gcode files.  The Windows and Linux file explorers are rather limited when dealing with exotic files (no previews, no metadata). Additionally, I don’t know how other people manage their files, but mine are scattered across my main PC, my laptop and a NAS, within arbitrarily named folders filled with overly long file names (in an attempt to capture some metadata). It certainly has its limitations!

file chaos

Chaos! Chaos I tell you!

This seems ripe for an application of some sort (ignoring the patently absurd, if rational, argument to simply impose a little order on things).

I have always enjoyed using Adobe Lightroom to organise and work on my photos – it has several features that could be very useful when dealing with 3D printer files:

  • Directory scanning & sync
  • Arbitrary collections and groups
  • Adding additional metadata, including comments and ratings
  • Previews of non-standard files (e.g. RAW)
Adobe Lightroom

Adobe Lightroom – something to aspire to.

I wish I could announce that I have completely written a “Lightroom for 3D Printers” but at this stage I have only created a skeleton app with limited functionality:

  • Drop a directory in the top left panel to load it.
  • The top file view shows only STL files.
  • When an STL is selected the related gcode files are displayed below. This assumes the files start with the same name, e.g. cube.stl -> cube.gcode.
  • Any metadata in the gcode file is displayed in a table view (Slic3r only).
  • Double clicking an STL file displays a preview (thanks to Tony Buser’s Thingiview).
  • Double clicking a gcode file displays a preview (thanks to Joe Walnes’ gcode-viewer).

3D Printroom (Alpha!)

… and that’s it … so far.

The reason I am posting about this already is to share the idea, gauge interest, and to make the source code available in case anyone wants to pick it up and hack with it.

I chose to develop it with node-webkit, which allows you to develop with HTML + JS + CSS but still release a native binary. (This is necessary in order to use node.js so the local filesystem can be accessed). I’ve added a few notes on the github project on how to hack with it.  Or there is a Windows binary available to play with:

I’d be interested to hear if anyone thinks such an app would be useful.



Community Thing Tracker Network

Thing Tracker Network

A little time ago, RichRap started a discussion on the RepRap forum about a model repository for the project. My response to this crystallised a few thoughts I have had on the subject since the Thingiverse debacle last year. My basic concern is that, whilst creating a content repository is relatively straightforward from a technical perspective, there are other factors which will determine whether it succeeds. The most obvious is the catch-22 situation whereby users are attracted to a service that is already being used by many other people, i.e. one where they are more likely to have their project or design discovered. There is also the concern that any one provider could evolve into a monopoly, depriving the community of diversity.


With all this in mind, I have been developing the idea a little more and would like to present the resulting proposal for consideration to the community. To do this I have created an information website which I hope can be used as a central hub for distribution and further development of the idea:


The site should have enough information to get the rough idea across, and so I shan’t repeat it all in this blog post, however a TL;DR summary may be helpful. The proposal simply suggests a way in which an ad-hoc Network of Things could be created. This is enabled by a short, flexible, and easy to implement specification, which, when followed, would allow a wide range of individuals and groups to participate in the network. Furthermore, the open nature of the specification should promote development of an ecosystem within the maker community, allowing a diverse set of applications and websites to emerge.


Use Cases

A couple of use-cases may help explain the potential of the proposal.

  • As an individual, when I create a design that I wish to share, I have several services available to me (Thingiverse, GrabCad, etc.). As these are walled gardens, anyone searching within one will not find results from another. Furthermore, a general-purpose search engine such as Google may turn up my Thing, but this is unlikely unless people use just the right search terms. Therefore my options are either to limit the number of people who can discover my Thing, or to submit, and maintain, the Thing in multiple repositories. Neither option is optimal, and the latter is particularly inefficient should my Thing evolve and I have to update the information in several places. Ideally I would submit my Thing to my favourite repository (based on the features that I value), and then have it discoverable by others, independent of where it is hosted. Having the option to self-host the Thing would also be good. What is missing at the moment is a standardised way for web applications to exchange data about Things.
  • As a member of a hackerspace, or other group, we would like to host and share our designs ourselves but still have them discoverable by a wide audience. Furthermore, we would like to provide much more detailed information, such as: bill of materials; instructions; and other information. In this case there may be a sub-Network of Things which has its own ecosystem but is still a part of the wider network.
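To give a flavour of what such an exchange format might look like, here is a hypothetical, minimal tracker document. The field names here are illustrative; the actual specification defines the real attribute set:

```javascript
// A hypothetical, minimal tracker document. Field names are illustrative
// only - see the TTN specification for the real attribute set.
const tracker = JSON.parse(`{
  "version": "1.0",
  "description": "Example hackerspace tracker",
  "things": [
    {
      "id": "led-lamp",
      "title": "LED Lamp",
      "url": "http://example.org/things/led-lamp",
      "license": "CC-BY-SA"
    }
  ],
  "trackers": [
    { "url": "http://example.org/friends/tracker.json" }
  ]
}`);

// Any client that understands the shared format can index the things...
console.log(tracker.things[0].title); // "LED Lamp"
// ...and follow the links to other trackers to crawl the wider network,
// which is what makes the ad-hoc, distributed sub-networks possible.
console.log(tracker.trackers.length); // 1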


Where Next?

To start dissemination of the proposal I made a brief post on Google+ and was pleasantly surprised at the positive feedback the idea received. I shall be collating this information and reflecting it in further revisions of the proposal and specification. In order to facilitate further discussions, without over-thinking the infrastructure, I have created a dedicated page on G+ to act as a blog and to collect discussions, etc.  There is also an email address:


The proposal is at a very early stage and is completely open to everyone,  so feel free to share your ideas, thoughts and opinions. If you feel that an open, distributed, Network of Things would benefit the maker community then please share the links, spread the word and join the discussion.

3D Modelling Materials

Custom Parts for Ikea Lillabo Train Set

The wood composite seems well suited for printing some custom parts I designed to supplement the Lillabo Train Set from Ikea.

The parts printed out well on the first go, with only minimal clean-up required due to the oozing. The circle part of the buffer still had a hole in it even though I set the solid layers to three (infill=20%), but adding a red sticker easily solves that.

It makes me think that a useful feature for a slicer would be to gradually increase the infill for the last few layers, i.e. 20% on layer n-3, 40% on n-2, 80% on n-1 and then 100% on the top.
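As a sketch, the suggested infill ramp could be computed per layer like this (the 20/40/80/100 steps are the ones proposed above; the function and parameter names are made up for illustration):

```javascript
// Sketch of the proposed infill ramp for the top solid layers.
// Layers are 0-indexed; the top layer is totalLayers - 1.
function infillForLayer(layer, totalLayers, baseInfill) {
  const ramp = [0.4, 0.8, 1.0];             // n-2, n-1, n (top)
  const fromTop = totalLayers - 1 - layer;  // 0 for the top layer
  if (fromTop >= ramp.length) return baseInfill; // normal layers, e.g. 20%
  return ramp[ramp.length - 1 - fromTop];
}

// With 10 layers and 20% base infill:
for (let i = 6; i < 10; i++) {
  console.log(i, infillForLayer(i, 10, 0.2));
}
// layers 0-6 stay at 0.2; layer 7 -> 0.4, layer 8 -> 0.8, layer 9 -> 1.0
```

The idea being that each solid layer bridges progressively smaller gaps, so the top surface closes up even when the solid-layer count alone is not enough.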


More Wood Composite Observations

A quick post with a couple more observations on the wood composite together with some photos.

Sam’s Gears

Whilst printing a tray of the excellent Sam’s Gears by pleppik I wanted to reduce ooze, so I increased the speed and also the temperature to 230°C, and midway through, the extruder jammed due to a blocked nozzle. At higher temperatures the composite turns darker and becomes harder, and it may have been this that caused the jam, or possibly some other detritus. It could, of course, also have been caused by something other than the material; however, this is the first time this nozzle has jammed. Generally, going slower and cooler seems to be the best tactic.

I printed the Venetian Lion by tbuser scaled down to 50% and 25%, and with support. The support came away quite easily, but I can’t compare this to other materials as this was the first print I have done with support. The temperature was set to 220°C and the layer height was 0.24mm; this produced a finish more akin to brown PLA than a “wood” finish.

A couple of Simplified Gekkos by CodeCreations were next, to test the hypothesis that visible layers look better with the wood finish. Whilst not terribly beautiful prints, the colour and texture detail, particularly of the heads, are quite pleasing.

Dizingof’s Dragon Bowl came out quite nicely, but highlighted a problem which crops up frequently: the latest version of Slic3r seems to jump around a bit, particularly when infilling, and because of the ooze problem this results in imperfect finishes.

Dragon Bowl

I printed out two copies of the bowl, one at 220°C and another at 185°C. The latter is lighter in colour; this is also clearly visible in the Sam’s Gears print, where the crank handle was printed cool but the gears hot. Another property I am seeing: prints at lower temperatures are much more compressible (spongy) and flexible. The pins I printed for Sam’s Gears were so flexible they were not really usable (hence the bolts instead). The texture at lower temperatures is also more “wooly” and woodlike. Several of the prints, such as the pin board, have quite a look of MDF about them.

Sam’s Gears, Crank Detail

Whilst testing how paint applies to the material I realised that I hadn’t yet tried painting regular PLA. Surprisingly, at least to me, the PLA took the water-based acrylic paint well too – so long as the surface was sanded beforehand. The wood composite took the paint well without sanding, apart from the bottom plane, which is quite glassy. This would need to be sanded before applying a coat, otherwise the paint peels off quite easily.


More Wood Composite Printing

A short post to give some more feedback about working with the wood composite. I decided to print out “Working Micro Gear Heart Keychain” by CrazyJaw.

Heart Gears – Under a Fluorescent Lamp

The stuff really sticks nicely to the bed, without the bed having to be heated, and with no curling. The gear heart trays are quite small so I will try to print something larger to see how well it copes.

It acts largely like PLA, but oozes much more. I had to up my retract settings a little, and it still blobbed quite a bit. I was printing at 195°C, so a cooler hot-end might reduce the ooze. I chose 195°C because I got the impression that at 185°C it came out a little rough. Now that I have a couple of prints done I will be more daring and print at 180°C for longer.

An interesting observation is that the visibility of the layers, which is usually something we try to minimise, becomes an aesthetic feature. It gives the print more of a wood-grain look. The photos were taken in a range of lighting conditions to try to convey how it looks.

Heart Gear – Daylight

The parts still feel, and act, like a polymer, which is to be expected, but they have a much rougher finish – at least at the temperatures I have been using so far.  Directly after printing it has a rather spongy feel and after a while it hardens somewhat, but it is generally a bit more rubbery and compressible than PLA.  Sanding, drilling and cutting are more similar to working with PLA than with wood, unsurprisingly. Sanding makes the surface much lighter and detracts from the overall appearance. I used various grades of paper, including wet&dry, with some success, but bringing out the Dremel makes it much lighter work, remembering to keep the revs nice and low.

I also drilled and tapped a small test piece for a 3mm bolt. It holds well though I would be wary of putting it under any serious load – as with PLA.

I also applied some water-based acrylic paint to a piece (not shown) and it appears to go on well. Once it’s dry I will report on how it looks and holds up.


So far it’s a fun material to work with, and the aesthetic quality will definitely add something new to 3D printing. From my very limited testing I would be wary about using it for precise or complicated engineering parts and would probably lean towards structural components. However, my machine doesn’t produce very high quality prints, plus I’ve only just begun playing with it, so there may be much more to this material than I have discovered so far.

I should state that the gears don’t actually turn, but this is due to my being unable to print the pins to a sufficient quality. A more tightly calibrated machine could well produce a working copy with this material.