We stretched this release out a bit so that we could add two major new features: the Reporting Node and the Inception Node. Both are things we’ve wanted to add for quite a while, but other more pressing matters always got in the way.
Generally speaking, people want to see some sort of graphical summary of their data. While there’s nothing stopping you from using the reporting tool of your choosing with Transdata, we’ve added a node that will export data directly to Zoho Reports. We chose Zoho for its ease of use, professional features, and availability of a free version.
Using the Reporting Node is easy. Simply connect the outputs of any nodes whose data you want to send to the same Zoho Reports database, enter your account credentials, and run your model. Want to send data to multiple databases (or use multiple Zoho accounts)? No problem. Just use multiple Reporting Nodes. Data is sent every time you run your model, so your reports will always be up to date.
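Conceptually, a reporting node is just a sink: whatever rows reach it get pushed to an external service on every run. Here is a minimal Python sketch of that idea — the `ReportingNode` class and its `send` callback are illustrative stand-ins, not Transdata's actual interface or the Zoho API:

```python
# Illustrative sketch only: Transdata's real Reporting Node is configured in
# the GUI. This models the idea of a sink node that forwards every row it
# receives to a reporting backend each time the model runs.

class ReportingNode:
    def __init__(self, send):
        # `send` stands in for an uploader, e.g. a Zoho Reports client.
        self.send = send
        self.inputs = []

    def connect(self, upstream_rows):
        # Any number of upstream nodes can feed the same reporting node.
        self.inputs.append(upstream_rows)

    def run(self):
        # Data is pushed on every run, so reports stay current.
        for rows in self.inputs:
            self.send(rows)

sent = []
report = ReportingNode(send=sent.extend)
report.connect([{"region": "east", "total": 42}])
report.connect([{"region": "west", "total": 17}])
report.run()
# `sent` now holds both upstream outputs, ready for the reporting service.
```

Multiple databases or accounts fall out naturally: each `ReportingNode` carries its own destination, so you just add another node.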
We went back and forth (okay, we argued) for a couple of weeks on what to name this node. Model Node was just confusing. Nesting Node tends to conjure up visions of horrible Excel formulas or IF statements, and that's not what's happening here.
The Inception Node is a self-contained model-as-a-node. Build a model and save it as you normally would, then load it into a different model as a single node. You could think of it as a loadable module, or a black box, or just a way to tidy things up. It’s a great way to reduce the complexity of a large model. Break up the pieces into separate models and then load them as Inception Nodes.
When you run your model containing an Inception Node, the nested model receives input data, executes, and then sends output data back out into the parent model. If the nested model is saved with input data (and you don’t choose to overwrite it) it will be loaded and used as it normally would.
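The execution semantics above can be sketched in a few lines of Python. This is an illustrative sketch of the model-as-a-node idea, not Transdata's actual implementation: the nested model exposes the same run-with-inputs interface as any single node, so the parent can treat it as a black box, and a saved input is used only when the parent doesn't supply data:

```python
# Illustrative sketch only: a nested model behaves like one node because it
# exposes the same rows-in, rows-out interface as any other node.

class Model:
    def __init__(self, steps):
        self.steps = steps  # each step: rows -> rows

    def run(self, rows):
        for step in self.steps:
            rows = step(rows)
        return rows

class InceptionNode:
    def __init__(self, nested_model, saved_input=None):
        self.nested = nested_model
        self.saved_input = saved_input

    def run(self, rows=None):
        # If the parent model supplies data, it overrides the nested
        # model's saved input; otherwise the saved input is loaded and
        # used, just as when the nested model runs on its own.
        data = rows if rows is not None else self.saved_input
        return self.nested.run(data)

# A tiny "cleanup" model saved with its own input data:
cleanup = Model(steps=[lambda rows: [r.strip() for r in rows]])
node = InceptionNode(cleanup, saved_input=[" a ", " b "])
```

With this shape, breaking a large model into separate saved models and loading them as nodes is just composition: the parent never needs to know what happens inside the box.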
This release also includes a number of smaller improvements:

- More import options
- Caching fixes
- Node animations
When we tell people that Transdata is not built on top of an existing database, the standard response is an immediate “Why not?” If you are going to store, retrieve, and transform data, why would you go to the trouble of reinventing the wheel? After all, these days there are plenty of choices. Surely one of them would work, right?
The answer is twofold:
1. It presented an interesting challenge.
2. By starting from scratch, we could guarantee that we wouldn’t be held back by any of the inherent limitations of existing paradigms and that future innovation and improvement would be possible.
The first reason is largely what brought me to the project from my graduate work in artificial intelligence – no small change of subject, though my research involved graph-like data structures, so the leap is smaller than it sounds. Getting to create a totally new system in a field not known for change promised to be fascinating. The second reason is the one that really matters to users. One of the main goals for Transdata, from the very beginning, was that the tool be flexible enough not to hinder the tasks data workers face day in and day out: combining disparate data sources, cleaning up messy or inconsistent data – the sorts of things that are painful (or impossible) to do in spreadsheets. Ragged data was a requirement from the start.
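To make "ragged data" concrete, here is a small Python sketch (the data is invented for illustration). Ragged data means rows that don't share a uniform set of columns – exactly what you get when combining disparate sources. A spreadsheet forces everything into one fixed grid; a tool built for ragged data doesn't have to:

```python
# Illustrative example: two sources with different, overlapping fields.
crm = [{"name": "Ada", "email": "ada@example.com"}]
survey = [{"name": "Ada", "score": 9, "comment": "great"}]

# Combine the sources without padding every row to a common schema.
combined = crm + survey
columns = sorted({key for row in combined for key in row})

# The union of fields exists across the dataset...
assert columns == ["comment", "email", "name", "score"]
# ...but each row keeps only the fields it actually has:
assert "score" not in combined[0]
```

In a spreadsheet, the same merge forces blank cells (or errors) into every row; here the raggedness is simply preserved.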
It wasn’t immediately obvious that we would start from scratch, but we quickly discovered that existing databases either don’t offer the flexibility we needed or, even worse, try to do everything. Either way, they would require so much abstraction between what the user is doing and the actual data manipulation that any efficiency would be completely lost. Whenever possible, I like to write software so that what is happening under the hood is as close as possible to what the user sees. In Transdata, data is stored and moved around very much like in the flowchart you see in your model. Not only is that good from a UX standpoint, but it keeps me sane.
So, was it the right call? Well, we have yet to run into any unsolvable data issues, our codebase is comparatively small and manageable, we get great performance, and we’ve never had anything break due to a change in an external database. Works for me.