OpenSCAD visual editor, Bret Victor style

I feel uncomfortable with traditional click-and-drag visual editors, whether it is pixel-based editing in Photoshop or 3D editing in Tinkercad.

I spend time editing something, but compared to development, the result is very “fragile”: it depends on an exact sequence of pre-planned events, relies on precise drag-and-drop moves, and requires heavy use of undo even with years of mouse-handling skills.

The gap is currently addressed by OpenSCAD, a language that lets you generate 3D models from code. Models written this way can be parametrized and can power tools like Thingiverse Customizer, so you can, for example, make adjustable prosthetics for kids who grow up (and whose dimensions need to adjust).

I feel that the ultimate solution in this space, which is creatively imaginative and visual yet heavily mixed with scientific and precise engineering, is to combine the two in a Bret Victor-esque way (here’s a video in which he talks about his approach and shows some game development tool prototypes which, if I understand correctly, influenced or directly contributed to Swift Playgrounds):

Imagine editing with a visual editor on the right of the screen, adding and removing objects, moving them around, intersecting them and so on, like you would in Tinkercad, but on the left seeing the code change with every edit you make.

For example, you drag an object and the coordinates in the code change, or you press a button to combine one object with another and it inserts a function call with the first object as the first parameter, waiting for you to pick the second object.

Needless to say, if you edit the code, the visual editor would also update to reflect the changes. And if you point at a number in the code and drag up or down, it updates the code and the corresponding visual at the same time.
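
To make this concrete, here is a minimal sketch in JavaScript (my assumption, since such an editor would likely live in a browser) of the number-scrubbing interaction: it finds the numeric literal under the cursor in the OpenSCAD source and nudges it by the drag distance. Re-running OpenSCAD on the updated source to refresh the preview is left out:

```javascript
// Minimal sketch: "scrub" a numeric literal in OpenSCAD source by dragging.
// Feeding the result back to OpenSCAD for re-rendering is not shown.

function scrubNumber(source, cursorOffset, dragDeltaY) {
  var numberPattern = /-?\d+(\.\d+)?/g;
  var match;
  while ((match = numberPattern.exec(source)) !== null) {
    var start = match.index;
    var end = start + match[0].length;
    if (cursorOffset >= start && cursorOffset <= end) {
      // One pixel of vertical drag changes the value by one unit.
      var updated = parseFloat(match[0]) - dragDeltaY;
      return source.slice(0, start) + updated + source.slice(end);
    }
  }
  return source; // cursor was not over a number
}

// Example: dragging up by 5px over the "10" in a translate() call
var code = 'translate([10, 0, 0]) cube([5, 5, 5]);';
console.log(scrubNumber(code, 11, -5));
// -> 'translate([15, 0, 0]) cube([5, 5, 5]);'
```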

This can be combined with a timeline feature similar to the one Fusion 360 has, but in our case, since it works with code, it would not only roll back/replay visual objects, but would also visually represent OpenSCAD commands, highlighting the corresponding part of the code when you click or hover over a step. It could also act as version control for the code (imagine the branching, merging and pull-request workflow of git integrated with it).

Just thought I’d share before it fades into non-being.

Concept: Window into a virtual world

When I first visited the ITP show back in 2005 or so, I realized that I was interested in physical computing, and on my way home from the show I came up with the idea for a project with the ITP spirit – a window into a virtual world.

The idea was to use a notebook or a tablet PC with a compass-and-gyroscope contraption attached to it to browse 360-degree panoramas so that the panorama’s viewport matches the direction the screen is facing. By moving the device around, the user would be able to see the “virtual world on the other side of the portal”.

You can guess why I’m writing about this concept right now: back then I couldn’t imagine that a device like that would ever be available to consumers. And now I’m using one to type this post. Yes, I’m talking about the Apple iPad ;)

So, getting back to the concept: there is no longer any need for the custom hardware I thought was necessary five years ago; all that is needed now is an iPad application.

An OpenGL sphere with a 360-degree panorama mapped to it as a texture, plus some (probably sophisticated) logic to make the device scroll the panorama based on the accelerometers and compass.
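
For the scrolling logic, here is a rough JavaScript sketch of the idea (the same math would apply in a native app): map the compass heading and device tilt to the yaw and pitch of the panorama camera. setViewDirection() is a hypothetical hook into whatever renderer draws the textured sphere:

```javascript
// Rough sketch: steer a panorama viewport from device orientation.
// Uses the browser's DeviceOrientationEvent; in a native app the same
// numbers would come from the compass and gyroscope APIs.
// setViewDirection(yawDegrees, pitchDegrees) is a hypothetical call
// into whatever renderer draws the textured sphere.

window.addEventListener('deviceorientation', function (event) {
  // alpha: compass heading around the vertical axis (0..360)
  // beta:  front-to-back tilt (-180..180), about 90 when held upright
  var yaw = 360 - event.alpha; // rotate the world opposite the device
  var pitch = event.beta - 90; // 0 when the screen is vertical

  setViewDirection(yaw, pitch);
});
```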

Combine it with some sci-fi panoramas or some real-estate panoramas and you have a cool product that blows people’s minds and gets all the blogging attention you need to make your first million in the App Store.

Another app for real-estate brokers that can go along with this one is a panorama maker: just point your camera around the room (maybe in video mode), and augmented with direction and tilt data it can help automate panorama creation, which the broker (or “for sale by owner” enthusiast) then just uploads to the property’s site.

All that is a bit optimistic, obviously, as panorama creation uses some hardcore math, requires significant image processing and often manual involvement, but it can definitely be aided by direction and tilt data.
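
As a toy illustration of how that aid could work, the sketch below uses each frame’s compass heading to pre-place it horizontally in an equirectangular panorama, giving a real stitcher a strong initial guess instead of a blind search. The frame format and field-of-view value are assumptions for the example:

```javascript
// Toy sketch: use per-frame compass headings to pre-place frames
// in an equirectangular panorama before fine stitching.
// Assumed input: frames tagged with the heading (degrees) at capture time.

var panoramaWidth = 4096;  // pixels covering 360 degrees
var cameraFovDegrees = 45; // assumed horizontal field of view

function placeFrames(frames) {
  var pixelsPerDegree = panoramaWidth / 360;
  return frames.map(function (frame) {
    // Center of the frame lands where the camera was pointing;
    // a real stitcher would refine this guess with feature matching.
    var centerX = frame.heading * pixelsPerDegree;
    var frameWidth = cameraFovDegrees * pixelsPerDegree;
    return { image: frame.image, left: centerX - frameWidth / 2 };
  });
}

console.log(placeFrames([
  { image: 'north.jpg', heading: 0 },
  { image: 'east.jpg', heading: 90 }
]));
```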

Let me know if anything like that already exists or if you’re interested in giving it a try – I’d pay for such an app for sure! I can even do that in advance using kickstarter.com if you like ;)

Comments are always welcome! And don’t go easy on my feelings, tell me what’s wrong here ;)

Concepts: Automated site speed-up service. Really!

First, a short intro: I’ve been thinking about what to do with all the ideas I keep coming up with, and I’d like to try posting blog entries under a “Concepts” category. I’ll accept comments and will write additions to each concept there as well. We’ll see what it’ll be like ;)

On my way home today, I was thinking again about asset pre-loading (example, example with inlining) after the page’s on-load event (for faster subsequent page loads) for ShowSlow, and I realized that it can be built as a very good “easily installable with one line of JavaScript” service!

I think all the key components for this technology already exist!

First, we need to know what to pre-load, and here comes the Google Analytics API and its Content / ga:nextPagePath dimension, which will give us the probabilities for the next pages users visit.
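
Here is a sketch of the data crunching once the rows are in hand (fetching from the API itself is omitted, and the row format is an assumption for the example):

```javascript
// Sketch: turn GA (pagePath, nextPagePath, pageviews) rows into
// next-page probabilities. The `rows` array stands in for data already
// fetched from the Google Analytics API.

function nextPageProbabilities(rows, currentPath) {
  var transitions = rows.filter(function (row) {
    return row.pagePath === currentPath;
  });
  var total = transitions.reduce(function (sum, row) {
    return sum + row.pageviews;
  }, 0);
  if (total === 0) return [];
  return transitions.map(function (row) {
    return { path: row.nextPagePath, probability: row.pageviews / total };
  }).sort(function (a, b) { return b.probability - a.probability; });
}

var rows = [
  { pagePath: '/', nextPagePath: '/pricing', pageviews: 700 },
  { pagePath: '/', nextPagePath: '/blog', pageviews: 300 }
];
console.log(nextPageProbabilities(rows, '/'));
// -> [{ path: '/pricing', probability: 0.7 }, { path: '/blog', probability: 0.3 }]
```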

Now that we have the page URLs, we need to understand which assets to load from those pages, and that can be solved by running a headless Firefox with Firebug and the NetExport extension configured to auto-fire HAR packages at beacons on the server.

A HAR file contains all the assets and all kinds of useful information about their contents, so the tool can make arbitrarily complex decisions about picking the right URLs to pre-load, from a simple “download all JS and CSS files” to “only download small assets that have infinite expiration set” and so on (this can be the key to the company’s secret ingredient that is hard to replicate). This step should be done on a periodic basis, as running it in real time is just unrealistic.
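
For illustration, here is a sketch of the simplest asset-picking policy mentioned above, applied to the standard HAR structure; the size and expiry thresholds are made-up tuning knobs:

```javascript
// Sketch: pick pre-load candidates from a HAR capture using the policy
// "small JS/CSS assets with far-future expiration". The HAR structure
// (log.entries, request.url, response) is standard; the thresholds
// are made-up tuning knobs.

var MAX_SIZE_BYTES = 50 * 1024;
var MIN_TTL_SECONDS = 30 * 24 * 3600; // roughly a month

function pickAssets(har) {
  return har.log.entries.filter(function (entry) {
    var response = entry.response;
    var type = response.content.mimeType || '';
    var isCode = type.indexOf('javascript') !== -1 || type.indexOf('css') !== -1;

    var expiresHeader = response.headers.filter(function (h) {
      return h.name.toLowerCase() === 'expires';
    })[0];
    var ttl = expiresHeader
      ? (new Date(expiresHeader.value) - new Date()) / 1000
      : 0;

    return isCode &&
           response.content.size <= MAX_SIZE_BYTES &&
           ttl >= MIN_TTL_SECONDS;
  }).map(function (entry) {
    return entry.request.url;
  });
}
```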

The last piece is probably the most trivial: an actual script tag that asynchronously loads code from the third-party server with the current page’s URL as a parameter, which in turn post-loads all the assets into hidden image objects or something similar to prevent asset execution (for JS and CSS).
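
That part could look roughly like this; the service hostname and endpoint are placeholders, and the second half shows what the downloaded script itself might do:

```javascript
// The "one line" customers paste (shown expanded for readability).
// speedup.example.com and its /preload.js endpoint are placeholders.
(function () {
  var s = document.createElement('script');
  s.async = true;
  s.src = 'http://speedup.example.com/preload.js?page=' +
          encodeURIComponent(location.href);
  document.getElementsByTagName('head')[0].appendChild(s);
})();

// What the downloaded script might do with the asset list the server
// computed for this page: wait for onload, then warm the cache using
// Image objects so JS and CSS are fetched but never executed.
function preload(urls) {
  window.addEventListener('load', function () {
    urls.forEach(function (url) {
      new Image().src = url;
    });
  });
}
```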

So, all a user will have to provide is the site’s homepage URL and OAuth approval for GA data import. After that, the data will be periodically re-synced and the site re-crawled for constant improvement.

Some additional calls on the pages (e.g. at the top of the page and at the end of the page) can measure load times to close the feedback loop for the asset-picking optimization algorithm.
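
For example, the measurement could use the classic trick of comparing inline timestamps and reporting the difference with an image beacon (the beacon URL is a placeholder):

```javascript
// Sketch of the feedback-loop measurement. A timestamp taken at the top
// of the page is compared against onload, and the difference is reported
// back via an image beacon.

// At the very top of the <head>:
var pageStart = new Date().getTime();

// At the end of the page:
window.addEventListener('load', function () {
  var loadTime = new Date().getTime() - pageStart;
  new Image().src = 'http://speedup.example.com/beacon?page=' +
                    encodeURIComponent(location.href) +
                    '&t=' + loadTime;
});
```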

It could even be a great service provided by Google Analytics themselves, e.g. “for $50/mo we will not only measure your site, but speed it up as well” – they already have the data and the tags in people’s pages; the only things left are crawling and data crunching, which they do quite successfully so far.

Anyone want to hack it together? Please post your comments! ;)