robjsoftware.org

A blog about software – researching it, developing it, and contemplating its future.


The Future Is Further Than You Think


I failed to give a demo to Thomas Dolby tomorrow.  He’s visiting Microsoft, but my amazing music project Holofunk is currently not working, so I’ve missed a chance to show it to one of my heroes.  (I banged on it last night and it wouldn’t go, and I’ve learned from bitter experience that if it ain’t working 48 hours before go time, it’s time to bail.)

And therein lies a tale.  If I can’t have the fun with Mr. Dolby, I can at least step back and think about exactly why not, and share that with you all, many of whom have been my supporters and fans on this project for quite a few years.  Sit back and enjoy; this is going to be a bit long.

Holofunk: Amazing, yet Broken

Holofunk at its best is a pretty awesome wave-your-hands-and-sing-and-improvise-some-surprising-music experience.  I’ve always wanted to try to get it out into the world and have lots of people play with it.

But it uses the Kinect.  The Kinect is absolutely key to the whole experience.  It’s what sees you, and knows what you’re gesturing, and knows how to record video of you and not the background.  It’s a magic piece of hardware that does things nothing else can do for the price.

And the Kinect is a failure.  As astonishing as it is, it is still not quite good enough to be really reliable.  Holofunk has always had plenty of glitches and finicky interactions because of it, and really rapid motions (like video games tend to want you to make) confuse it.  So it just hasn’t succeeded in the marketplace… there are almost no new games or indeed any software at all that use it.  It doesn’t fully work with modern Windows 10 applications, and it’s not clear when it will.

Moreover, Holofunk uses an audio interface standard called ASIO, which is actually quite old.  Support for it varies widely in quality.  In fact, support for sound hardware in general varies widely in quality… my current miseries are because my original audio interface (a Focusrite Scarlett 6i6) got bricked by a bad firmware upgrade, with tech support apparently unable to help; and an attempted replacement, a TASCAM US-2×2, has buggy drivers that blue-screen when I run my heavily multi-threaded, USB-intensive application.

So the bottom line here is:  Holofunk was always more technically precarious than I ever realized.  It’s probably kind of a miracle that I got it working as well as I did.  In the two-plus years since my last post, I actually did quite a lot with it:

  • Took it to a weekend rave in California, and got to play it with many of my old school techno friends from the nineties.
  • Demonstrated it at the Seattle Mini Maker Faire, resulting in the video on holofunk.com.
  • Got invited to the 2015 NW Loopfest, and did performances with it and a whole lineup of other loopers in Seattle, Portland, and Lincoln City.
  • Played it at an underground art event sponsored by local luminary Esse Quam Videri, even getting people dancing with it for the first time!

But then Windows 10 was released, and I upgraded to it, and it broke a bunch of code in my graphics layer.  I don’t hold this against Microsoft — I work for Microsoft in the Windows org, and I know that as hard as Microsoft works to keep backwards compatibility, there are just a lot of technologies that simply get old.  So after some prolonged inertia in the first half of 2016, I finally managed to get the graphics fixed up… but then my audio hardware problems started.

It’s now clear to me that:

  • Holofunk is an awesome and futuristic experience that really does represent a new way to improvise music.
  • But it’s based on technologies that are fragile and/or obsolescing.
  • So I need to accept that, in its current form, it’s basically my baby, and that no one else is realistically going to get their hands on it.
  • Nonetheless, it is a genuine glimpse of the future.

And all of this leads me to the second half of this post:  I realized this morning that Holofunk is turning out to be a microcosm of my whole career at Microsoft.

Inventing The Future Is Only The Beginning

It’s often said about art that it cannot exist without an audience.  That is, art is a relationship between creator and audience.

I have learned over my career that a similar truth holds for technology.  Invention is only the beginning.  Technology transfer — having people use what you’ve invented — is in some ways an even harder problem than invention.

I got hired into Microsoft to work on the Midori technical incubation project.  I started in 2008, and we beavered away on an entirely new operating system (microkernel right up to web browser) for seven years, making staggering amounts of progress.  We learned what was possible in a type-safe language with no compromises to either safety or performance.

We got a glimpse of the future.

But ultimately the piper had to be paid, and finally it was time to try to transfer all this into Windows… and the gap between what was there already, and what we had created, was unbridgeable.  It’s not enough to make something new and wonderful.  You have to be able to use that new wonderful thing together with all the old non-wonderful stuff, because technology changes slowly and piece by piece (at least if you are in a world where your existing customers matter, as Microsoft very, very much is).

So now I am working on a team that is taking Midori insights and applying them incrementally, in ways that make a difference here and now.  It’s extremely satisfying work, since even though in some sense we’re reinventing the wheel, this time we get to attach it to an actual car that’s already driving down the road!  Shipping code routinely into real products is a lot more satisfying than working in isolation on something you’re unsure will actually get used.

The critical part here is that we have confidence in what we’re doing, because we know what is possible.  The invaluable Midori glimpse of the future has given us insights that we can leverage everywhere.  This is not only true for our team; the Core CLR team is working on features in .NET that are based entirely on Midori experience, for example Span<T> for accessing arbitrary memory with maximal efficiency.
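
To give a flavor of that, here is a minimal sketch of my own — not the Core CLR team’s code, and Span&lt;T&gt; itself is still evolving as I write this.  The point is that one method can read any contiguous region of memory, wherever it lives, with no copying:

using System;

class SpanFlavor
{
    // One method that can sum any contiguous region of ints, wherever it lives.
    static int Sum(ReadOnlySpan<int> region)
    {
        int total = 0;
        foreach (int value in region)
            total += value;
        return total;
    }

    static void Main()
    {
        int[] heapArray = { 1, 2, 3, 4, 5, 6, 7, 8 };
        Span<int> stackBuffer = stackalloc int[4];
        stackBuffer.Fill(2);

        // A heap-array slice and a stack buffer, accessed identically:
        // no copies, and no allocation for the views themselves.
        Console.WriteLine(Sum(heapArray.AsSpan(2, 4))); // 18
        Console.WriteLine(Sum(stackBuffer));            // 8
    }
}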

So even though it is going to take a lot longer than we originally thought, the ideas are living on and making it out into the world.

Holofunk in Metamorphosis

Microsoft has learned the lessons of the Kinect with its successor, HoloLens.  The project lead on both Kinect and HoloLens is very clear that HoloLens is going to stay baking in the oven until it is irresistibly delicious.  The Kinect launched far too soon, in his current view.  It was a glimpse of the future, and it gave confidence in what was possible, but the world — and all the software that would make its potential clear — was not ready.

So now I view Holofunk as part of the original Kinect experiment.  Holofunk in its current form may never run very well (or even at all) for anyone other than me, or on anything other than the finicky hardware I hand-pick.

But now that I’ve admitted this to myself, I am starting to have a bajillion ideas for how the overall concepts can be reworked into much more accessible forms.  For instance:

  • Why have the Kinect at all?  The core interaction in Holofunk is “grab to record, let go to start looping.”  So why not make a touchscreen looper that lets you just touch the screen to record, and stop touching to start looping?  (See the sketch just after this list.)
  • Moreover, why not make it a Universal Windows Platform application?  If I can get that working without the ASIO dependency, suddenly anyone with a touch-screen Windows device can run it.
  • I can also port it to HoloLens.  I can bring in Unity or some other HoloLens-friendly 3D engine, freeing me from the graphics-layer issues and giving me something that works on HoloLens out of the box.
  • I can start on support for networking, so multiple people could share the same touch-screen “sound space” across multiple devices.  I really would need this for HoloLens, as part of the Holofunk concept has always been that an audience can see what you are doing, but HoloLens will have no external video connections for a projector or anything.  So a separate computer (perhaps with a Kinect!) will need to be involved to run the projector, and that means networking.
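
To make that first idea concrete, here is a minimal sketch of the touch looper’s core state machine.  Everything here is hypothetical — invented names, not actual Holofunk code:

using System.Collections.Generic;

public enum LoopState { Idle, Recording, Looping }

// Press to record, release to loop: the whole interaction in one small class.
public class TouchLoop
{
    public LoopState State { get; private set; } = LoopState.Idle;
    private readonly List<float[]> _slices = new List<float[]>();

    // Finger down: start capturing incoming audio.
    public void OnTouchDown()
    {
        if (State == LoopState.Idle)
        {
            _slices.Clear();
            State = LoopState.Recording;
        }
    }

    // Called once per incoming audio buffer while recording.
    public void OnAudioBuffer(float[] samples)
    {
        if (State == LoopState.Recording)
            _slices.Add((float[])samples.Clone());
    }

    // Finger up: the captured region immediately starts looping.
    public void OnTouchUp()
    {
        if (State == LoopState.Recording)
            State = LoopState.Looping;
    }

    public IReadOnlyList<float[]> RecordedSlices => _slices;
}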

All of these ideas are much, much more feasible given my existing Holofunk code (which has a bunch of helpful stuff for managing streams of multimedia data over time), and my Holofunk design experience (which has taught me more about gestural interface design than many people have ever learned, all of which will be immediately applicable to a HoloLens version).
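
The kind of stream abstraction I mean looks roughly like this — a loose sketch with invented names, not the actual code:

using System;
using System.Collections.Generic;

// An append-only stream of timestamped slices: audio today, video tomorrow.
public class TimedStream<TSample>
{
    private readonly List<KeyValuePair<TimeSpan, TSample[]>> _slices =
        new List<KeyValuePair<TimeSpan, TSample[]>>();

    public void Append(TimeSpan time, TSample[] slice)
    {
        _slices.Add(new KeyValuePair<TimeSpan, TSample[]>(time, slice));
    }

    // Return the slice in effect at a given moment (linear scan for clarity;
    // a real implementation would binary-search).
    public TSample[] SliceAt(TimeSpan time)
    {
        TSample[] result = null;
        foreach (var pair in _slices)
        {
            if (pair.Key <= time) result = pair.Value;
            else break;
        }
        return result;
    }
}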

I’ve had a glimpse of the future.  It’s a fragile glimpse, and one which I can’t readily share.  But now that I’ve accepted that, I can look towards the next versions, which if nothing else will be much easier for the rest of the world to play with.

Holofunk is going into a cocoon, and what emerges is going to be something quite different.

Thanks to everyone who’s enjoyed and supported me in this project so far — it’s brought me many new friends and moments of genuine musical and technical joy.  I look forward to what is next… no matter how long it takes!

Written by robjellinghaus

2016/10/20 at 08:40


To John Carmack: Ship Gamer Goggles First, Then Put Faces And Bodies Into Cyberspace


John Carmack, CTO of Oculus VR, tweeted:

Everyone has had some time to digest the FB deal now. I think it is going to be positive, but clearly many disagree. Much of the ranting has been emotional or tribal, but I am interested in reading coherent viewpoints about objective outcomes. What are the hazards? What should be done to guard against them? What are the tests for failure? Blog and I’ll read.

I have already blogged on this but will make this a more focused response to John specifically.

Here are my objective premises:

  • VR goggles, as currently implemented by the Rift, conceal the face and prevent full optical facial / eye capture.
  • VR goggles, ACIBTR, conceal the external environment — in other words, they are VR, not AR.
  • Real-time person-to-person social contact is primarily based on nonverbal (especially facial) expression.
  • Gamer-style “alternate reality” experiences are not primarily social, and are based on ignoring the external environment.

Here are my conclusions:

  • A truly immersive social virtual environment must integrate accurate, low-latency, detailed facial and body capture.
    • Therefore such an environment can’t be fundamentally based on opaque VR goggles, and will require new technologies and sensor integrations.
  • Opaque VR goggles are, however, ideal for gamer-style experiences.
    • Gamer experiences have never had full facial/body capture, and are based on ignoring the external environment.
  • The more immersive such experiences are, the more people will want to participate in them.
    • This means that gratuitous mandatory social features, in otherwise unrelated VR experiences, would fundamentally break that immersion and would damage the platform substantially.
  • Goggle research and development will mostly directly benefit “post-goggle” augmented reality technology.

The hazards for Rift:

  1. If Facebook’s monetization strategy results in mandatory encounters with Facebook as part of all Rift experiences, this could break the primary thing that makes Rift compelling: convincing immersion in another reality that doesn’t much overlap with this one.
  2. If Facebook tries to build the kind of “online social environment” with Rift that we have historically known (Second Life, Google Lively, PlayStation Home, Worlds Inc., etc., etc., etc.), it will be as niche as all those others. Most importantly, it will radically fail to achieve Facebook’s ubiquity ambitions.
    • This is because true socializing requires full facial and nonverbal bandwidth, and Rift today cannot provide that. Nor can any VR/AR technology yet created, but that’s the research challenge here!
  3. If Facebook and Rift fail to pioneer the innovation necessary to deliver true augmented social reality (including controlled perception of your actual environment, and full facial and body capture of all virtual world participants), some other company will get there first.
    • That other company, and not Facebook, will truly own the future of cyberspace.
  4. If Rift fails to initially deliver a deeply immersive alternate reality platform, it will not get developers to buy in.
    • This risk seems smallest based on Rift’s technical trajectory.

What should be done to guard against them:

  1. Facebook integration should be very easy as part of the Rift platform, but must be 100% completely developer opt-in. Any mandatory Facebook integration will damage your long-term goals (creating the first true social virtual world, requiring fundamentally new technology innovation) and will further lose you mindshare among those skeptical of Facebook.
  2. Facebook should resist the temptation to build a Rift-based virtual world. I know everyone there is itching to get started on Snow Crash, and you could certainly build a fantastic one. But it would still be fundamentally for gamers, because gamers are self-selected to enjoy surreal online places that happen to be inhabited by un-expressive avatars.
    • The world has lots of such places already; they’re called MMOGs, and the MMOG developers can do a better job putting their games into Rift than Facebook can.
  3. Facebook and Rift should immediately begin a long-term research project dedicated to post-goggle technology. Goggles are not the endgame here; in a fully social cyberspace, you’ll be able to see everyone around you (including those physically next to you), faces, bodies, and all. If you really want to put your long-term money where your mouth is, shoot for the post-goggle moon.
    • Retinal projection glasses? LCD projectors inside a pair of glasses? Ubiquitous depth cameras? Facial tracking cameras? Full environment capture? Whatever it takes to really get there, start on it immediately. This may take over a decade to finally pan out, but you have the resources to look ahead that far now. This, and nothing less, is what’s going to make VR/AR as ubiquitous as Facebook itself is today.
  4. Meanwhile, of course, ship a fantastic Rift that provides gamers — and technophiles generally — with a stunning experience they’ve never had before. Sell the hardware at just over cost. Brand it with Facebook if you like, but try to make your money back on a small flat cut of title revenue (5%, as with Unreal now?), so you get paid something reasonable whether the developer wants to integrate with Facebook or not.

Tests for failure:

  1. Mandatory Facebook integration for Rift causes developers to flee Rift platform before it ships.
  2. “FaceRift” virtual world launches; Second Life furries love it, rest of world laughs, yawns, moves on.
  3. Valve and Microsoft team up to launch “Holodeck” in 2020, combining AR glasses with six Kinect 3’s to provide a virtual world in which you can stand next to and see your actual friends; immediately sell billions, leaving Facebook as “that old web site.”
  4. Initial Rift titles make some people queasy and fail to impress the others; Rift fails to sell to every gamer with a PC and a relative lack of motion sickness.

John, you’ve changed the world several times already. You have the resources now to make the biggest impact yet, but it’s got to be both a short-term (Rift) and long-term (true social AR) play. Don’t get the two confused, and you can build the future of cyberspace. Good luck.

(And minor blast from the past:  I interviewed you at the PGL Championships in San Francisco fifteen years ago.  Cheers!)

Written by robjellinghaus

2014/03/28 at 12:25


My Take On Zuckey & Luckey: VR Goggles Are (Only) For Gamers


I am watching the whole “Facebook buys Oculus Rift” situation with great bemusement.

I worked for a cyberspace company — Electric Communities — in the mid-nineties, back in the first heady pre-dot-com nineties wave of Silicon Valley VC fun.

We were building a fully distributed cryptographically based virtual world called Microcosm. In Java 1.0. On the client. In 1995. We had drunk ALL THE KOOL-AID.

[image]

(Click that for source.  For a scurrilous and inaccurate — but evocative — take on it all, read this.)

We actually got some significant parts of this working — you could host rooms and/or avatars and/or objects, and you could go from space to space using fully peer-to-peer communication. Because, you see, we envisioned that the only way to make a full cyberspace work was for it to NOT be centralized AT ALL. Instead, everyone would host their own little bits of it and they would all join together into an initially-2D-but-ultimately-3D place, with individual certificates on everything so everyone could take responsibility for their own stuff. Take that, Facebook!!!

(I still remember someone raving at the office during that job, about this new search engine called Google… the concept of “web scale” did not exist yet.)

The whole thing collapsed completely when it became clear that it was too slow, too resource-intensive, and not nearly monetizable enough. I met a few lifelong friends at that job, though, several of whom have gone on to great success elsewhere (Dalvik architect, Google ES6 spec team member, Facebook security guru…).

I also worked at Autodesk circa 1991, in the very very first era of VR goggles, back when they looked like this:

[image: VR goggles, 1989]

Look familiar?  This was from 1989.  Twenty-five frickin’ years ago.

So I have a pretty large immunity to VR Kool-Aid. I actually think that Facebook is likely to just about get their money back on this deal, but they won’t really change the world. More specifically, VR goggles in general will not really change the world.

VR goggles are a fundamentally bad way to foster interpersonal interaction, because they obscure your entire face, and make it impossible to see your expression. In other words, they block facial capture. This means that they are the exact worst thing possible for Facebook, since they make you faceless to an observer.

This then means that they are best for relatively solitary experiences that transport you away from where you are. This is why they are a great early-adopter technology for the gamer geeks of the world. We are precisely the people who have *already* done all we can to transport ourselves into relatively solitary (in terms of genuine, physical proximity) otherworldly experiences. So VR goggles are perfect for those of us who are already gamers. And they will find a somewhat larger market among people who want to experience this sort of thing (kind of like super-duper 3D TVs).

But in their current form they are never going to be the thing that makes cyberspace ubiquitous. In a full cyberspace, you will have to be able to look directly at someone else *whether they are physically adjacent or not*, and you will have to see them — including their full face, or whatever full facial mapping their avatar is using — directly. This implies some substantially different display technology — see-through AR goggles a la CastAR, or nanotech internally illuminated contact lenses, or retinally scanned holograms, or direct optical neural linkage. But strapping a pair of monitors to your eyeballs? Uh-uh. Always going to be a “let’s go to the movies / let’s hang in the living room playing games” experience; never ever going to be an “inhabit this ubiquitous cyber-world with all your friends” experience.

Maybe Zuckerberg and Luckey together actually have the vision to shepherd Oculus through this goggle period and into the final Really Immersive Cyberworld. But my guess is the pressures of making enough money to justify the deal will lead to various irritating wrongnesses. Still, I expect they will ship a really great Oculus product and I may even buy one if the games are cool enough… but there will be goggle competitors, and it’s best to think of ALL opaque goggle incarnations as gamer devices first and foremost.

So why did Zuckerberg do this deal? I think it’s simple: he has Sergey Brin envy. Google has its moon-shot projects (self-driving cars, humanoid robots, Google Glass). Zuckerberg wants a piece of that. It’s more interesting than the Facebook web site, and he is able to get his company to swing $2 billion on a side project, so why not? Plus he and Luckey are an epic mutual admiration society. That psychology alone is sufficient explanation. It does lead to the amusingly absurd paradox of Facebook spending $2 billion on something that hides users’ faces, but such is our industry, and such has it ever been.

Realistically, the jury is still out on whether Oculus would have been better off going it alone (retaining the love of their community and their pure gaming focus, but needing to raise more and more venture capital to ramp up production), or going with Facebook (no more worries about money, until Facebook’s ad-based business model starts to screw everything up). The former path might have cratered before launch, or succumbed to deeper-pocketed competitors. The latter path has every chance of going wrong — if Facebook handles things as they did their web gaming efforts, it definitely will. We will see whether Zuckerberg can keep Facebook’s hands off of Oculus or not. I am sadly not sanguine… on its face, this is a bad acquisition, since it does not at all play to the technology’s gaming strengths.

It’s worth noting Imogen Heap’s dataglove project on Kickstarter.  I was skeptical that they would get funded, but their AMA on Reddit convinced me they are going about it the best way they can, and they have a clear vision for how the things should work.  So now I say, more power to them!  Go support them!  They are definitely purely community-driven, the way Oculus was until yesterday….

Written by robjellinghaus

2014/03/26 at 08:48

A Publication! In Public!


The team I work on doesn’t get much publicity.  (Yet.)

But recently some guys I work with submitted a paper to OOPSLA, and it was accepted, so it’s a rare chance for me to discuss it:

Uniqueness and Reference Immutability for Safe Parallelism

This is really groundbreaking work, in my opinion.  Introducing “readable”, “writable”, “immutable”, and “isolated” into C# makes it a quite different experience.
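
Roughly, the four permissions look like this — a toy sketch in the paper’s extension notation, not standard C#, and simplified:

writable Foo w = new Foo();    // an ordinary mutable reference (the default)
readable Foo r = w;            // may read through r, but never write through it
isolated Foo iso = new Foo();  // externally unique: the only reference into its object cluster
immutable Foo i = consume iso; // frozen forever; the consume destroys iso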

When doing language design work like this, it’s hard to know whether the ideas really hold up.  That’s one epic advantage of this particular team I’m on:  we use what we write, at scale.  From section 6 of the paper:

A source-level variant of this system, as an extension to C#, is in use by a large project at Microsoft, as their primary programming language. The group has written several million lines of code, including: core libraries (including collections with polymorphism over element permissions and data-parallel operations when safe), a webserver, a high level optimizing compiler, and an MPEG decoder. These and other applications written in the source language are performance-competitive with established implementations on standard benchmarks; we mention this not because our language design is focused on performance, but merely to point out that heavy use of reference immutability, including removing mutable static/global state, has not come at the cost of performance in the experience of the Microsoft team. In fact, the prototype compiler exploits reference immutability information for a number of otherwise-unavailable compiler optimizations….

Overall, the Microsoft team has been satisfied with the additional safety they gain from not only the general software engineering advantages of reference immutability… but particularly the safe parallelism. Anecdotally, they claim that the further they push reference immutability through their code base, the more bugs they find from spurious mutations. The main classes of bugs found are cases where a developer provided an object intended for read-only access, but a callee incorrectly mutated it; accidental mutations of structures that should be immutable; and data races where data should have been immutable or thread local (i.e. isolated, and one thread kept and used a stale reference).

It’s true, what they say there.

Here’s just a bit more:

The Microsoft team was surprisingly receptive to using explicit destructive reads, as opposed to richer flow-sensitive analyses (which also have non-trivial interaction with exceptions). They value the simplicity and predictability of destructive reads, and like that it makes the transfer of unique references explicit and easy to find. In general, the team preferred explicit source representation for type system interactions (e.g. consume, permission conversion).

The team has also naturally developed their own design patterns for working in this environment. One of the most popular is informally called the “builder pattern” (as in building a collection) to create frozen collections:

isolated List<Foo> list = new List<Foo>();   // isolated: externally unique reference
foreach (var cur in someOtherCollection) {
    isolated Foo f = new Foo();
    f.Name = cur.Name; // etc ...
    list.Add(consume f);                     // consume: destructive read; f is unusable afterward
}
immutable List<Foo> immList = consume list;  // consuming the isolated list freezes it forever

This pattern can be further abstracted for elements with a deep clone method returning an isolated reference.
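
One plausible shape for that abstraction, as my own loose sketch in the same notation (the DeepClone signature is invented, not from the paper):

// Clone every element of a readable list into a fresh isolated list,
// which the caller can then freeze exactly as in the builder pattern above.
isolated List<T> CloneAll<T>(readable List<T> source) {
    isolated List<T> building = new List<T>();
    foreach (readable T cur in source) {
        building.Add(cur.DeepClone()); // DeepClone: returns an isolated T
    }
    return consume building;
}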

I firmly expect that eventually I will be able to share much more about what we’re doing.  But if you have a high-performance systems background, and if the general concept of no-compromises performance plus managed safety is appealing, our team *is* hiring.  Drop me a line!

Written by robjellinghaus

2012/11/02 at 14:06


SlimDX vs. SharpDX


Phew!  Been very busy around here.  The Holofunk Jam, mentioned last post, went very well — met a few talented local loopers who gave me invaluable hands-on advice.  Demoed to the Kinect for Windows team and got some good feedback there.  My sister has requested a Holofunk performance at her wedding in Boston near the end of August, and before that, the Microsoft Garage team has twisted my arm to give another public demo on August 16th.  Plus I had my tenth wedding anniversary with my wife last weekend.  Life is full, full, FULL!  And I’m in no way whatsoever complaining.

Time To Put Up, Or Else To Shut Up

One piece of feedback I’ve gotten consistently is that darn near everyone is skeptical that this thing can really be useful for full-on performance.  “It’s a fun Kinect-y toy,” many say, “but it needs a lot of work before you can take it on stage.”  This is emerging as the central challenge of this project: can I get it to the point where I can credibly rock a room with it?  If I personally can’t use it to funk out in an undeniable and audience-connected manner, it’s for damn sure no one else will be able to either.

So it’s time to focus on performance features for the software, and improved beatboxing and looping skills for me!

The number one performance feature it needs is dual monitor support.  Right now, when you’re using Holofunk, you’re facing a screen on which your image is projected.  The Kinect is under the screen, facing you, and the screen shows what the Kinect sees.

This is standard Kinect videogame setup — you are effectively looking at your mirrored video image, which moves as you do.  It’s great… if you’re the only one playing.

But if you have an audience, then the audience is looking at your back, and you’re all (you and the audience) looking at the projected screen.

Like this — and BEHOLD MY PROGRAMMER ART!

No solo performer wants their back to the audience.

So what I need is dual screen support.  I should be able to have Holofunk on my laptop.  I face the audience; the laptop is between me and the audience, facing me; I’m watching the laptop screen and Holofunking on it.  The Kinect is sitting by the laptop, and the laptop is putting out a mirror-reversed image for the projection screen behind me, which the audience is watching.

Like this:

With that setup, I can make eye contact with the audience while still driving Holofunk, and the audience can still see what I’m doing with Holofunk.
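
The mirroring itself is the easy part.  Here’s a rough XNA-style sketch (hypothetical names, not real Holofunk code); as the next section explains, the real obstacle is that XNA won’t give me the second window at all:

// Requires Microsoft.Xna.Framework and Microsoft.Xna.Framework.Graphics.
void DrawBothViews(SpriteBatch spriteBatch, Texture2D kinectVideo,
                   Rectangle performerView, Rectangle projectorView)
{
    spriteBatch.Begin();
    // Performer-facing view, drawn as-is.
    spriteBatch.Draw(kinectVideo, performerView, Color.White);
    // Projector view, mirror-reversed for the audience behind me.
    spriteBatch.Draw(kinectVideo, projectorView, null, Color.White,
                     0f, Vector2.Zero, SpriteEffects.FlipHorizontally, 0f);
    spriteBatch.End();
}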

So, that’s the number one feature… probably the only major feature I’ll be adding before next month’s demos.

The question is, how?

XNA No More

Right now Holofunk uses the XNA C# graphics library from Microsoft.  Problem is, this seems defunct; it is stuck on DirectX 9 (a several-year-old graphics API at this point), and there is no indication it will ever be made available for Windows 8 Metro.

I looked into porting Holofunk to C++.  It was terrifying.  I’ll be sticking with C#, thanks.  But not only is XNA a dead end, it doesn’t support multiple displays!  You get only one game window.

So I’ve got to switch sooner rather than later.  The two big contenders in the C# world are SlimDX and SharpDX.

In a nutshell:  SlimDX has been around for longer, and has significantly better documentation.  SharpDX is more up-to-date (it already has Windows 8 support, unlike SlimDX), and is “closer to the metal” (it’s more consistently generated directly from the DirectX C++ API definitions).

As always in the open source world, one of the first things to check — beyond “do the samples compile?” and “is there any API documentation?” — is how many commits have been made recently to the projects’ source trees.

In the SlimDX case, there was a flurry of activity back in March, and since then there has been very little activity at all.  In the SharpDX case, the developer is an animal and is frenetically committing almost every day.

SharpDX’s most recent release is from last month.  SlimDX’s is from January.

Two of the main SlimDX developers have moved on (as explicitly stated in their blogs), and the third seems AWOL.

Finally, I found this thread about possible directions for SlimDX 2, and it doesn’t seem that anyone is actively carrying the torch.

So, SharpDX wins from a support perspective.  The problem for me is, it looks like a lot of DirectX boilerplate compared to XNA.

Just now, though, I turned up a reference to another project, ANX — an XNA-compatible API wrapper around SharpDX.  That looks just about perfect for me.  So I will be investigating ANX on top of SharpDX first; if that falls through, I’ll go with SharpDX alone.

This is daunting simply because it’s always a bit of a drag to switch to a new framework — they all have learning curves, and XNA’s was easy, but SharpDX’s won’t be.  So I have to psych myself up for it a bit.  The good news, though, is once I have a more modern API under the hood, I can start doing crazy things like realtime video recording and video texture playback… that’s a 2013 feature at the earliest, by the way 🙂

Written by robjellinghaus

2012/07/18 at 23:54


Holofunkarama


Life has been busy in Holofunk land!  First, a new video:

While my singing falters at one point, the overall concept is finally, actually there:  you can layer things in a reasonably tight way, and you can tweak your sounds in groups.

Holofunk Jam, June 23rd

I have no shortage of feature ideas, and I’m going to be hacking on this thing for the foreseeable future, but in the near term:  on June 23rd I’m organizing a “Holofunk Jam” at the Seattle home of some very generous friends.  I’m going to set up Holofunk, demo it, ask anyone & everyone to try it, and hopefully see various gadgets, loopers, etc. that people bring over.  It would be amazing if it turned into a free-form electronica jam session of some kind!  If this sounds interesting to you, drop me a line.

Demoing Holofunk

There have been two public Holofunk demos since my last post, both of them enjoyable and educational.

Microsoft had a Hardware Summit, including the “science fair” I mentioned in my last post.  I wound up winning the “Golden Volcano” award in the Kinect category.  GO ME!  This in practice meant a small wooden laser-etched cube:

This was rather like coming in third out of about eight Kinect projects, which is actually not bad as the competition was quite impressive — e.g. a team from India doing Kinect sign-language recognition.  The big lesson from this event:  if someone is really interested in your project, don’t just give them your info, get their info too.  I would love to follow up with some of the people who came, but they seem unfindable!

Then, last weekend, the Maker Faire did indeed happen — and shame on me for not updating this blog in real time with it.  I was picked as a presenter, and things went quite well, no mishaps to speak of.  In fact, I opened with a little riff, and when it ended I got spontaneous applause!  Unexpected and appreciated.  (They also applauded at the end.)

I videoed it, but did not record the PA system, which was a terrible failure on my part; all the camera picked up was the roar of the people hobnobbing around the booths in the presentation room.  Still, it was a lot of fun and people seemed to like it.

My kids had a great time at the faire, too.  Here they are watching (and hearing) a record player, for the very first time in their lives:

True children of the 21st century 🙂

Coming Soon

I’ll be making another source drop to http://holofunk.codeplex.com soon — trying to keep things up to date.  And the next features on the list:

  • effect selection / menuing
  • panning
  • volume
  • reverb
  • delay
  • effect recording
  • VST support

Well, maybe not that last one quite yet, but we’ll see.  And of course practice, practice, practice!

Written by robjellinghaus

2012/06/09 at 00:20


Science fair time!


Holofunk has been externally hibernating since last September; first I took a few months off just on general principles, and since then I’ve been hacking on the down-low.  In that time I’ve fixed Holofunk’s time sync issue (thanks again to the stupendous free support from the BASS Audio library guys).  I’ve added a number of visual cues to help people follow what’s happening, including beat meters to show how many beats long each track is, and better track length setting — now tracks can only be 1, 2, or a multiple of 4 beats long, making it easy to line things up.  Generally I’m in a very satisfying hacking groove now.
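
That length rule is simple enough to state in code — a minimal sketch, hypothetical rather than the actual implementation:

// Round a recorded length up to a legal track length: 1, 2, or a multiple of 4 beats.
static int QuantizeTrackBeats(int recordedBeats)
{
    if (recordedBeats <= 1) return 1;
    if (recordedBeats <= 2) return 2;
    return ((recordedBeats + 3) / 4) * 4; // round up to the next multiple of 4
}
// e.g. 3 beats -> 4, 5 beats -> 8, 8 beats -> 8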

And today Holofunk re-emerges into the public eye — I’m demoing at a Microsoft internal event dubbed the Science Fair, coinciding with Microsoft’s annual Hardware Summit.  Root for me to win a prize if you have any good karma to spare today 🙂  I’ll post again in a day or two with an update on how it went.

I’ve also applied to be a speaker at the Seattle Mini Maker Faire the first weekend in June — will find out about that within a week.  If that happens, then I’ll spread the word as to exactly when I’ll be presenting!

Written by robjellinghaus

2012/05/10 at 06:33
