Software Development

One of my biggest gripes with the App Store is not being able to contact customers who leave a review directly.

A tweet today (August 13th, 2014) by @fafner (developer of the App MindNode), in which he asked whether developers read reviews on the App Store, made me think about this some more.

The one thing I really miss about selling Apps on my own, outside of the App Store, is the contact you have with your customers.
If there was a problem with one of my Apps, they had to contact me directly, since there was no other way. And we could take things from there, have an ongoing stream of communication.

With the App Store, customers are inclined to leave a review of my App with feature requests, bug reports or more general criticism rather than contact me directly – even though I make it very easy to write to me through my website and the Apps themselves.

While I really appreciate every review, there’s nothing more frustrating than getting a review requesting, say, a feature that, unknown to the reviewer, has already been implemented, and not being able to tell them about it. (Replying to the review with a review of your own App is possible, but there’s little to no chance the customer will ever read it. Plus, you’d have to rate your own App to do so, which opens up an entirely different can of worms. In short: don’t do it.)
Or even worse, you get a bug report and you can’t contact them for more information in order to reproduce it.

What I currently do when this happens is fire up Google and search for the reviewer’s nickname – a more often than not lengthy procedure. When a Twitter, Facebook, YouTube, tumblr (and so on) account finally comes up, I use that to contact them, well aware it might not even be them – it has happened before that I contacted someone and they were the completely wrong person. It’s embarrassing, but they usually understand and think nothing of it.

It takes a lot of time and nerves, and it’s only worth it if you end up with the right person. Otherwise, that time could have been spent so much better.
Additionally, you’re never entirely sure they even check their messages on YouTube, for example.

I understand why the App Store doesn’t allow for direct contact from the developer to the customer. First and foremost, it’s a privacy issue and that’s more important now than it ever was.
Threaded comments on the App Store seem unappealing to me as well. Things could escalate quickly if the customer or developer gets upset for some reason, so threads would have to be moderated somehow. Also, other people could chime in on what was meant to be two-way communication. Unfavorable as well.

Nevertheless, a solution on Apple’s part would be welcome. In fact, I’ve filed bug reports (radars) with Apple on how they could improve this.

They are based on the premise that it’s not the developer who initiates contact, but the customer (and why wouldn’t they want to – they bought the App, they want it to work).

One (rdar://13367865) is to show a “Contact Developer” button when a user selects two stars or fewer for a review on the App Store. It might also be triggered by keywords (crap, useless, sh*t come to mind ;)).
So the user selects one star and, before they can click Send, another button appears directly next to it, asking the reviewer to contact the developer. Problem solved.

The other one (rdar://13379347) is for crashes. You know how, when an Apple App crashes, you get a text area to supply some more information and send that to Apple?
This could also be done for third-party Apps.
The developer could supply their support email address in the Info.plist (a collection of metadata for the App, like version, copyright info, etc.) in the App’s bundle.
When a crash happens, you get a crash report window. In addition to the “Reopen” and “Cancel” buttons, there could be a “Contact Developer” button, shown if the email has been supplied in the plist. You click it and it opens up a new mail message with the crash report already attached, leaving room for more info (or it’s done in-window, like the Apple App crash dialog).
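
For illustration, here’s a minimal sketch of what the App-side part could look like today: reading a support email from the Info.plist and pre-filling a mail message. The key name “SupportEmail”, the function name and the strings are all made up for this example; the actual crash-reporter integration would, of course, have to come from Apple.

#import <Cocoa/Cocoa.h>

// Hypothetical: opens a pre-filled support mail if the App declares a
// "SupportEmail" key (a made-up key name) in its Info.plist.
static void ESSContactDeveloper(NSString *subject, NSString *body)
{
    NSString *email = [[NSBundle mainBundle] objectForInfoDictionaryKey:@"SupportEmail"];
    if (email == nil)
        return; // developer didn't opt in – no "Contact Developer" button

    NSCharacterSet *allowed = [NSCharacterSet URLQueryAllowedCharacterSet];
    NSString *urlString = [NSString stringWithFormat:@"mailto:%@?subject=%@&body=%@",
                           email,
                           [subject stringByAddingPercentEncodingWithAllowedCharacters:allowed],
                           [body stringByAddingPercentEncodingWithAllowedCharacters:allowed]];
    [[NSWorkspace sharedWorkspace] openURL:[NSURL URLWithString:urlString]];
}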

Developers do get crash reports through Apple’s iTunes Connect, but that’s all they get. There’s no contact information attached (again because of privacy issues, of course).

It surprises me that more work hasn’t been done in that area of the App Store.

—-
My name is Matt, and I’m the developer of Eternal Storms Software. If you’d like to comment, you can catch me on Twitter (@eternalstorms) or by eMail.


(I apologize for the delay with this post – I had to work on updates for Yoink, ScreenFloat and Briefly, and of course lots of work happened on flickery 😉).

To finally continue my promised series of blog posts about flickery 2.0’s development, here I am, back with: Uploading.
A minor disclaimer before we start: any interface you see below is nowhere near final and is subject to change.

If there is one part that’s integral to any flickr client, it has to be uploading. So with 2.0, I want to make darn sure that it’s the best you can get.
Let’s dive in! 

Activity (Background Uploading)

In flickery 1.x, uploading blocked the entire app, so you couldn’t do anything but wait. Sure, you got a beautiful sheet window to look at, with the currently uploading image slowly transitioning from blurry to focused to represent the upload progress, but it wasn’t very user-friendly.
I wanted to change that with 2.0, so uploads now happen in the background, in the app-wide Activities panel, where you can track the progress and potential errors of Up- and Downloads.

[Image: Activity view]

It’s out of the way; you can close it and leave it running in the background (and of course the little image also transitions from blurry to focused 😉). When an activity (Up- or Download) finishes, you’ll get an OS X notification letting you know it’s done.

[Image: Notification]
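
Posting that notification is straightforward with NSUserNotification (OS X 10.8 and up). A minimal sketch – the wording and the activityName parameter are just placeholders:

#import <Foundation/Foundation.h>

// Posts an "activity finished" notification through Notification Center.
static void ESSPostActivityFinishedNotification(NSString *activityName)
{
    NSUserNotification *note = [[NSUserNotification alloc] init];
    note.title = @"Upload finished";
    note.informativeText = [NSString stringWithFormat:@"“%@” has finished uploading to flickr.", activityName];
    note.soundName = NSUserNotificationDefaultSoundName;
    [[NSUserNotificationCenter defaultUserNotificationCenter] deliverNotification:note];
}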

Another advantage of background uploads is that you can queue them up. You can begin one Upload and, while that is running, work on the next. When you start that one, it will wait for any previous Uploads to finish and then begin uploading.
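
One simple way to get that “strictly one after another” behavior is a serial NSOperationQueue – a sketch, with the upload blocks left empty:

// In real code, keep the queue around (e.g. in a property) instead of a local variable.
NSOperationQueue *uploadQueue = [[NSOperationQueue alloc] init];
uploadQueue.maxConcurrentOperationCount = 1; // Uploads run strictly one after another

[uploadQueue addOperationWithBlock:^{
    // ...perform the first Upload...
}];
[uploadQueue addOperationWithBlock:^{
    // ...starts only once the first block has finished...
}];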

Adding to Photosets and Groups

Adding Uploads to Photosets or Groups is not very straightforward in flickery 1.x. You can’t assign photosets right from the Upload view; you have to wait until the upload finishes and the button bar at the bottom changes to “Add to Set” and “Add to Group”. By the time the upload is finished, you might not remember which photos you wanted to add to which photoset, or you forget altogether. It’s not really a “set it and forget it” approach.

This is going to change in flickery 2.0. Here, you’ll be able to set everything up beforehand, and once the upload itself is done, the photos are added to the photosets and groups you’ve set. So you can set everything up and leave it to flickery to do its thing.

[Screenshot of flickery]

Location and Maps

In flickery 2.0, it will be even easier to assign locations to selected items (both photos and videos) using Apple’s MapKit features.

[Screenshot of flickery]

The big improvement here is that you can now search for locations and flickery will geocode the search term into a latitude/longitude pair that the map can understand. If you have a camera without GPS and want to assign a specific location, this comes in very handy.
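
Under the hood, that kind of location search is what CLGeocoder’s forward geocoding does. A minimal sketch, with the search string hard-coded and error handling reduced to the bare minimum:

#import <CoreLocation/CoreLocation.h>

// Turns a location search string into coordinates.
CLGeocoder *geocoder = [[CLGeocoder alloc] init];
[geocoder geocodeAddressString:@"Vienna, Austria"
             completionHandler:^(NSArray *placemarks, NSError *error) {
                 CLPlacemark *placemark = placemarks.firstObject;
                 if (placemark == nil)
                     return; // nothing found (or an error occurred)
                 CLLocationCoordinate2D coord = placemark.location.coordinate;
                 NSLog(@"lat: %f, lon: %f", coord.latitude, coord.longitude);
             }];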

Sorting

I’m adding better sorting to flickery with version 2.0. As you can see in the screenshot below, you can sort by Date and Title (booooring), but also by location, for example, based on the distance from the selected item. Items without a location are sent to the end of the list.

[Screenshot of flickery]
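
Distance-based sorting like that can be done with a comparator over CLLocation distances. A sketch – ESSItem, its location property, and the items/selectedItem variables are made-up names for whatever the item objects end up being:

#import <CoreLocation/CoreLocation.h>

// Sorts items by distance from the selected item; items without a location go last.
CLLocation *reference = selectedItem.location;
NSArray *sorted = [items sortedArrayUsingComparator:^NSComparisonResult(ESSItem *a, ESSItem *b) {
    if (a.location == nil && b.location == nil) return NSOrderedSame;
    if (a.location == nil) return NSOrderedDescending; // no location → end of list
    if (b.location == nil) return NSOrderedAscending;
    CLLocationDistance da = [a.location distanceFromLocation:reference];
    CLLocationDistance db = [b.location distanceFromLocation:reference];
    if (da < db) return NSOrderedAscending;
    if (da > db) return NSOrderedDescending;
    return NSOrderedSame;
}];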

Using Google’s Maps APIs (because Apple’s reverse-geocoding is a little bit limited), flickery will also let you sort by Country, City and ZIP/Postal Code.
This comes in very handy when you’re deciding which photosets to add to, for example, or just for the general order of the uploaded items and how they will appear on flickr.

Sadly, Google’s APIs are limited as well (2,000 requests per day per IP address), but that’s better than Apple’s limitation (which gives out after about 50 consecutive requests – bug report filed! – and yes, I am not geocoding all items at the same time but one after the other, as “demanded” in Apple’s CoreLocation documentation). But I don’t think too many users will run into issues with that.
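
For the curious, “one after the other” with CLGeocoder can look something like this – a sketch, where ESSItem and its location/placemark properties are, again, made-up names:

#import <CoreLocation/CoreLocation.h>

// Reverse-geocodes items strictly sequentially: the next request only starts
// once the previous one has returned.
static void ESSReverseGeocodeItems(NSMutableArray *remainingItems, CLGeocoder *geocoder)
{
    if (remainingItems.count == 0)
        return;

    ESSItem *item = remainingItems.firstObject;
    [remainingItems removeObjectAtIndex:0];

    [geocoder reverseGeocodeLocation:item.location
                   completionHandler:^(NSArray *placemarks, NSError *error) {
                       item.placemark = placemarks.firstObject; // may be nil on error
                       ESSReverseGeocodeItems(remainingItems, geocoder);
                   }];
}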

Filtering

Dealing with many items for upload, it can become overwhelming trying to keep track of everything. With flickery 2.0 I want to simplify that by implementing filters, so you can quickly view only photos or videos, items with or without description, tags, location and if you’ve assigned a photoset/group or not.

[Image: Menu]

That will make it easy for you to keep on top of everything and make sure you’ve done everything you had planned for your upload.
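
Filters like these map nicely onto NSPredicate. A minimal sketch, assuming a hypothetical ESSUploadItem class with isVideo and tags properties, and allItems as the array of items to filter:

// Only videos, and only items that don't have any tags yet.
NSPredicate *videosOnly = [NSPredicate predicateWithFormat:@"isVideo == YES"];
NSPredicate *withoutTags = [NSPredicate predicateWithFormat:@"tags.@count == 0"];

NSArray *videos = [allItems filteredArrayUsingPredicate:videosOnly];
NSArray *untagged = [allItems filteredArrayUsingPredicate:withoutTags];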

Importing unsupported files

Just like in flickery 1.x, flickery 2.0 will let you quickly convert RAW files (which flickr doesn’t accept) to JPEG files, trying to keep as much EXIF data as possible (which is new in 2.0).

[Image: Import]
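
Keeping the metadata during such a conversion can be done with ImageIO, by copying the source image’s properties into the destination. A sketch of one possible approach (not necessarily how flickery does it):

#import <Foundation/Foundation.h>
#import <ImageIO/ImageIO.h>
#import <CoreServices/CoreServices.h>

// Converts a RAW file to JPEG, re-attaching the original EXIF/GPS properties.
static BOOL ESSConvertRAWToJPEG(NSURL *rawURL, NSURL *jpegURL)
{
    CGImageSourceRef source = CGImageSourceCreateWithURL((__bridge CFURLRef)rawURL, NULL);
    if (source == NULL)
        return NO;

    CFDictionaryRef properties = CGImageSourceCopyPropertiesAtIndex(source, 0, NULL);
    CGImageDestinationRef destination =
        CGImageDestinationCreateWithURL((__bridge CFURLRef)jpegURL, kUTTypeJPEG, 1, NULL);
    if (destination == NULL)
    {
        if (properties != NULL) CFRelease(properties);
        CFRelease(source);
        return NO;
    }

    CGImageDestinationAddImageFromSource(destination, source, 0, properties);
    BOOL success = CGImageDestinationFinalize(destination);

    if (properties != NULL) CFRelease(properties);
    CFRelease(destination);
    CFRelease(source);
    return success;
}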

Trimming videos was in flickery 1.x as well, but it’s now done with AVKit instead of QuickTime and is quicker and more reliable. It offers the same trimming interface you already know (and hopefully love) from QuickTime Player:

[Image: Trim]
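
With AVKit (OS X 10.9 and up), that trimming UI comes almost for free via AVPlayerView. A sketch, assuming a playerView outlet:

#import <AVKit/AVKit.h>

// Shows QuickTime Player's trim UI on top of the player view.
if (self.playerView.canBeginTrimming)
{
    [self.playerView beginTrimmingWithCompletionHandler:^(AVPlayerViewTrimResult result) {
        if (result == AVPlayerViewTrimOKButton)
        {
            // The trimmed range is now reflected in the player item's
            // reversePlaybackEndTime/forwardPlaybackEndTime – export it,
            // e.g. with an AVAssetExportSession.
        }
    }];
}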

Editing

I’m not sure yet about how deep I want to go into editing (if at all) with flickery 2.0. I figure, most items will come from iPhoto or Aperture anyway, already post-processed by an app that can do it better and is focused on it more than flickery ever would be.

Perhaps I will include very basic editing functionality like rotating and mirroring, but I think that will be about it.

Quick Uploads

With all this detailed uploading stuff, there needs to be a way to quickly upload something into your stream, a photoset or a group.

So with flickery 2.0, when you drag an image or video file onto your stream, a photoset or a group, it won’t fuss about; it will just directly and quickly upload it without you having to do anything else.

 

I think that’s it for this time, I’ll see you next time with some more insights on flickery’s development 🙂

Thank you for reading!
Take care and I hope to see you again next time 🙂

—-
My name is Matt, and I’m the developer of Eternal Storms Software. You can follow me on Twitter: @eternalstorms


As I am now back working on flickery after a short hiatus (sometimes you need a little distance from a project to be able to reflect on it and maybe get a new perspective on some things), I figured I’d talk about the process as I am working on it – not afterwards in a so-called “post-mortem” – while my memory is fresh and I’m “in medias res”.

Before going into what’s coming, though, allow me to look back at what’s been.

A New Beginning

Starting from scratch is usually admitting to having screwed up before. In flickery’s case, I wouldn’t say I screwed up completely. I am, however, going to say that I could have done some things better. In some parts, a lot better.

[Image: Creation date of flickery 1]

I started developing flickery 1.0 in early 2008. That’s a long time ago. I’ve become (I hope) a better programmer with a deeper understanding of how things work and are supposed to work. I have (a lot) more experience in Cocoa and a better understanding of UX and UI. And I don’t let myself get away as easily anymore with things that work OK but could be better.
This is a good time to start from scratch.

Why start from scratch?

[Image: flickery 1 crash log]

They say your mind looks the way your desk looks. If you apply that to the code base of flickery 1.x, you’d commit me. It is not a pretty sight.
It’s really hard to look at, it’s difficult to follow and understand… it’s a fine mess, to be honest.

So one part of the reason why I started from scratch is that I didn’t see the forest for the trees (the app for the code) anymore.

Additionally, things have changed a lot in OS X since 2008, which makes version 2.0 a good point to get rid of all the legacy code (individual code paths for OS X Leopard and every iteration that came after) and write clean code for OS X Mavericks and newer. (Mavericks is a no-brainer: it’s a free upgrade, everyone should go and get it.)

There are blocks, ARC and Objective-C 2.0, just to name a few. Going back to flickery 1.0’s code and re-working all the parts might very well have taken just as long as (if not longer than) starting anew. It was totally worth it.

New Back-End

[Image: ESSflickr classes in flickery 2]

Absolutely essential for flickery is its communication with flickr’s API. Incidentally, in flickery 1.x, that was the #1 cause of crashes and bugs.
I’m not going to tell you how I did it back then; it’s just too embarrassing.

This time around, I’m not making the same mistakes again (all the while hoping I’m not making any new ones).

Let’s just say, in the old back-end, I handed NSDictionaries et al. back and forth. I didn’t use any XML or JSON parsing; I did it all with NSString’s rangeOfString: and the like, not always checking whether a range existed before working with it. As I said – embarrassing.

So the first thing I did was switch from getting XML from flickr’s API to JSON, and not parse it myself but let the system do it. That’s a huge load off my mind right there.
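
“Letting the system do it” boils down to NSJSONSerialization. A minimal sketch, with jsonData standing in for the raw response data:

// flickr's API returns plain JSON when the nojsoncallback=1 parameter is passed.
NSError *error = nil;
NSDictionary *response = [NSJSONSerialization JSONObjectWithData:jsonData
                                                          options:0
                                                            error:&error];
if (response == nil)
    NSLog(@"Could not parse flickr response: %@", error);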

Secondly, I’m done with handing NSDictionaries, NSArrays, etc., around. The new back-end returns proper objects that contain the necessary info in an easily accessible and understandable way.

There are a few Objective-C wrappers for flickr’s API out there, but I didn’t like any of them. They let you do this (pseudo-code):

NSDictionary *responseDict = [SomeFlickrWrapper executeMethod:@"flickr.photos.search" withParameters:someDictionary];

Aside from also handling OAuth and signing methods, etc., this is all they let you do. So you have to remember the methods, the parameters – it’s really not a wrapper, it’s an access point to the API.

What makes my back-end different is this:

- (ESSflickrGallery *)createGalleryWithTitle:(NSString *)title
                                 description:(NSString *)description
                                primaryPhoto:(ESSflickrPhoto *)photo
                                       error:(ESSflickrError **)alert;

You call the method and the back-end does all the parameter and method handling for you. It’s much neater. In this example, you can also see the use of custom classes instead of dictionaries: ESSflickrGallery, ESSflickrPhoto and ESSflickrError are all wrappers for responses from the API, neatly packed up in easily accessible classes.
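
To give you an idea of what calling into it looks like – a sketch, where api stands in for the back-end object and somePhoto for a previously fetched ESSflickrPhoto:

// Create a gallery and handle a possible error in one call.
ESSflickrError *error = nil;
ESSflickrGallery *gallery = [api createGalleryWithTitle:@"Vacation 2013"
                                            description:@"Highlights from the trip"
                                           primaryPhoto:somePhoto
                                                  error:&error];
if (gallery == nil)
{
    // present `error` to the user
}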

I guess I could have used one of the available back-ends as a back-end for my back-end, but I figured, if I’m going through the trouble of matching all available flickr-API-methods, I might as well do the OAuth and signing stuff myself as well.

This new back-end doesn’t only make things easier to develop, read and understand, it also improves the app’s stability and performance (mostly because it’s easier to develop, read and understand 😉 ).

As you can imagine, this has taken up most of the time so far. Now comes the fun part of using it to create the next version of the app.

I hope you’ll enjoy this upcoming series of posts about the development of flickery 2.0 and some of the design decisions behind it.

Upcoming in Part 2

In part 2, I’ll talk about uploading to flickr with flickery 2.0. Lots of suggestions will make it into the new version 😉

Thank you for reading. Enjoy your day!

My name is Matt, and I’m the developer of Eternal Storms Software. You can follow me on twitter here.


Please join me on a journey through the past 2 years, from the idea of Briefly to the development and eventual release of the app 🙂

The Idea for Briefly

The idea for Briefly was literally fed to me on a golden spoon.

Well, not literally. Literally, it was hurled at my head with a tweet by a twitter user I sadly can’t remember (believe me, I’ve tried digging up that tweet again, but I wasn’t successful).

I only remember the content (and I’m paraphrasing, because I don’t remember it word for word):

It should be way easier to do this [link to a still motion video I can’t find anymore either]

And that’s where I got the idea, because the tweeter was right – it should be easier to do this kind of video.

Getting the idea for Briefly was, thus, a case of “right place, right time” – in this case: twitter.

Existing Apps?

You might say – why create this app? There’s iMovie (on both OS X and iOS), there’s Final Cut Pro… Why develop an app that does less?

As I stated in my (also-pseudo) post-mortem of Transloader, I believe in making apps (even if they’re small ones) that improve a particular workflow or address a particular need.

So yes, you can use iMovie to make a still motion video fairly easily. But I figured it could be even easier and faster, so I stuck with the idea and started development of Briefly.

It was a gut decision to go for it, as I have neither the time nor the will to do much (or any, really) market research and the like. I’d rather spend my time working on code.
So it could have gone either way – a good or a miserable reception (and even if you do market research, it can still go either way).

I always pick projects this way. There are two types of apps I develop:
Apps I have a (sometimes desperate) need for myself (ScreenFloat, Yoink, Transloader) and apps I think others might have a need for (flickery, Briefly).
On apps I need myself, I usually start working right away. Whether I release them to the public is a different question, though. If I think an app is too niche or too small, I usually don’t bother.
When I come up with an idea for an app that others might benefit from, I usually let it ripen in my head for a few weeks. I find that if I let it work itself out in my head, I either have fewer and fewer doubts about it or more and more. If I have too many doubts, I discard the idea for the time being.
If the picture of the app becomes clearer through this process, I make a move on it and start working. 

Still Motion Videos?

I hear you say – Still Motion videos? What’s that?

After I released Briefly, I was told that still motion videos aren’t really a thing, that it’s not a correct term and that time lapse would be the right term to use.

Yes, with Briefly you can create time lapse videos. Or stop motion videos. But what do those two types of videos have in common?
The photos are connected to each other. They are a series of photos correlating with each other, creating an animation or sense of movement.

But that was not the main focus for me (contrary to what you might think watching Briefly’s intro trailer).
The focus, for me, was still motion videos, which are basically the same as stop motion videos (as in, photos shown in rapid succession), except the photos don’t have to correlate with each other. They can be of completely different things.

Because that’s what the video in the tweet that gave me the idea was: a video of hundreds of vacation photos, each shown for just a fraction of a second, with a soundtrack. That became my goal for this app.

Plus, it’s a term on Wikipedia. So it must be right. Right? Right? 😉

Development

I began development of Briefly in October 2011. That’s not a typo. It took me almost two years to release this app. Here’s why:

First off, Briefly is not my only app.
In October 2011, I had flickery and ScreenFloat to take care of. I was also working on Yoink, Transloader and flickery 2, with Yoink being in the final stages of development (it was released in late 2011) and Transloader taking me more than another year to complete. flickery 2 is still in development.

Secondly, Briefly depends on two frameworks I created.
One is for flickery 2 (because in Briefly, you can import photos from your photosets on flickr) so I had to complete certain portions of flickery 2 before I could continue development on Briefly.
The other one is ESSVideoShare (available as open source here). I had to create that framework first so I could implement sharing functionality in Briefly.
And last but not least, support for Instagram, which took some time as well. There were a few changes in the API when I wrote the integration, so I had to keep adapting.

That alone took me a couple of weeks to get done.

Prototyping

I started off creating the backend of Briefly, which is the video creation part. 

[Image: First interface for Briefly, then called Zlidez]

I began developing the backend with QTKit, which, in hindsight, was not a good decision. It cost me two weeks, only for me to decide later to use AVFoundation instead.
AVFoundation is undoubtedly the future of audiovisual media frameworks on OS X and iOS; when you watch the WWDC videos, the message is clear. Not using it from the start was a mistake, because QTKit was bound to be deprecated at some point and, more importantly, isn’t available on iOS. When I started development of Briefly, I hadn’t considered bringing it to iOS as well, but it later struck me that it would be a good fit for the platform.

Moving to AVFoundation from QTKit

Moving to AVFoundation from QTKit was a matter of delete and re-write. AVFoundation is quite a different beast and it didn’t make much sense trying to refactor what I had written for QTKit.

So I started from scratch. AVFoundation has a steep learning curve – at least it did for me. “Simple” things like adding an image to a movie are very different from QTKit, where it’s a matter of:

- (void)addImage:(NSImage *)image forDuration:(QTTime)duration withAttributes:(NSDictionary *)attributes

This is far more complex in AVFoundation. It’s a matter of setting up an AVAssetWriter, an AVAssetWriterInput and an AVAssetWriterInputPixelBufferAdaptor, and then appending your samples to that pixel buffer adaptor.
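
For comparison, here’s a condensed sketch of that AVFoundation path. outputURL, images and the pixel-buffer creation (ESSPixelBufferFromImage) are placeholders; real code needs proper error handling and shouldn’t block a thread the way the simplistic waiting loop below does:

#import <AVFoundation/AVFoundation.h>

// Writer, input and pixel buffer adaptor.
NSError *error = nil;
AVAssetWriter *writer = [AVAssetWriter assetWriterWithURL:outputURL
                                                 fileType:AVFileTypeQuickTimeMovie
                                                    error:&error];
NSDictionary *settings = @{AVVideoCodecKey: AVVideoCodecH264,
                           AVVideoWidthKey: @1280,
                           AVVideoHeightKey: @720};
AVAssetWriterInput *input = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo
                                                               outputSettings:settings];
AVAssetWriterInputPixelBufferAdaptor *adaptor =
    [AVAssetWriterInputPixelBufferAdaptor assetWriterInputPixelBufferAdaptorWithAssetWriterInput:input
                                                              sourcePixelBufferAttributes:nil];
[writer addInput:input];
[writer startWriting];
[writer startSessionAtSourceTime:kCMTimeZero];

// One frame per image, each shown for `frameDuration`.
CMTime frameDuration = CMTimeMake(1, 4); // a quarter of a second, for example
CMTime presentationTime = kCMTimeZero;
for (NSImage *image in images)
{
    CVPixelBufferRef buffer = ESSPixelBufferFromImage(image); // placeholder for the NSImage → CVPixelBuffer step
    while (!input.readyForMoreMediaData)
        [NSThread sleepForTimeInterval:0.05]; // simplistic back-pressure handling
    [adaptor appendPixelBuffer:buffer withPresentationTime:presentationTime];
    CVPixelBufferRelease(buffer);
    presentationTime = CMTimeAdd(presentationTime, frameDuration);
}

[input markAsFinished];
[writer finishWritingWithCompletionHandler:^{ /* done */ }];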

Quality Output

I spent most of the development time on making sure Briefly creates the best possible output – which should be priority #1 for any content-creation app anyway.

First and foremost, I wanted to eliminate black borders in the video itself (if they occur in fullscreen playback, that’s nothing I can avoid), all while keeping rendering quick (especially on iOS devices).
I was in contact with three photographers to see what they thought about it and they helped shape the output a great deal. I must admit, I don’t know much (if anything at all) about photography so I was dependent on their input.

Soundtracks

Originally, I had planned to include a couple of royalty-free soundtracks with Briefly. But I seem to have bad luck communicating with license holders in general.

I have quite a bit of experience with this from GimmeSomeTune – no one wanted to talk to me about that, either, and the same happened with Briefly.
I tried to get in touch with about five different “soundtrack websites”. Not one replied. So I moved on. For now, users can choose soundtracks from their hard drives or their iTunes library.

This might change with a future update as, since the release of Briefly, someone got in touch with me about that. We’ll see where it goes.

At least we made first contact 🙂

What was important to me regarding soundtracks is that they automatically fade out if they are longer than the video, without the need for any input from the user.
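
Automatic fade-outs like that can be expressed with an AVAudioMix volume ramp. A sketch, assuming an audioTrack from the soundtrack asset and the video’s duration in videoDuration, with a two-second fade picked arbitrarily:

#import <AVFoundation/AVFoundation.h>

// Ramp the soundtrack's volume from 1.0 to 0.0 over the last two seconds.
AVMutableAudioMixInputParameters *params =
    [AVMutableAudioMixInputParameters audioMixInputParametersWithTrack:audioTrack];

CMTime fadeDuration = CMTimeMakeWithSeconds(2.0, 600);
CMTimeRange fadeRange = CMTimeRangeMake(CMTimeSubtract(videoDuration, fadeDuration), fadeDuration);
[params setVolumeRampFromStartVolume:1.0 toEndVolume:0.0 timeRange:fadeRange];

AVMutableAudioMix *audioMix = [AVMutableAudioMix audioMix];
audioMix.inputParameters = @[params];
// Assign audioMix to e.g. an AVPlayerItem or an AVAssetExportSession.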

Designing Briefly

My goal was to be able to create still motion videos hassle-free, in as few steps as possible.

For Briefly, that was:
1) Choose the photos
2) Choose the soundtrack

(I don’t count clicking “Create” as a real step.)

I wanted an app where you could do everything you needed in one window. No panels, no inspectors, etc.
‘No preferences’ plays into that as well.
I don’t want users to have to think about what speed is best, in what order the photos should be displayed (Briefly automatically sorts added photos by date), what format it should be, or whether and over what period the soundtrack should fade out at the end – that sort of thing.

The only real adjustment users can make is the video resolution (Auto, 240p, 360p, 480p, 720p, 1080p).
It was actually only an output test for myself and I had planned not to use it in the release, only providing the “Auto” option. I can’t say exactly why I changed my mind and decided to leave it in. Probably because I wanted to let users know somehow that Briefly would produce videos at these resolutions and if “Auto” was chosen, Briefly would figure out the best one.
The popup is a slight break from the “Briefly figures it out for you”-concept.

[Image: First concept for Briefly’s interface (Briefly was then called PostHaste)]

This first concept of Briefly’s interface is pretty close to what I released, though obviously, it was optimized to put the content in focus, not the interface elements.

For example, we (my designer Alexander Käßner and I) fused the huge “No soundtrack selected” area into the bottom bar, between the Quality popup and the Create button, making more room for user-selected photos.

The first iteration of the interface was this:

[Image: First iteration of Briefly’s interface with wooden textures]

When Alex and I started out designing the interface, I was pretty keen on the wooden textures. I don’t know why. I just liked them (and I still do, though I must say the release version looks much nicer).

Notice that the Browse… and Import… buttons are in the middle of the window and would vanish once photos were added. So once the user had selected photos, additional photos could only be added through the File menu in the menu bar or via drag and drop. Not a good design decision (especially for novice users), so we moved the buttons into the window’s title bar (relatively close to the release of the app) so they would always be accessible – except in fullscreen mode, where they disappear.

The screenshot above also shows the import view (which is pretty close to what shipped in the release version).
The import view started out as a window sheet, attached to the main window (like an open/save dialog). And it would have been fine, but there’s one problem: it makes drag and drop difficult, because a sheet usually covers an area of the window it’s attached to.
I wanted users to be able to drag and drop from the import sheet to the main window and make multiple imports without the sheet closing every time. So the sheet had to go.
We put the import sheet into the main window as a view, so the content below it resizes when it’s shown; now users can drag and drop from the import view to the main window and click “Import” multiple times without dismissing the import view.

[Image: Another, earlier screenshot of a wooden Briefly]

Pretty close to release, I decided against the wooden textures. I think I found them too distracting. The focus should always be on what the user’s working on, not on the interface.

[Image: A darker version of Briefly]

What you see in the screenshot above is a first draft of a darker, less distracting Briefly. It is quite a difference. One more iteration of this darker interface is what’s in the release version today.

[Image: We decided to go a little flatter and less texture-y.]

We got rid of the noise texture on the title and bottom bars and enlarged the soundtrack view a little bit to align with the top of the bottom bar and the bottom of the window, giving it more space to breathe.

First Launch Experience

When you first launch Briefly, this is what you see:

[Image: Briefly’s UI when launched for the first time]

The popover you see in the screenshot used to be a sort of tutorial, instead of the arrows and explanations you can see underneath in the release version.
It would jump from UI element to UI element, explaining what each one does. Unnecessary. The way it is now (the arrows) is much better.
For one, because they’re always there when the window is empty, letting the user know what to do without having to open the tutorial again (and perhaps struggling to find a way to do so).
Secondly, because a popover that jumps around is nauseating.

Saving

I designed Briefly as a one-shot app – add photos and a soundtrack, create the video, then repeat with different photos and a different soundtrack. An app where you wouldn’t go back to past projects and rework them.

There are no projects, no documents, no saving.

The only saving that happens is of content that hasn’t been turned into a video yet. So if you select photos and/or a soundtrack and then quit Briefly, those items would still be there on the next launch.

This could easily be turned into a Briefly document to be edited again at any time, and I might do so in a future version of Briefly.
The data saved is very small, because all it does is take the paths (actually, NSURL bookmarks) and save them for later use.
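
Bookmark-based saving roughly looks like this – a sketch, with fileURL standing in for a selected photo or soundtrack, and the security-scoped options assuming a sandboxed Mac App Store app:

// Create bookmark data for a file the user selected...
NSError *error = nil;
NSData *bookmark = [fileURL bookmarkDataWithOptions:NSURLBookmarkCreationWithSecurityScope
                     includingResourceValuesForKeys:nil
                                      relativeToURL:nil
                                              error:&error];
// ...persist `bookmark` (e.g. in NSUserDefaults or a plist), then resolve it on the next launch:
BOOL isStale = NO;
NSURL *resolvedURL = [NSURL URLByResolvingBookmarkData:bookmark
                                               options:NSURLBookmarkResolutionWithSecurityScope
                                         relativeToURL:nil
                                   bookmarkDataIsStale:&isStale
                                                 error:&error];
[resolvedURL startAccessingSecurityScopedResource]; // balance with stopAccessingSecurityScopedResource later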

Naming Briefly

Before finally coming up with and settling on Briefly, I went through a number of names.

The first name I had was “Zlidez – the z stands for zpeed”. I moved away from that because… it sucks.
Another was PostHaste – though that was already taken, as I later found out.

The name I absolutely loved was Momento – a portmanteau of momentum (for speed) and memento (i.e., photos). Sadly enough, this was also already taken.

[Image: Brainstorming session for Briefly’s name (notice that it isn’t on there)]

[Image: A closer selection of Briefly’s former names]

As with most of the names for my apps, Briefly came to me out of the blue and I think it works very well.

Afterthoughts

I am very happy with how the app turned out. The reception has been great, although there have been complaints about the rate at which photos are shown in videos created with Briefly (even though the App Store description says so: “photos are shown for a fraction of a second”).

It got featured by Apple (and as of this writing, still is) on the Mac App Store front page in the “New & Noteworthy” category:

[Image: Apple feature]

and received nice reviews on TUAW, Cult of Mac and lots of others.

Working with Alex Käßner again has been a blast, but I knew it would be, coming off of Transloader, where he did excellent work as well.

Now we’re working on Briefly for iOS, which will be available later this year for iOS 7.

Thank you for reading,
Take care,
Matt

You can follow me on Twitter: @eternalstorms

 
