Apple Clips Is The Tip Of The Iceberg
Today, March 24th, 2017, Apple announced the release of a new social video production app called Clips, for making and sharing short-form videos with text, effects, graphics, and more. You add content by recording it on the spot or by grabbing a video clip or photo from your library. Live Titles let you easily create animated captions just by talking as you record, using Voice First Siri speech-to-text conversion. You can choose from a variety of styles, all perfectly sequenced to the sound of your voice.
You can choose from a number of filters that flatter or enhance the video. You can drop in emoji and apply overlays like speech bubbles and arrows. Finally, the app allows the user to set the mood with music that automatically adjusts to the length of the video.
The videos can then be shared in Messages, Apple’s SMS-like system, as well as on YouTube, Instagram, Facebook, Twitter and Snapchat. This effectively puts Apple in between the social networks and the user in a very Apple-like way. One can think of this as a short-form iMovie platform primarily designed for the younger cohort of social media sharers.
In some general ways one can look at this as an attempt to break Snapchat, and perhaps Instagram or even Vine, but I think that is not the aim. This is Apple building universal tools that enhance these social platforms and could likely be deeply integrated with them. Clips is in some ways similar to the apps Ripl and Flyr. However, in many ways both of those apps are themselves unique.
More Than A “Millennial Video App”
The interesting part comes when the video is completed: Apple’s machine learning will look at any people in the video and compare them to faces previously identified in your photo library. This is a rather powerful aspect of Clips, and it allows Apple to show off internal, encrypted facial recognition technology, a combination of RealFace, Faceshift and Emotient. The sharing recommendation is a wonderful first-step use case for these technologies, without the privacy issues that public facial recognition creates and that many feel are creepy. With Clips your data stays local, and you get the powerful ability to auto-share using Apple’s unique AI.
The use of Siri speech recognition to insert text in real time, synchronized to the actual audio, is a very simple yet powerful feature that is also very Apple-like. Subtitles in a short-form video are just about required in social sharing situations, as many people will stream on platforms like Facebook with the sound off. Watching with the sound off while styled subtitles of what is being said appear in sync is not available on any other platform. This is a powerful example of how Siri speech-to-text can work when deeply integrated into an app.
There Is An App Store For That
There is a hint that Clips will either have its own App Store or share the sticker App Store from Apple’s Messages. Released last year, the sticker store has been a big hit with a significant number of regular users. Sharing it would be a powerful feature that would enlarge the ecosystem and add some amazing motion stickers to Clips videos.
Thus it seems very likely we will see a ready-made developer ecosystem built around Clips; I believe this feature will be turned on by WWDC 2017. I also believe there may be a developer opportunity to create filters and effects for the videos, as well as clip-art videos for added effects. This may well become its own filter App Store for Clips. It seems prudent for Apple to hold this aspect back until WWDC 2017 to see how uptake goes.
Apple created Clips for a subset of iPhone users, and it is not likely to please a larger cohort. However, it is a perfect test bed for Apple AI and machine learning, along with the Voice First interactions that will clearly continue. This is the sort of example to point to when some claim that “Apple has no AI.”
History will record that the convergence of technologies in Clips telegraphed Apple’s future Augmented Reality products.