Unity3D's quality settings have a drop-down for "Anisotropic Filtering", and one of those settings is "Forced On". But what level of anisotropy does it force to?
Experimentation tells me that it forces to 8x. If your texture is already set to use anisotropic filtering at a level above 8x, it will remain at that level rather than being downgraded.
This was tested with Unity 5.6.2p4.
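For reference, here's an illustrative snippet (not part of the original test) showing the script-side equivalent of that dropdown and how to read a texture's own level. The texture field is an assumption, meant to be assigned in the Inspector:

```csharp
using UnityEngine;

public class AnisoCheck : MonoBehaviour
{
    public Texture2D testTexture; // assumed to be assigned in the Inspector

    void Start()
    {
        // Script equivalent of the "Forced On" dropdown in Quality Settings.
        QualitySettings.anisotropicFiltering = AnisotropicFiltering.ForceEnable;

        // Per-texture level from the importer; in my tests, values above 8
        // were kept rather than being clamped down by Forced On.
        Debug.Log("anisoLevel: " + testTexture.anisoLevel);
    }
}
```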
Friday, October 13, 2017
Tuesday, October 3, 2017
Unity3D Matrix Layout
I had trouble finding a reference for this, so I wanted to post a table of Unity's Matrix4x4 component layout. The member names don't follow the obvious x/y ordering: the first digit is the row and the second is the column.
m00 m01 m02 m03
m10 m11 m12 m13
m20 m21 m22 m23
m30 m31 m32 m33

m03, m13, m23 = Translation X, Y, Z
m00, m11, m22 = Scale X, Y, Z (when there is no rotation)
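To sanity-check the layout, here's a small script (not from the original post) that builds a TRS matrix and logs where the translation and scale values land:

```csharp
using UnityEngine;

public class MatrixLayoutCheck : MonoBehaviour
{
    void Start()
    {
        // Build a transform with a known translation and scale, no rotation.
        Matrix4x4 m = Matrix4x4.TRS(
            new Vector3(1f, 2f, 3f),    // translation
            Quaternion.identity,        // no rotation
            new Vector3(4f, 5f, 6f));   // scale

        // Translation lives in the last column: m03, m13, m23.
        Debug.Log(m.m03 + ", " + m.m13 + ", " + m.m23); // 1, 2, 3

        // With no rotation, scale lives on the diagonal: m00, m11, m22.
        Debug.Log(m.m00 + ", " + m.m11 + ", " + m.m22); // 4, 5, 6

        // The indexer takes (row, column), matching the member names.
        Debug.Log(m[0, 3]); // 1 -- same as m.m03
    }
}
```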
Tuesday, September 19, 2017
Unity3D Performance With InvokeRepeating()
The current project we're working on has a number of places where a Unity MonoBehaviour's Update() function is basically just being used to keep a timer going, with actual work only happening every so often. I suggested moving stuff like this into a call to InvokeRepeating with an appropriate frequency, and was generally met with some skepticism. Research online led to the same sort of thing, with very few actual numbers and a few places where people treated it like a vampire treats sunlight. "Reflection! Hissss!"
These kinds of reactions sounded extreme to me, and "it's slower" doesn't tell you much, so I wanted to actually measure it.
I created an empty Unity project and built a scene with 2100 static objects arranged in a grid. Each object is a textureless sphere with a single C# component called PerformanceTest. The component contains a Start() method and another method that does a few useless things just so it isn't empty: allocate a Vector3, check its magnitude, and multiply the result up and down a few times. This method is either Update() or UpdateInvokeRepeating(), depending on the test. I also ran a test with no update method present at all, just an empty Start() method.
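Here's a rough sketch of what that test component looks like; the busywork and the InvokeRepeating timing values are approximations, not the original source:

```csharp
using UnityEngine;

public class PerformanceTest : MonoBehaviour
{
    void Start()
    {
        // Only present in the InvokeRepeating test case; the other two
        // cases leave Start() empty.
        InvokeRepeating("UpdateInvokeRepeating", 1.0f, 1.0f);
    }

    // Renamed to Update() for the Update() test case.
    void UpdateInvokeRepeating()
    {
        // A little pointless work so the method isn't empty.
        Vector3 v = new Vector3(1f, 2f, 3f);
        float m = v.magnitude;
        m *= 2f;
        m *= 0.5f;
        m *= 3f;
        m *= 0.25f;
    }
}
```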
When InvokeRepeating is used in this test, every component uses the same delay and repeat rate, to make the difference as obvious as possible. With 2100 of these in the scene it's pretty easy to see results in the Unity profiler. These were tested on a Ryzen 1600 running at 3.3 GHz.
Nothing Startup: CoroutinesDelayedCalls 3.78ms, containing PerformanceTest.Start() 1.26ms
Update() Startup: CoroutinesDelayedCalls 3.78ms, containing PerformanceTest.Start() 1.26ms
InvokeRepeating() Startup: CoroutinesDelayedCalls 5.23ms, containing PerformanceTest.Start() 2.75ms
You can see here that, at startup, calling InvokeRepeating 2100 times cost approximately 1.5ms. The other two cases had Start() present, but totally empty. Since the method name is provided as a string, this is the point where the method lookup has to happen, so it could potentially get slower if your class has a large number of methods.
Nothing Update: BehaviourUpdate 0.0ms (of course)
Update() Update: BehaviourUpdate 2.34ms, containing PerformanceTest.Update() 0.92ms
InvokeRepeating() Update: CoroutinesDelayedCalls 3.44ms, containing PerformanceTest.UpdateInvokeRepeating() 0.94ms
Stacking all the InvokeRepeating calls does indeed lose to a straight Update, with the per-frame overhead appearing to be roughly 50% higher than Update's. Notably, CoroutinesDelayedCalls is absent from the profiler on frames where no invocations occur, and Unity's generic "Overhead" entry doesn't meaningfully change either, so there's no obvious ongoing cost between invocations.
What this suggests to me is that unless your repeated invocation needs to fire more often than every other frame, you're probably getting a net win by using InvokeRepeating rather than Update() where possible. It may not be worth it if you need Update() around anyway, but InvokeRepeating doesn't have a dramatic startup cost, and its extra overhead is easily recovered if your component doesn't need to do work every frame.
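As a hypothetical example of that pattern (the component and method names here are made up), the usual "count down a timer in Update()" component becomes something like:

```csharp
using UnityEngine;

public class PeriodicScanner : MonoBehaviour
{
    private const float Interval = 0.5f; // run the work twice a second

    void OnEnable()
    {
        // Method name as a string, then initial delay and repeat rate in seconds.
        InvokeRepeating("DoPeriodicWork", Interval, Interval);
    }

    void OnDisable()
    {
        // Disabling a component doesn't stop its repeating invokes by itself,
        // so cancel explicitly to mirror OnEnable.
        CancelInvoke("DoPeriodicWork");
    }

    void DoPeriodicWork()
    {
        // The work that used to hide behind an Update() timer goes here.
    }
}
```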
Wednesday, March 22, 2017
Encoding Animated GIFs from a WebRTC Video Stream
We recently released a project for a client where we integrated WebRTC video chat. The goal was to make an app for both Android and iOS that could cross connect to run a simple multiplayer game with a live video stream. The details of getting WebRTC up and running on both platforms are for another post, but here I'm going to focus on one specific client request for this project: recording video.
For reference, a lot of the original research and experimentation was carried out with Pierre Chabardes' AndroidRTC project and Gregg Ganley's implementation of Google's AppRTC demo. We used the most recent versions of libjingle_peerconnection at the time of development (Android (Maven) 11139, iOS (Cocoapods) 11177.2.0), which are not actually the most recent WebRTC sources.
Originally, we discussed going the whole nine yards: keeping a running buffer with maybe the last minute or so of video, including sound, that we could save off as an H.264 MP4 or some such. However, WebRTC delivers video and audio as two separate streams, and the Android and iOS SDKs don't easily expose the audio stream.
For the sake of development time, we decided to restrict our video recording to simple animated GIFs. Even though this was a vast simplification, it still proved to be a large development headache, especially on Android. On iOS, at least, StackOverflow has some pretty straightforward answers, like this one from rob mayoff. It was just a matter of getting things threaded and then we were off and running.
Actually, before I get to the GIF encoding, let me take a step back. Where are the frames we're going to use coming from? On both platforms, WebRTC has a public interface that feeds a series of custom I420Frame objects from the streaming backend to the rendering frontend. The I420Frames are really just YUV images. Documentation is light, but we were able to dig through the WebRTC source, at least. For Android, we have the VideoRenderer, which contains both the I420Frame class definition and the VideoRenderer.Callbacks interface, which is what actually gets handed a frame. On the iOS side, we have the RTCVideoRenderer, which has a renderFrame method that can be overridden to get at the I420Frame (in this case called RTCVideoFrame). More practically, the UIView you would actually use is an RTCEAGLVideoView, which you can subclass and grab the I420Frame when renderFrame is called.
Android is, again, trickier. When you receive a new remote video stream from WebRTC, you need to have a VideoRenderer.Callbacks implementation wrapped in a VideoRenderer object that you apply to the stream. The Android SDK provides a helper class (org.webrtc.VideoRendererGui) with static methods to create VideoRenderer.Callbacks implementations that can draw to a GLSurfaceView. However, this implementation doesn't really play nice with inheritance like things do on iOS. Fortunately, you can add multiple renderers to a video stream. So we created our own implementation of VideoRenderer.Callbacks, wrapped it in a VideoRenderer, and added and removed it from the remote video stream as needed. Now renderFrame would be called on it, and we had access to the I420Frame. NOTE: We discovered we had to call VideoRenderer.renderFrameDone() at the end of renderFrame to clean things up. The WebRTC SDK creates a separate I420Frame object for each video renderer, and each is responsible for its own cleanup. Otherwise, you'll end up with a mysterious memory leak.
So all of that is done, and now we're getting I420Frame objects as they're sent over the remote video stream, which we can copy to a local streaming buffer, data store, or whatever you like for later. But again, these are YUV images, not typical RGB, which means they need to be converted before they can actually be encoded using any sort of standard GIF library. On iOS, this is comparatively easy. Google developed a YUV converter that lives in the WebRTC library, and we can just use that. We grabbed the header files, and then we could just use the various functions to copy frames (libyuv::I420Copy) and convert to RGB (libyuv::I420ToABGR). Note the swapped order of ABGR. iOS image generation expects RGBA, but empirical testing showed that the endianness was swapped, and converting with ABGR on the WebRTC side resulted in correctly ordered bytes when fed to iOS libraries. StackOverflow again has answers for getting a usable UIImage out of a byte array, such as this one by Ilanchezhian and Jhaliya.
As is a running theme here, Android was not so easy. Technically, it has the same YUV converter buried in the native library, but we're operating in Java, and things are not easily exposed at that level. It turned out to be way easier to write a YUV converter class than try to get at the internal conversion utility. Starting from this StackOverflow answer by rics, we created YuvFrame.java, which we've posted here. (Edit 2/2020: when we upgraded our project to use Google's WebRTC library, we had to make a different YuvFrame.java that's compatible with the library. Also, here's an Objective-C version, I420Frame.)
Finally, we're at the point of actually saving the collection of WebRTC video frames to an animated GIF. I discussed the iOS method earlier. I also leave it as an exercise to the reader to record the variable framerate of the video stream and apply the frame timing reasonably to the animated GIF. The main discussion is once again Android.
We started out with a Java-based GIF encoder with high color accuracy. This got the job done well, but it had a drawback: on somewhat older devices, like the Nexus 5, encoding 2 seconds of video at 10fps with 480x480px frames (20 of them) could take upwards of 3 minutes to complete (though to be fair, with lots of background processes closed and a fresh boot, it could be down to 1 minute 15 seconds). Either way, this was unacceptable. All our tests on iOS, even with an older iPhone 5, showed much better quality encoding in 10-15 seconds. Step one was to increase the thread priority, since we were using an AsyncTask, which defaults to background thread priority and takes up maybe 10% of the CPU. Bumping this up to normal and even high priority got us around a 40% speed increase. That's a lot, and given that the majority of phones have multiple CPU cores, it didn't affect the video stream performance. However, our actual target was a 6 second animated GIF at 15fps, which means 90 frames to encode. The next step was to dig up an NDK-based GIF encoder. This got us a further speed increase, and we were looking at just over a minute for the full 90 frame encode.
I instrumented the whole encoding process, and there were two major time sinks: creating a color palette for each frame, and converting the frame to that palette. The former was maybe 20% of the frame encode time, while the latter was 70-75%. I played around a bit with global color palettes and with only generating a new palette every few frames. A single global palette caused a pretty bad quality reduction in certain cases, but generating the palette once every 5 frames and reusing it for the intervening frames won back a decent amount of speed without a serious drop in quality. Still, that only addressed the smaller of the two time sinks. Walking every pixel of each frame and finding its best match in the color palette remained the most intensive part.
I can't say I came up with the idea (that credit belongs to Bill), but I did implement our final solution. We multi-threaded the process of palettizing the frames. We checked the device to see how many CPU cores it had (combining StackOverflow answers from both David and DanKodi), then set the encoding thread count to one less than that (so the video stream keeps running). We segmented the frame by rows into however many threads we had to work with, and proceeded to palettize each segment concurrently. Now you may be asking, what about dithering? Well, strictly speaking, this method results in a slightly lower quality frame because we can't do dithering quite the same way. We dithered each segment as normal, and for the later segments, we used the (un-dithered) row from the previous segment as a basis. On its own, this would result in artifacts along the lines between segments. So after all the threads were done, we did one more custom dithering pass along the boundary lines between segments to use the final dithered values from the previous segment to update the first row in the next segment. This pretty much smoothed out all the noticeable artifacts.
We forked Wayne Jo's android-ndk-gif project with this new encoding method. This got us yet another 40% increase in encoding speed, bringing us under 40 seconds on average to encode 90 frames on an old Nexus 5, which we deemed acceptable. On a modern phone, this actually results in faster speeds than we saw on iOS.
In conclusion, I have failed to talk about other potentially useful pieces of this whole puzzle, including saving animated GIFs to the Android image gallery, saving animated GIFs to the iOS PhotoLibrary, getting WebRTC connections to persist across Android screen rotations, and the whole thing where we actually got the Android app and the iOS app to connect to each other.
Tuesday, March 7, 2017
Bulk Updating Google Play IAPs with a CSV File
I just spent way too long getting a CSV file formatted in such a way that Google Play's bulk in-app-purchase import would be happy about it. In theory this is easy, but Google Play's example formatting is not so great. Since I couldn't find a nice simple example template, I figured I'd provide one here.
Here's an XLS file, and here's a CSV. I exported as UTF-8, with all text fields quoted, commas separating.
Also, don't forget: Descriptions can't be longer than 80 characters!