SpeedNet: Learning the Speediness in Videos
Supplementary Material

Contents:
- Additional SpeedNet Prediction Results (Section 5.1.1)
- Speedup is not Motion Magnitude (Section 5.1.2 and Figure 2)
- Predicting Normal Speed and Slow Motion Segments (Section 5.2 and Figure 1)
- Video Retrieval Results (Section 5.4.2)
- Visualization of Salient Space-time Regions (Section 5.5 and Figure 8)
- Spatially-varying Speediness (Section 5.5 and Figure 9)

For each video, we compare constant (uniform) 2x speedup (left) with our adaptive speedup result (right), as detailed in Section 5.3. Below each sped-up video we show its corresponding speedup curve. For the video on the right, the plot shows the adaptive speedup score over time, as explained in Section 4.2. These are the same 5 pairs of videos shown in our user study (Section 5.3, Figure 6). Note that each pair of constant and adaptive sped-up videos has the exact same duration (length).

For the video retrieval results, the left column shows a query clip, and the right three columns show the clips with the closest embeddings, from left (closest) to right (3rd-closest). In the top row, the retrieval is done from clips taken from further along in the same video. In the subsequent rows, the results are retrieved from a database of video clips spanning the entire UCF101 training set. Note that the embeddings focus more on the type of movement than the action class.
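The retrieval setup described above amounts to nearest-neighbor search in embedding space: given a query clip's embedding, rank all database clips by similarity and keep the top 3. The following is a minimal sketch of that idea; the function name, the choice of cosine similarity, and the toy embeddings are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def retrieve_closest_clips(query_emb, database_embs, k=3):
    """Return indices of the k database clips whose embeddings are
    closest to the query, ranked by cosine similarity (closest first).

    query_emb:     1-D array, embedding of the query clip
    database_embs: 2-D array, one embedding per database clip
    """
    q = query_emb / np.linalg.norm(query_emb)
    db = database_embs / np.linalg.norm(database_embs, axis=1, keepdims=True)
    sims = db @ q                 # cosine similarity of each clip to the query
    return np.argsort(-sims)[:k]  # indices sorted from most to least similar

# Toy example: 4 database clips with 3-d embeddings (hypothetical values).
db = np.array([[1.0, 0.0, 0.0],
               [0.9, 0.1, 0.0],
               [0.0, 1.0, 0.0],
               [0.0, 0.0, 1.0]])
query = np.array([1.0, 0.05, 0.0])
print(retrieve_closest_clips(query, db, k=3))  # → [0 1 2]
```

In practice the database side would be precomputed once for all clips (here, the whole UCF101 training set), so each query costs only one matrix-vector product and a partial sort.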