(2.1) Video Magnification
Motion Magnification
Motion in the visual medium can be represented in terms of amplitude and of temporal and spatial variations. Subtle motion that cannot be perceived by the human eye can reveal details such as deformations, external forces, slight motion within a system, and departures from equilibrium. Motion magnification reveals these minute details by amplifying the motion associated with the area, pixel, or object of interest; large motions remain unaltered and only the subtle changes are magnified. Motion estimation is carried out by a grouping process in which clusters of pixels are formed according to their motion characteristics and then amplified, with the affinity of each group measured over the trajectories of its pixels through time. Holes that appear in the motion-enhanced regions are filled using texture synthesis. Although, in principle, the motion of every pixel is estimated, the analysis is performed mainly on feature points in order to obtain accurate motion trajectories with respect to time.
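As a rough illustration of this feature-point view of motion magnification, the sketch below tracks a set of feature points across frames and exaggerates their displacements about their temporal mean positions. The input path, the amplification factor `alpha`, and the use of OpenCV's Lucas-Kanade tracker are illustrative assumptions, and the sketch does not distinguish large motions from subtle ones as the surveyed methods do.

```python
# Minimal sketch: amplify the trajectories of tracked feature points.
# The input file, 'alpha', and the tracker settings are assumptions.
import cv2
import numpy as np

alpha = 10.0                          # magnification factor (assumed)
cap = cv2.VideoCapture("input.avi")

ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

# Detect feature points to track (Shi-Tomasi corners).
pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200,
                              qualityLevel=0.01, minDistance=7)

trajectories = [pts.reshape(-1, 2)]   # per-frame positions of each point

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Lucas-Kanade optical flow gives the new position of every point
    # (lost points are not filtered out in this sketch).
    nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None)
    trajectories.append(nxt.reshape(-1, 2))
    prev_gray, pts = gray, nxt

traj = np.stack(trajectories)         # shape: (frames, points, 2)

# Exaggerate each point's subtle motion about its temporal mean position.
mean_pos = traj.mean(axis=0, keepdims=True)
magnified = mean_pos + alpha * (traj - mean_pos)
```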
Literature Survey
Spatio-temporal variations that cannot be detected by the human eye have to be selectively magnified to extract information that can help doctors assess the functioning of a patient's heart. Two main approaches can be considered for this purpose, namely the Eulerian and the Lagrangian perspectives. The motion magnification technique proposed by Ce Liu et al. [1] magnifies subtle changes in a video sequence and is based on the Lagrangian perspective of motion estimation. Wang et al. [2] achieved perceptually appealing motion exaggeration/enhancement using cartoon animation filters. This approach tracks particle trajectories over time for motion estimation. One of its major drawbacks is that the algorithm fails in regions of complicated motion and at occlusion boundaries; it is also computationally intensive.
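As a rough sketch of the cartoon-animation-filter idea in [2], the motion signal of a tracked point can be exaggerated by subtracting its smoothed second derivative, i.e. convolving the trajectory with an inverted Laplacian-of-Gaussian kernel. The 1-D formulation, the kernel width `sigma`, and the `gain` parameter below are illustrative assumptions rather than the exact formulation of [2].

```python
# Minimal sketch of a cartoon-animation-style filter on a 1-D motion
# signal x(t): x*(t) = x(t) - gain * (x * LoG)(t).
import numpy as np

def laplacian_of_gaussian(sigma, radius):
    """Second derivative of a Gaussian, sampled on [-radius, radius]."""
    t = np.arange(-radius, radius + 1, dtype=float)
    g = np.exp(-t**2 / (2 * sigma**2))
    log = (t**2 / sigma**4 - 1.0 / sigma**2) * g
    return log - log.mean()           # zero-mean: constant motion passes through

def cartoon_filter(x, sigma=3.0, gain=1.0):
    """Exaggerate anticipation/follow-through in the trajectory x(t)."""
    kernel = laplacian_of_gaussian(sigma, radius=int(4 * sigma))
    return x - gain * np.convolve(x, kernel, mode="same")

# Example: a smooth step-like motion gains overshoot before and after the move.
t = np.linspace(0.0, 1.0, 200)
x = np.tanh((t - 0.5) * 20)           # smooth step trajectory
x_exaggerated = cartoon_filter(x, sigma=5.0, gain=2.0)
```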
Hao-Yu Wu et al. [3] used temporal and spatial processing to amplify key aspects of a video and reveal significant information; this algorithm is referred to as Eulerian Video Magnification. It is capable of magnifying both motion and colour variations. Temporal colour changes are tracked at the pixels or area of interest using a differential approximation [4]. Imperceptible signal extraction through temporal processing was demonstrated by Poh et al. [5]. Fuchs et al. [6] showed that temporal processing can also be used for smoothing videos; this is achieved by designing temporal filters, with high-pass filtering used to nullify the effects of large motions.
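The following sketch illustrates the Eulerian idea of [3] in its simplest form: spatially lowpass each frame, temporally band-pass the resulting per-pixel intensity signals around the band of interest, amplify that band, and add it back. The pulse-rate band (0.8 to 3 Hz), the amplification factor, and the use of a single blurred-and-downsampled level instead of a full Laplacian pyramid are simplifying assumptions, not the full method of [3].

```python
# Minimal Eulerian-style magnification sketch (single spatial band):
# temporally band-pass per-pixel intensities, amplify, and add back.
# The band (0.8-3 Hz), alpha, fps, and the single level are assumptions.
import cv2
import numpy as np
from scipy.signal import butter, filtfilt

alpha, fps = 50.0, 30.0               # amplification factor, frame rate (assumed)
low_hz, high_hz = 0.8, 3.0            # rough human pulse band (assumed)

cap = cv2.VideoCapture("input.avi")
frames = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Spatial lowpass: blur and downsample stands in for one pyramid level.
    small = cv2.pyrDown(cv2.pyrDown(frame.astype(np.float32)))
    frames.append(small)
video = np.stack(frames)              # shape: (T, H, W, 3)

# Temporally band-pass every pixel with a first-order Butterworth filter.
b, a = butter(1, [low_hz / (fps / 2), high_hz / (fps / 2)], btype="band")
bandpassed = filtfilt(b, a, video, axis=0)

# Amplify the filtered signal and add it back to the (downsampled) video.
magnified = np.clip(video + alpha * bandpassed, 0, 255).astype(np.uint8)
```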
[1] C. Liu, A. Torralba, W. T. Freeman, F. Durand, and E. H. Adelson, "Motion magnification", ACM Transactions on Graphics (TOG), vol. 24, no. 3, 2005, pp. 519–526.
[2] J. Wang, S. M. Drucker, M. Agrawala, and M. F. Cohen, "The cartoon animation filter", ACM Transactions on Graphics, vol. 25, 2006, pp. 1169–1173.
[3] H.-Y. Wu, M. Rubinstein, E. Shih, J. Guttag, F. Durand, and W. Freeman, "Eulerian video magnification for revealing subtle changes in the world", ACM Transactions on Graphics, vol. 31, no. 4, 2012, p. 65.
[4] B. Horn and B. Schunck, "Determining optical flow", Artificial Intelligence, vol. 17, no. 1–3, 1981, pp. 185–203.
[5] M.-Z. Poh, D. J. McDuff, and R. W. Picard, "Non-contact, automated cardiac pulse measurements using video imaging and blind source separation", Optics Express, vol. 18, no. 10, 2010, pp. 10762–10774.
[6] M. Fuchs, T. Chen, O. Wang, R. Raskar, H.-P. Seidel, and H. P. Lensch, "Real-time temporal shaping of high-speed video streams", Computers & Graphics, vol. 34, no. 5, 2010, pp. 575–584.