Title: Volleyball Video Utilities
Description: Playlists and other video support functions from volleyball match files.
Authors: Ben Raymond [aut, cre], Adrien Ickowicz [aut], openvolley.org [org]
Maintainer: Ben Raymond <[email protected]>
License: MIT + file LICENSE
Version: 1.0.0
Built: 2024-11-10 04:00:45 UTC
Source: https://github.com/openvolley/ovideo
RStudio overrides the default behaviour of browseURL on some platforms, meaning that local files are opened not as file:///... URLs but as http://localhost... URLs. This can break some local HTML files that expect to be served as file:/// URLs.
browseFile(url, browser = getOption("browser"), encodeIfNeeded = FALSE)
url |
string: as for browseURL() |
browser |
string: as for browseURL() |
encodeIfNeeded |
logical: as for browseURL() |
myfile <- tempfile(fileext = ".html")
cat("<h1>Hello!</h1>", file = myfile)
## in RStudio on Linux, this will be opened as a http://localhost URL
if (interactive()) browseURL(myfile)
## but this shouldn't
browseFile(myfile)
Get or set the video metadata in a datavolley object
dv_meta_video(x)
dv_meta_video(x) <- value
x |
datavolley: a datavolley object as returned by datavolley::dv_read() |
value |
string or data.frame: a string containing the path to the video file, or a data.frame with columns "camera" and "file" |
For dv_meta_video, the existing video metadata. For dv_meta_video<-, the video metadata value in x is changed
x <- dv_read(dv_example_file())
dv_meta_video(x) ## empty dataframe
dv_meta_video(x) <- "/path/to/my/videofile"
dv_meta_video(x)
3D position estimate from multiple 2D views
ov_3dpos_multicamera(uv, C, method = "dlt", zinit = 2)
uv |
matrix or data.frame: u, v positions in 2D images, one row per image (u and v are the image x- and y-coordinates, normalized to the range 0-1) |
C |
list: a list of camera matrices (as returned by ov_cmat_estimate()), of the same length as the number of rows in uv |
method |
string: either "dlt" (direct linear transform) or "nls" (nonlinear least-squares). The "nls" method finds the real-world x and y coordinates for each point in uv |
zinit |
numeric: initial estimate of height (only for method = "nls") |
A named list with components xyz (the estimated 3D position) and err (a measure of uncertainty in that position estimate; currently only for method "nls")
For general background see e.g. Ballard DH, Brown CM (1982) Computer Vision. Prentice-Hall, New Jersey
## two camera matrices
refpts1 <- dplyr::tribble(~image_x, ~image_y, ~court_x, ~court_y, ~z,
                          0.0533, 0.0326, 3.5, 6.5, 0,
                          0.974, 0.0572, 0.5, 6.5, 0,
                          0.683, 0.566, 0.5, 0.5, 0,
                          0.283, 0.560, 3.5, 0.5, 0,
                          0.214, 0.401, 3.5, 3.5, 0,
                          0.776, 0.412, 0.5, 3.5, 0,
                          0.780, 0.680, 0.5, 3.5, 2.43,
                          0.206, 0.670, 3.5, 3.5, 2.43)
C1 <- ov_cmat_estimate(x = refpts1[, c("image_x", "image_y")],
                       X = refpts1[, c("court_x", "court_y", "z")])
refpts2 <- dplyr::tribble(~image_x, ~image_y, ~court_x, ~court_y, ~z,
                          0.045, 0.0978, 0.5, 0.5, 0,
                          0.963, 0.0920, 3.5, 0.5, 0,
                          0.753, 0.617, 3.5, 6.5, 0,
                          0.352, 0.609, 0.5, 6.5, 0,
                          0.255, 0.450, 0.5, 3.5, 0,
                          0.817, 0.456, 3.5, 3.5, 0,
                          0.821, 0.731, 3.5, 3.5, 2.43,
                          0.246, 0.720, 0.5, 3.5, 2.43)
C2 <- ov_cmat_estimate(x = refpts2[, c("image_x", "image_y")],
                       X = refpts2[, c("court_x", "court_y", "z")])
xyz <- matrix(c(3.4, 1.4, 2.90), ncol = 3)
uv1 <- ov_cmat_apply(C1, xyz) ## object position in image 1
uv2 <- ov_cmat_apply(C2, xyz) ## object position in image 2
## if our measurements are perfect (no noise), we can reconstruct xyz exactly:
ov_3dpos_multicamera(rbind(uv1, uv2), list(C1, C2), method = "dlt")
ov_3dpos_multicamera(rbind(uv1, uv2), list(C1, C2), method = "nls")
## with noise
uv1 <- uv1 + rnorm(2, sd = 0.02)
uv2 <- uv2 + rnorm(2, sd = 0.02)
ov_3dpos_multicamera(rbind(uv1, uv2), list(C1, C2), method = "dlt")
ov_3dpos_multicamera(rbind(uv1, uv2), list(C1, C2), method = "nls")
The camera matrix characterizes the mapping of a camera from 3D real-world coordinates to 2D coordinates in an image.
ov_cmat_apply(C, X)
C |
: camera matrix as returned by ov_cmat_estimate() |
X |
matrix or data.frame: Nx3 matrix of 3D real-world coordinates |
An Nx2 matrix of image coordinates
https://en.wikipedia.org/wiki/Camera_matrix. For general background see e.g. Ballard DH, Brown CM (1982) Computer Vision. Prentice-Hall, New Jersey
## define real-world and corresponding image coordinates
xX <- dplyr::tribble(~image_x, ~image_y, ~court_x, ~court_y, ~z,
                     0.054, 0.023, 0.5, 0.5, 0,    ## near left baseline
                     0.951, 0.025, 3.5, 0.5, 0,    ## near right baseline
                     0.752, 0.519, 3.5, 6.5, 0,    ## far right baseline
                     0.288, 0.519, 0.5, 6.5, 0,    ## far left baseline
                     0.199, 0.644, 0.5, 3.5, 2.43, ## left net top
                     0.208, 0.349, 0.5, 3.5, 0.00, ## left net floor
                     0.825, 0.644, 3.5, 3.5, 2.43, ## right net top
                     0.821, 0.349, 3.5, 3.5, 0.00) ## right net floor
C <- ov_cmat_estimate(X = xX[, 3:5], x = xX[, 1:2])
## fitted image coordinates using C
ov_cmat_apply(C, X = xX[, 3:5])
## compare to actual image positions
xX[, 1:2]
The camera matrix characterizes the mapping of a camera from 3D real-world coordinates to 2D coordinates in an image.
ov_cmat_estimate(X, x)
X |
matrix or data.frame: Nx3 matrix of 3D real-world coordinates |
x |
matrix or data.frame: Nx2 matrix of image coordinates |
A list with components coef (fitted transformation coefficients) and rmse (root mean squared error of the fitted transformation)
https://en.wikipedia.org/wiki/Camera_matrix. For general background see e.g. Ballard DH, Brown CM (1982) Computer Vision. Prentice-Hall, New Jersey
## define real-world and corresponding image coordinates
xX <- dplyr::tribble(~image_x, ~image_y, ~court_x, ~court_y, ~z,
                     0.054, 0.023, 0.5, 0.5, 0,    ## near left baseline
                     0.951, 0.025, 3.5, 0.5, 0,    ## near right baseline
                     0.752, 0.519, 3.5, 6.5, 0,    ## far right baseline
                     0.288, 0.519, 0.5, 6.5, 0,    ## far left baseline
                     0.199, 0.644, 0.5, 3.5, 2.43, ## left net top
                     0.208, 0.349, 0.5, 3.5, 0.00, ## left net floor
                     0.825, 0.644, 3.5, 3.5, 2.43, ## right net top
                     0.821, 0.349, 3.5, 3.5, 0.00) ## right net floor
C <- ov_cmat_estimate(X = xX[, 3:5], x = xX[, 1:2])
## fitted image coordinates using C
ov_cmat_apply(C, X = xX[, 3:5])
## compare to actual image positions
xX[, 1:2]
Note that in order to use ov_editry_clips, the editry package must be installed. Install it with remotes::install_github('scienceuntangled/editry') or install.packages('editry', repos = c('https://openvolley.r-universe.dev', 'https://cloud.r-project.org')). The editry package also requires editly (the underlying Node.js package; see editry::er_install_editly()).
ov_editry_clips(playlist, title = NULL, title2 = NULL, label_col, pause = TRUE,
                seamless = FALSE, title_args = list(), title2_args = list(),
                pause_args = list(), label_args = list())
playlist |
data.frame: a playlist as returned by ov_video_playlist() |
title |
string: the title text (first slide). Use NULL to skip this slide |
title2 |
string: the second title text (on the second slide). Use NULL to skip this slide |
label_col |
string: the name of the column in playlist to use as the text label for each clip |
pause |
logical: if |
seamless |
logical: if |
title_args |
list: arguments to pass to |
title2_args |
list: arguments to pass to |
pause_args |
list: arguments to pass to |
label_args |
list: arguments to pass to |
A list of editry::er_clip() objects, suitable to pass to editry::er_spec()
editry::er_layer_news_title(), editry::er_layer(), editry::er_spec()
## Not run:
## Example 1
## Step 1: create our playlist
## use data from the ovdata package
library(ovdata) ## install via remotes::install_github("openvolley/ovdata") if needed
x <- ovdata_example("190301_kats_beds-clip", as = "parsed")
## make sure its video element points to our local copy of the corresponding video clip
dv_meta_video(x) <- ovdata_example_video("190301_kats_beds")
## extract the plays
px <- datavolley::plays(x)
## use just the attack rows
px <- px[which(px$skill == "Attack"), ]
## make a new column with player name and attack type
px$label <- paste(px$player_name, px$attack_code, "attack")
## make the playlist with the new label column included
tm <- ov_video_timing(Attack = c(-3, 2)) ## tighter than normal timing
ply <- ov_video_playlist(px, x$meta, extra_cols = "label", timing = tm)

## Step 2: convert to editly clip objects and compile to mp4
library(editry)
## create the clips, one for each row of the playlist
clips <- ov_editry_clips(ply, title = "GKS Katowice\nvs\nMKS Bedzin",
                         title2 = "Attacks", label_col = "label")
## compile to video
outfile <- tempfile(fileext = ".mp4")
my_spec <- er_spec(out_path = outfile, clips = clips)
er_exec_wait(spec = my_spec, fast = TRUE)
## and view the output
if (interactive()) browseURL(outfile)

## ---
## Example 2
## without a playlist, make a simple clip from a known segment of video
library(editry)
library(ovdata) ## install via remotes::install_github("openvolley/ovdata") if needed
my_video <- ovdata_example_video("190301_kats_beds") ## path to your video file
my_logo <- "https://github.com/openvolley/community/raw/master/docs/talks/common/ovlogo-blur.png"
clips <- list(er_clip_video(path = my_video, cut_from = 1, cut_to = 8), ## video segment
              ## add an outro banner with logo
              er_clip(duration = 1.5, layers = list(er_layer_fill_color(),
                                                    er_layer_image(path = my_logo))),
              ## and blank finishing screen
              er_clip_pause(duration = 0.25))
outfile <- tempfile(fileext = ".mp4")
my_spec <- er_spec(clips = clips, out_path = outfile, allow_remote_requests = TRUE)
er_exec_wait(spec = my_spec, fast = TRUE)
if (interactive()) browseURL(outfile)
## End(Not run)
Example video clips provided as part of the ovideo package
ov_example_video(choice = 1)
choice |
integer: which video file to return |
Path to the video file
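A minimal usage sketch (where the file opens, and with what, will depend on your system):

```r
## path to the first bundled example video
v <- ov_example_video()
file.exists(v)
## view it with the default handler
if (interactive()) browseURL(v)
```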
Helper functions to find the ffmpeg executable. If ffmpeg is not installed on the system, it can be installed (for some platforms) with ov_install_ffmpeg().
ov_ffmpeg_exe()
ov_ffmpeg_ok(do_error = FALSE)
do_error |
logical: if TRUE, throw an error if the ffmpeg executable cannot be found |
For ov_ffmpeg_exe, the path to the executable, or NULL if not found. For ov_ffmpeg_ok, a logical indicating whether the executable could be found or not
ov_ffmpeg_ok()
Try to locate a video file when the path embedded in the dvw file refers to another computer
ov_find_video_file(dvw_filename, video_filename = NULL)
dvw_filename |
string: the full path to the DataVolley file |
video_filename |
character: one or more video file paths. If NULL, the video file name(s) embedded in the DataVolley file will be used |
A character vector, with one entry per video_filename. Video files that could not be found will be NA here.
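For example (the file path here is hypothetical, for illustration only):

```r
## Not run:
## the .dvw was scouted on another machine, so its embedded video path is
## stale; look for a matching video file on this computer instead
videos <- ov_find_video_file("~/matches/2019_03_01-KATS-BEDS.dvw")
## entries that could not be located come back as NA
which(is.na(videos))
## End(Not run)
```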
This function is used to define the reference points on a court image, to be used with ov_transform_points(). The court coordinate system is that used in datavolley::dv_court(), datavolley::ggcourt(), and related functions. Try plot(c(0, 4), c(0, 7), type = "n", asp = 1); datavolley::dv_court() or ggplot2::ggplot() + datavolley::ggcourt() + ggplot2::theme_bw() for a visual depiction.
ov_get_court_ref(image_file, video_file, t = 60, type = "corners")
image_file |
string: path to an image file (jpg) containing the court image (not required if video_file is supplied) |
video_file |
string: path to a video file from which to extract the court image (not required if image_file is supplied) |
t |
numeric: the time of the video frame to use as the court image (not required if image_file is supplied) |
type |
string: currently only "corners" |
A data.frame containing the reference information
ov_transform_points(), datavolley::dv_court(), datavolley::ggcourt()
if (interactive()) {
  crt <- ov_get_court_ref(image_file = system.file("extdata/2019_03_01-KATS-BEDS-court.jpg",
                                                   package = "ovideo"))
}
Retrieve a data object stored in a video file metadata tag
ov_get_video_data(video_file, tag = "ov_court_info", b64 = TRUE)
video_file |
string: path to the video file |
tag |
string: the tag name to use |
b64 |
logical: was the stored object serialized to base64 (as for ov_set_video_data())? |
The stored information, or NULL if there was none
## Not run:
if (interactive()) {
  ## mark the geometry of the court in the video
  ref <- ov_shiny_court_ref(video_file = ov_example_video(), t = 5)
  ## store it
  newfile <- ov_set_video_data(ov_example_video(), obj = ref)
  ## retrieve it
  ov_get_video_data(newfile)
}
## End(Not run)
Requires that ffmpeg is available on your system path.
ov_get_video_meta(video_file, debug = FALSE)
video_file |
string: path to the video file |
debug |
logical: if |
A named list of metadata values
## Not run:
newfile <- ov_set_video_meta(ov_example_video(), comment = "A comment")
ov_get_video_meta(newfile)
## End(Not run)
Requires that ffmpeg is available on your system path. Input files can either be specified as a list of image files, or alternatively as a directory name and image file mask. For the latter, the images must be numbered in sequential order.
ov_images_to_video(input_dir, image_file_mask = "image_%06d.jpg", image_files,
                   outfile, fps = 30, extra = NULL, debug = FALSE)
input_dir |
string: path to the input directory |
image_file_mask |
string: the mask that specifies the image files, e.g. "image_%06d.jpg" for images named "image_000001.jpg", "image_000002.jpg" etc |
image_files |
character: vector of input image files, in order that they should appear in the video. Used if input_dir is not supplied |
outfile |
string: the output file. If missing, a temporary file (with extension .mp4) will be used |
fps |
numeric: frames per second |
extra |
: additional parameters passed to ffmpeg, in the form c("param", "value", "param2", "value2"). For example, |
debug |
logical: if |
The path to the video file
See av::av_encode_video() as an alternative
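A sketch of the directory-plus-mask form (assumes ffmpeg is on the system path; the frame content here is arbitrary):

```r
## Not run:
## write 30 sequentially-numbered frames to a temporary directory
imgdir <- tempfile()
dir.create(imgdir)
for (i in 1:30) {
  jpeg(file.path(imgdir, sprintf("image_%06d.jpg", i)), width = 320, height = 240)
  plot(i, xlim = c(0, 31), ylim = c(0, 31), pch = 19)
  dev.off()
}
## encode them at 10 frames per second; a temporary .mp4 is created
## because outfile is not supplied
vid <- ov_images_to_video(input_dir = imgdir, image_file_mask = "image_%06d.jpg",
                          fps = 10)
if (interactive()) browseURL(vid)
## End(Not run)
```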
This is a helper function to install ffmpeg. Currently it only works on Windows and Linux platforms. The ffmpeg bundle will be downloaded from https://github.com/BtbN/FFmpeg-Builds/releases/latest (Windows) or https://johnvansickle.com/ffmpeg/ (Linux) and saved to your user appdata directory.
ov_install_ffmpeg(force = FALSE, bits, check_hash = TRUE)
force |
logical: force reinstallation if ffmpeg already exists |
bits |
integer: 32 or 64, for 32- or 64-bit install. If missing or NULL, a value appropriate to the current system will be chosen |
check_hash |
logical: if FALSE, don't check the hash of the downloaded file. Ignored on Windows |
the path to the installed executable
https://github.com/BtbN/FFmpeg-Builds/releases/latest https://johnvansickle.com/ffmpeg/
## Not run:
ov_install_ffmpeg()
## End(Not run)
Merge two video timing dataframes
ov_merge_video_timing_df(x, default = ov_video_timing_df())
x |
data.frame: video timings to use |
default |
data.frame: default timings to use, for anything not provided in x |
A data.frame
my_timings <- data.frame(skill = "Attack", phase = "Reception", start_offset = 0)
ov_merge_video_timing_df(my_timings)
Generate data suitable for creating a court overlay plot
ov_overlay_data(zones = TRUE, serve_zones = TRUE, labels = FALSE, space = "court",
                court_ref, crop = TRUE)
zones |
logical: if |
serve_zones |
logical: if |
labels |
logical: if |
space |
string: if "court", the data will be in court coordinates. If "image", the data will be transformed to image coordinates using court_ref |
court_ref |
data.frame: as returned by ov_get_court_ref() |
crop |
logical: if |
A list of data.frames
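A short sketch in court space (no court_ref needed then; the component names of the returned list are not guaranteed here):

```r
## overlay data in court coordinates, with zone lines and labels
od <- ov_overlay_data(zones = TRUE, serve_zones = TRUE, labels = TRUE,
                      space = "court")
## a list of data.frames, one per overlay element
names(od)
```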
Generate a court overlay image showing court boundary, 3m, zone, and other lines
ov_overlay_image(court_ref, height, width, filename, ...)
court_ref |
data.frame: as returned by ov_get_court_ref() |
height |
integer: height of image to produce in pixels |
width |
integer: width of image to produce in pixels |
filename |
string: image filename (png). If missing, a file will be created in the temporary directory |
... |
: arguments passed to |
The path to the generated file.
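For example, building on the court reference digitized with ov_get_court_ref() (the 1280x720 dimensions are an assumption; match them to your video frame size):

```r
## Not run:
if (interactive()) {
  ## digitize the court reference points from the bundled court image
  crt <- ov_get_court_ref(image_file = system.file("extdata/2019_03_01-KATS-BEDS-court.jpg",
                                                   package = "ovideo"))
  ## render an overlay image aligned to that camera view
  overlay_png <- ov_overlay_image(court_ref = crt, width = 1280, height = 720)
}
## End(Not run)
```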
Convert playlist to 'onclick' string
ov_playlist_as_onclick(playlist, video_id, normalize_paths = TRUE,
                       dvjs_fun = "dvjs_set_playlist_and_play", seamless = TRUE,
                       loop = FALSE, controller_var)
playlist |
data.frame: a playlist as returned by ov_video_playlist() |
video_id |
string: the id of the HTML video element to attach the playlist to |
normalize_paths |
logical: if |
dvjs_fun |
string: the javascript function to use |
seamless |
logical: if clips overlap, should we transition seamlessly from one to the next? |
loop |
logical: should we loop endlessly over the playlist? |
controller_var |
string: (for version 2 only) the js variable name of the controller object to assign this playlist to |
A string suitable for inclusion as an 'onclick' tag attribute
## Not run:
library(shiny)
## hand-crafted playlist for this example
playlist <- data.frame(video_src = "NisDpPFPQwU",
                       start_time = c(624, 3373, 4320),
                       duration = 8, type = "youtube")
shinyApp(
  ui = fluidPage(
    ov_video_js(youtube = TRUE),
    ov_video_player(id = "yt_player", type = "youtube",
                    style = "height: 480px; background-color: black;"),
    tags$button("Go", onclick = ov_playlist_as_onclick(playlist, "yt_player"))
  ),
  server = function(input, output) {}
)

## or using v2, which supports multiple video elements in a page
shinyApp(
  ui = fluidPage(
    ov_video_js(youtube = TRUE, version = 2),
    ## first player
    ov_video_player(id = "yt_player", type = "youtube",
                    style = "height: 480px; background-color: black;",
                    version = 2, controller_var = "my_dv"),
    tags$button("Go", onclick = ov_playlist_as_onclick(playlist, "yt_player",
                                                       controller_var = "my_dv")),
    ## second player
    ov_video_player(id = "yt_player2", type = "youtube",
                    style = "height: 480px; background-color: black;",
                    version = 2, controller_var = "my_dv2"),
    tags$button("Go", onclick = ov_playlist_as_onclick(playlist, "yt_player2",
                                                       controller_var = "my_dv2"))
  ),
  server = function(input, output) {}
)
## End(Not run)
Converts a playlist object to an HTML file that can be opened in any browser. Note that if the playlist uses local video files, the HTML file will only work on a device that has access to those files. If the playlist uses YouTube (or other external) video URLs, the HTML file will be usable on any network-connected device.
ov_playlist_to_html(playlist, playlist_name = "Playlist", outfile, no_paths = FALSE,
                    table_cols = c(), loop = FALSE, ...)
playlist |
data.frame: as returned by ov_video_playlist() |
playlist_name |
string: the name to use for the playlist |
outfile |
string: the file name to write to. If not supplied, a file will be created in the temporary directory. Note that the directory of |
no_paths |
logical: if |
table_cols |
character: the names of columns in playlist to show in the clip table |
loop |
logical: should we loop endlessly over the playlist? |
... |
: additional arguments passed to the Rmd file used to generate the HTML. Currently these are:
|
The path to the HTML file
ov_video_playlist(), ov_playlist_to_vlc()
## Not run:
## use data from the ovdata package
library(ovdata) ## install via remotes::install_github("openvolley/ovdata") if needed
x <- ovdata_example("190301_kats_beds-clip", as = "parsed")
## make sure its video element points to our local copy of the corresponding video clip
dv_meta_video(x) <- ovdata_example_video("190301_kats_beds")
## extract the plays
px <- datavolley::plays(x)
## it's a single rally, so we'll use all rows (just exclude NA skill rows)
px <- px[!is.na(px$skill), ]
## define columns to show in the table
extra_cols <- c("home_team", "visiting_team", "video_time", "code", "set_number",
                "home_team_score", "visiting_team_score")
## make the playlist with extra columns included
ply <- ov_video_playlist(px, x$meta, extra_cols = c(extra_cols, "player_name"))
## use player name as the subtitle
ply$subtitle <- ply$player_name
## convert to HTML
f <- ov_playlist_to_html(ply, table_cols = extra_cols)
## and finally open it!
browseFile(f)
## End(Not run)
Make a self-contained video file from a playlist.
ov_playlist_to_video(playlist, filename, subtitle_column = NULL, seamless = FALSE,
                     debug = FALSE)
playlist |
data.frame: a playlist as returned by ov_video_playlist() |
filename |
string: file to write to. If not specified (or NULL), a file in the temporary directory (with extension .mp4) will be used |
subtitle_column |
string: if not NULL, the name of the column in playlist to use for subtitles |
seamless |
logical: if |
debug |
logical: if |
Requires that ffmpeg be available on the system path. Note that the processing of each clip is done inside a future.apply::future_lapply() call (if the future.apply package is installed), so this part of the processing can be done in parallel by setting an appropriate futures plan before calling this function.
This function is experimental. In particular it is unlikely to work well with all video formats, and especially if the playlist comprises clips from different videos with different resolution/encoding/etc.
A list with the filenames of the created video and subtitle files.
## Not run:
my_playlist <- ov_video_playlist(..., type = "local")
video_file <- ov_playlist_to_video(my_playlist)
browseURL(video_file[[1]])

## run in parallel, with the scouted codes as subtitles
library(dplyr)
library(future.apply)
plan(multisession)
## note that the example file doesn't have a video associated with it, so
## this example won't actually work in practice
x <- read_dv(dv_example_file())
## fudge the video entry
dv_meta_video(x) <- "~/my_video.mp4"
## make the playlist
my_playlist <- ov_video_playlist(
  x$plays %>% dplyr::filter(skill == "Reception") %>% slice(1:10),
  meta = x$meta, extra_cols = "code")
## create the video and subtitles files
video_file <- ov_playlist_to_video(my_playlist, subtitle_column = "code")
## End(Not run)
Converts a playlist object to a m3u file that can be opened with VLC. Note that this only works with local video files (not YouTube or other URLs) and the video files must be present on your local file system in order for this to work.
ov_playlist_to_vlc(playlist, outfile, no_paths = FALSE, seamless = TRUE)
playlist |
data.frame: as returned by ov_video_playlist() |
outfile |
string: the file name to write to. If not supplied, a file will be created in the temporary directory. Note that the directory of |
no_paths |
logical: if |
seamless |
logical: if |
The path to the m3u file
https://www.videolan.org/, https://wiki.videolan.org/M3U/
ov_video_playlist()
ov_playlist_to_html()
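A minimal usage sketch (not run: it assumes a datavolley object `x` whose video metadata points to a real file on the local file system, which the package's example data does not provide):

```r
## build a playlist of the first few events, referencing local video files
my_playlist <- ov_video_playlist(x$plays[1:5, ], meta = x$meta, type = "local")
## write it as an m3u file, then open that file with VLC
m3u_file <- ov_playlist_to_vlc(my_playlist, outfile = tempfile(fileext = ".m3u"))
```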
This function stores an R data object (data frame, list, etc.) within a metadata tag inside a video file. It is primarily intended to store video-specific information, so that this information travels with the video file itself. By default the ov_court_info
metadata tag is used (intended to store the geometry of the playing court in the video; see Examples).
ov_set_video_data( video_file, obj, tag = "ov_court_info", b64 = TRUE, replace = FALSE, overwrite = FALSE )
video_file |
string: path to the video file |
obj |
: data object to store, typically a list as returned by |
tag |
string: the tag name to use |
b64 |
logical: serialize |
replace |
logical: if |
overwrite |
logical: if |
The path to the video file
ov_get_video_data()
, ov_set_video_meta()
## Not run:
if (interactive()) {
  ## mark the geometry of the court in the video
  ref <- ov_shiny_court_ref(video_file = ov_example_video(), t = 5)
  ## store it
  newfile <- ov_set_video_data(ov_example_video(), obj = ref)
  ## retrieve it
  ov_get_video_data(newfile)
}
## End(Not run)
Requires that ffmpeg is available on your system path.
ov_set_video_meta( video_file, ..., movflags = FALSE, overwrite = FALSE, debug = FALSE )
video_file |
string: path to the video file |
... |
: named values to set |
movflags |
logical: if |
overwrite |
logical: if |
debug |
logical: if |
This function creates a new video file with the specified metadata added. This is always a file in the temporary directory. If overwrite = TRUE
, the original file is deleted and replaced with the new file.
Note that if movflags = FALSE
, the supported video tag names (i.e. allowable names in the ...
parameters) depend on the video file type.
The path to the new video file, which if overwrite = TRUE
will be the input file, otherwise a file in the temporary directory
## Not run:
newfile <- ov_set_video_meta(ov_example_video(), comment = "A comment")
ov_get_video_meta(newfile)
## End(Not run)
A shiny app to define a court reference
ov_shiny_court_ref( image_file, video_file, t = 60, existing_ref = NULL, launch_browser = getOption("shiny.launch.browser", interactive()), ... )
image_file |
string: path to an image file (jpg) containing the court image (not required if |
video_file |
string: path to a video file from which to extract the court image (not required if |
t |
numeric: the time of the video frame to use as the court image (not required if |
existing_ref |
list: (optional) the output from a previous call to |
launch_browser |
logical: if |
... |
: additional parameters (currently ignored) |
A list containing the reference information
if (interactive()) {
  ## define a court reference from scratch
  ov_shiny_court_ref(video_file = ov_example_video(), t = 5)

  ## or modify an existing one
  crt <- data.frame(image_x = c(0.05397063, 0.95402573, 0.75039756, 0.28921230),
                    image_y = c(0.02129301, 0.02294600, 0.52049712, 0.51884413),
                    court_x = c(0.5, 3.5, 3.5, 0.5),
                    court_y = c(0.5, 0.5, 6.5, 6.5))
  ref <- list(court_ref = crt, net_height = 2.43)
  ov_shiny_court_ref(video_file = ov_example_video(), t = 5, existing_ref = ref)
}
The court coordinate system is that used in datavolley::dv_court()
, datavolley::ggcourt()
, and related functions.
Try plot(c(0, 4), c(0, 7), type = "n", asp = 1); datavolley::dv_court()
or ggplot2::ggplot() + datavolley::ggcourt() + ggplot2::theme_bw()
for a visual depiction.
Image coordinates are returned as normalized coordinates in the range [0, 1]
. You may need to scale these by the width and height of the image, depending on how you are plotting things.
ov_transform_points(x, y, ref, direction = "to_court")
x |
numeric: input x points. |
y |
numeric: input y points |
ref |
data.frame: reference, as returned by |
direction |
string: either "to_court" (to transform image coordinates to court coordinates) or "to_image" (the reverse) |
A two-column data.frame with transformed values
https://en.wikipedia.org/wiki/Camera_matrix. For general background see e.g. Ballard DH, Brown CM (1982) Computer Vision. Prentice-Hall, New Jersey.
ov_get_court_ref()
, datavolley::dv_court()
, datavolley::ggcourt()
## the ref data for the example image
crt <- data.frame(image_x = c(0.05397063, 0.95402573, 0.75039756, 0.28921230),
                  image_y = c(0.02129301, 0.02294600, 0.52049712, 0.51884413),
                  court_x = c(0.5, 3.5, 3.5, 0.5),
                  court_y = c(0.5, 0.5, 6.5, 6.5))

## show the image
img <- jpeg::readJPEG(system.file("extdata/2019_03_01-KATS-BEDS-court.jpg",
                                  package = "ovideo"))
plot(c(0, 1), c(0, 1), type = "n", axes = FALSE, xlab = "", ylab = "",
     asp = dim(img)[1]/dim(img)[2])
rasterImage(img, 0, 0, 1, 1)

## convert the ends of the 3m lines on court to image coordinates
check <- data.frame(x = c(0.5, 3.5, 0.5, 3.5), y = c(2.5, 2.5, 4.5, 4.5))
ix <- ov_transform_points(check, ref = crt, direction = "to_image")

## and finally plot onto the image
points(ix$x, ix$y, pch = 21, bg = 4)
The video element and the controls provided by this function are javascript-based, and so are probably most useful in Shiny apps.
ov_video_control(what, ...)
what |
string: the command, currently one of:
|
... |
: parameters used by those commands. For version 2 of the video controller, |
## Not run:
ov_video_control("jog", -1) ## rewind 1s
ov_video_control("jog", 10) ## jump forwards 10s
ov_video_control("set_playback_rate", 0.5) ## play at half speed
## End(Not run)
Requires that ffmpeg is available on your system path.
ov_video_extract_clip( video_file, outfile, start_time, duration, end_time, extra = NULL, debug = FALSE )
video_file |
string: path to the input file |
outfile |
string: path to the output file. If missing, a temporary file (with extension .mp4) will be used |
start_time |
numeric: start time in seconds |
duration |
numeric: duration in seconds. If missing, will be calculated from start_time and end_time |
end_time |
numeric: end time in seconds. If missing, will be calculated from start_time and duration |
extra |
: additional parameters passed to ffmpeg, in the form c("param", "value", "param2", "value2") |
debug |
logical: if |
The path to the video clip file
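A short usage sketch (requires ffmpeg on the system path; uses the package's bundled example video, which is only a few seconds long):

```r
## extract a 2-second clip starting at the 2-second mark
clip <- ov_video_extract_clip(ov_example_video(), start_time = 2, duration = 2)
## equivalently, give the end time rather than the duration
clip <- ov_video_extract_clip(ov_example_video(), start_time = 2, end_time = 4)
```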
Extract one or more specific frames from a video file
ov_video_frame( video_file, t, n, format = "jpg", debug = FALSE, framerate, method = "auto" )
video_file |
string: path to the video file |
t |
numeric: the times of the frames to extract (in seconds) |
n |
integer: the frame numbers of the frames to extract. Ignored if |
format |
string: "jpg" or "png" |
debug |
logical: if |
framerate |
numeric: the framerate of the video. If not supplied, it will be found using av::av_video_info() |
method |
string: the method to use, either "ffmpeg", "av", or "auto". "ffmpeg" is faster than "av" but requires that ffmpeg is available on your system path. If |
The paths to the frame image files
video_file <- ov_example_video(1)
img <- ov_video_frame(video_file, t = 5)
img <- ov_video_frame(video_file, n = 150)
Requires that ffmpeg is available on your system path.
ov_video_frames( video_file, start_time, duration, end_time, outdir, fps, format = "jpg", jpg_quality = 1, extra = NULL, debug = FALSE, exec_fun )
video_file |
string: path to the video file |
start_time |
numeric: start time in seconds |
duration |
numeric: duration in seconds. If missing, will be calculated from start_time and end_time |
end_time |
numeric: end time in seconds. If missing, will be calculated from start_time and duration |
outdir |
string: path to the output directory, which must exist. If missing, a temporary directory will be used |
fps |
numeric: frames per second, default is to extract all frames |
format |
string: "jpg" or "png" |
jpg_quality |
numeric: jpg quality from 1-31, lower is better (this is passed to ffmpeg as the |
extra |
: additional parameters passed to ffmpeg, in the form c("param", "value", "param2", "value2") |
debug |
logical: if |
exec_fun |
string or function: the function (or function name as a string) to use to execute the ffmpeg command. Defaults to |
If exec_fun
has not been specified, the function will wait for the ffmpeg call to complete and then return a character vector of file names, one per frame. If exec_fun
has been specified, the result of that function call will be returned immediately (because it might be a call to a background process).
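A brief sketch of typical usage (requires ffmpeg on the system path; uses the bundled example video, which is only a few seconds long):

```r
## extract frames at 2 frames per second from the first 3 seconds of video
frames <- ov_video_frames(ov_example_video(), start_time = 0, end_time = 3, fps = 2)
## frames is a character vector of image file names, one per extracted frame
```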
Inject javascript for an HTML video player
ov_video_js(youtube = FALSE, twitch = FALSE, version = 1)
youtube |
logical: set to |
twitch |
logical: set to |
version |
numeric: code version. Default = 1, experimental = 2 |
A head
tag containing script tags
Video player tag element
ov_video_player( id, type, controls = FALSE, version = 1, controller_var = paste0(id, "_controller"), with_js = FALSE, ... )
id |
string: the id of the tag |
type |
string: either "youtube", "twitch" (only with |
controls |
logical: if |
version |
numeric: code version. Default = 1, sort-of-experimental = 2. Version 2 supports multiple players on a single page, as well as |
controller_var |
string: (for version 2 only) the js variable name to use for the controller object that controls this video player |
with_js |
logical: if |
... |
: other attributes of the player element (passed to the player |
HTML tags. The outermost element is a div with id paste0(id, "_container")
, with the player and optionally buttons nested within it.
## Not run:
library(shiny)

## hand-crafted playlist for this example
playlist <- data.frame(video_src = "NisDpPFPQwU",
                       start_time = c(589, 1036, 1163, 2731, 4594),
                       duration = 8, type = "youtube")

shinyApp(
    ui = fluidPage(
        ov_video_js(youtube = TRUE, version = 2),
        ov_video_player(id = "yt_player", type = "youtube", version = 2,
                        controller_var = "my_dv",
                        style = "height: 480px; background-color: black;",
                        controls = tags$button("Go",
                            onclick = ov_playlist_as_onclick(playlist, "yt_player",
                                                             controller_var = "my_dv")))
    ),
    server = function(input, output) {},
)
## End(Not run)
Create video playlist
ov_video_playlist( x, meta, type = NULL, timing = ov_video_timing(), extra_cols = NULL, normalize_paths = TRUE )
x |
data.frame: a datavolleyplays object. Normally this will be a selected subset of the |
meta |
list: either the |
type |
string: currently "youtube", "twitch", or "local". If |
timing |
list: the relative timing for each skill type, either a named list as returned by |
extra_cols |
character: names of additional columns from |
normalize_paths |
logical: if |
A data.frame with columns src
, start_time
, duration
, plus any extras specified in extra_cols
## read data file
x <- datavolley::dv_read(datavolley::dv_example_file())
## note that this data file has no video specified, so put a dummy value in
dv_meta_video(x) <- "c:\\my_video.mp4"
## extract play-by-play data
px <- datavolley::plays(x)
## and put dummy video_time values in, because those are missing too
px$video_time <- sample.int(2e3, size = nrow(px))
## find pipe (XP) attacks in transition
px <- px[which(px$attack_code == "XP" & px$phase == "Transition"), ]

## create playlist
ply <- ov_video_playlist(px, x$meta, timing = ov_video_timing())

## with custom timing
ply <- ov_video_playlist(px, x$meta,
           timing = ov_video_timing_df(data.frame(skill = "Attack",
                                                  phase = "Transition",
                                                  start_offset = -5, duration = 10,
                                                  stringsAsFactors = FALSE)))
Create video playlist per point_id
ov_video_playlist_pid( x, meta, type = NULL, extra_cols = NULL, normalize_paths = TRUE )
x |
data.frame: a datavolleyplays object. Normally this will be a selected subset of the |
meta |
list: either the |
type |
string: currently "youtube", "twitch", or "local". If |
extra_cols |
character: names of additional columns from |
normalize_paths |
logical: if |
A data.frame with columns src
, start_time
, duration
, plus any extras specified in extra_cols
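A usage sketch along the lines of the ov_video_playlist example (the video file and video times are dummy values, so the resulting playlist is illustrative only):

```r
x <- datavolley::dv_read(datavolley::dv_example_file())
## the example file has no video or video times, so fudge both
dv_meta_video(x) <- "c:\\my_video.mp4"
px <- datavolley::plays(x)
px$video_time <- sample.int(2e3, size = nrow(px))
## one playlist entry per rally (point_id) rather than per skill row
ply <- ov_video_playlist_pid(px, x$meta)
```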
By default, all skills except reception have a timing of c(-5, 3)
, meaning that the video clip will start 5 seconds before the recorded time of the event and end 3 seconds after its recorded time. Reception has a timing of c(-2, 6)
(because reception usually has the same timestamp as the serve).
ov_video_timing(...) ov_video_timing_df(x)
... |
: named parameters that will override the defaults. Each parameter should be a two-element numeric vector |
x |
data.frame: a data.frame of timings that will override the defaults, with columns |
ov_video_timing_df
accepts and returns a data.frame rather than a named list. The data.frame format also allows timings to be differentiated by play phase ("Reception" vs "Transition").
For ov_video_timing
a named list, with names corresponding to skills ("Serve", "Reception", etc). For ov_video_timing_df
, a data.frame with columns skill
, phase
, start_offset
, and duration
## defaults
ov_video_timing()

## with different settings for serve and reception
ov_video_timing(serve = c(-2, 2), reception = c(-3, 1))

## as data.frame
ov_video_timing_df(data.frame(skill = "Set", phase = "Transition",
                              start_offset = -5, duration = 10))
Playlists and other video support functions from volleyball match files.
Maintainer: Ben Raymond [email protected]
Authors:
Adrien Ickowicz
Other contributors:
openvolley.org [originator]