Classes and functions for reading and writing camera streams.
A camera may be used to document participant responses on video or used by the experimenter to create movie stimuli or instructions.
Class for displaying and recording video from a USB/PCI connected camera.
Get permission to access the camera.
Is the camera ready (bool)?
Size of the video frame obtained from recent metadata (float or None).
True if the video is presently recording (bool).
True if the stream may not have started yet (bool).
True if the recording has stopped (bool).
Video metadata retrieved during the last frame update (MovieMetadata).
Get stream metadata.
Get information about installed cameras on this system.
Get a mapping or list of camera descriptions.
Camera to use (str or None).
Microphone to record audio samples from during recording (Microphone or None).
Current stream time in seconds (float).
Current recording timestamp (float).
Current size of the recording in bytes (int).
Open the camera stream and begin decoding frames (if available).
Start recording frames.
Stop recording frames and audio (if available).
Close the camera.
Save the last recording to file.
File path to the last recording (str or None).
Most recent frame pulled from the camera (VideoFrame) since the last call of getVideoFrame.
Acquire the newest data from the camera stream.
Pull the most recent frame from the stream (if available).
Information about a specific operating mode for a camera attached to the system.
Camera index (int).
Camera name (str).
Resolution (w, h) in pixels (ArrayLike or None).
Frame rate (float) or range (ArrayLike).
Video pixel format (str).
Codec format, may be used instead of pixelFormat for some configurations.
Camera library these settings are targeted towards (str).
Camera API in use to obtain this information (str).
Get image size as a formatted string.
Get a description as a string.
Class for displaying and recording video from a USB/PCI connected camera.
This class is capable of opening, recording, and saving camera video streams to disk. Camera stream reading/writing is done in a separate thread, allowing capture to occur in the background while the main thread is free to perform other tasks. This allows for capture to occur at higher frame rates than the display refresh rate. Audio recording is also supported if a microphone interface is provided, where recording will be synchronized with the video stream (as best as possible). Video and audio can be saved to disk either as a single file or as separate files.
GNU/Linux is supported only by the OpenCV backend (cameraLib='opencv').
device (str or int) – Camera to open a stream with. If the ID is not valid, an error will be raised when open() is called. Value can be a string or number. String values are platform-dependent: a DirectShow URI or camera name on Windows, or a camera name/index on MacOS. Specifying a number (>=0) is a platform-independent means of selecting a camera. PsychoPy enumerates possible camera devices and makes them selectable without explicitly having the name of the cameras attached to the system. Use caution when specifying an integer, as the same index may not reference the same camera every time.
mic (Microphone or None) – Microphone to record audio samples from during recording. The microphone input device must not be in use when record() is called. The audio track will be merged with the video upon calling save(). Make sure that Microphone.maxRecordingSize is set to a reasonable value to prevent the audio track from being truncated. Specifying a microphone adds some latency to starting and stopping camera recording due to the added overhead involved with synchronizing the audio and video streams.
frameRate (int or None) – Frame rate to record the camera stream at. If None, the camera’s default frame rate will be used.
frameSize (tuple or None) – Size (width, height) of the camera stream frames to record. If None, the camera’s default frame size will be used.
cameraLib (str) – Interface library (backend) to use for accessing the camera. May be either ffpyplayer or opencv. If None, the default library recommended by the PsychoPy developers will be used. Switching camera libraries could help resolve issues with camera compatibility. More camera libraries may be installed via extension packages.
bufferSecs (float) – Size of the real-time camera stream buffer specified in seconds (only valid on Windows and MacOS). This is not the same as the recording buffer size. This option might not be available for all camera libraries.
win (Window or None) – Optional window associated with this camera. Some functionality may require an OpenGL context for presenting frames to the screen. If you are not planning to display the camera stream, this parameter can be safely ignored.
name (str) – Label for the camera for logging purposes.
Examples
Opening a camera stream and closing it:
camera = Camera(device=0)
camera.open() # exception here on invalid camera
camera.close()
Recording 5 seconds of video and saving it to disk:
cam = Camera(0)
cam.open()
cam.record() # starts recording
while cam.recordingTime < 5.0:  # record for 5 seconds
    if event.getKeys('q'):
        break
    cam.update()
cam.stop() # stops recording
cam.save('myVideo.mp4')
cam.close()
Providing a microphone as follows enables audio recording:
mic = Microphone(0)
cam = Camera(0, mic=mic)
Overriding the default frame rate and size (if cameraLib supports it):
cam = Camera(0, frameRate=30, frameSize=(640, 480), cameraLib='opencv')
Assert that the camera is ready. Raises a CameraNotReadyError if the camera is not ready.
Assert that we have a media player instance open.
This will raise a RuntimeError if there is no player open. Use this function to ensure that a player is present before running subsequent code.
Download video file to an online repository. Not implemented locally; needed for automatic translation to JS.
Pull waiting frames from the capture thread.
This function will pull frames from the capture thread and add them to the buffer. The last frame in the buffer will be set as the most recent frame (lastFrame).
True if a frame has been enqueued. Returns False if the camera is not ready or if the stream was closed.
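The capture-thread pattern described here can be sketched with a plain queue.Queue standing in for the camera stream. This is an illustrative sketch only; FrameBuffer and its methods are hypothetical stand-ins, not PsychoPy internals:

```python
import queue
import threading

class FrameBuffer:
    """Hypothetical sketch of the capture-thread pattern: a producer
    thread enqueues frames, the main thread drains them and keeps the
    newest one as `lastFrame` (not part of the PsychoPy API)."""
    def __init__(self):
        self._queue = queue.Queue()
        self.lastFrame = None

    def enqueueFrame(self, frame):
        # called from the capture thread
        self._queue.put(frame)

    def update(self):
        """Drain all waiting frames; the newest becomes `lastFrame`.
        Returns True if at least one frame was enqueued."""
        gotFrame = False
        while True:
            try:
                self.lastFrame = self._queue.get_nowait()
                gotFrame = True
            except queue.Empty:
                break
        return gotFrame

buf = FrameBuffer()
producer = threading.Thread(
    target=lambda: [buf.enqueueFrame(i) for i in range(3)])
producer.start()
producer.join()
buf.update()  # lastFrame now holds the newest frame
```

Draining the whole queue on each update, rather than taking one frame per call, is what lets capture proceed faster than the display refresh rate without the buffer growing unboundedly.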
Upload video file to an online repository. Not implemented locally; needed for automatic translation to JS.
Close the camera.
This will close the camera stream and free up any resources used by the device. If the camera is currently recording, this will stop the recording, but will not discard any frames. You may still call save() to save the frames to disk.
Camera to use (str or None).
String specifying the name of the camera to open a stream with. This must be set prior to calling start(). If the name is not valid, an error will be raised when start() is called.
Number of frames captured in the present recording (int).
Frame rate of the video stream (float or None).
Only valid after an open() and subsequent _enqueueFrame() call, as metadata needs to be obtained from the stream. Returns None if not valid.
Size of the video frame obtained from recent metadata (float or None).
Only valid after an open() and subsequent _enqueueFrame() call, as metadata needs to be obtained from the stream. Returns None if not valid.
Get a mapping or list of camera descriptions.
Camera descriptions are a compact way of representing camera settings and formats. Description strings can be used to specify which camera device and format to use with it to the Camera class.
Descriptions have the following format (example):
'[Live! Cam Sync 1080p] 160x120@30fps, mjpeg'
This shows a specific camera format for the 'Live! Cam Sync 1080p' webcam, which supports a 160x120 frame size at 30 frames per second. The last value is the codec or pixel format used to decode the stream. Different pixel formats and codecs vary in performance.
collapse (bool) – Return camera information as string descriptions instead of CameraInfo objects. This provides a more compact way of representing camera formats in a (reasonably) human-readable format.
Mapping (dict) of camera descriptions, where keys are camera names (str) and values are a list of format description strings associated with the camera. If collapse=True, all descriptions will be returned in a single flat list. This might be more useful for specifying camera formats from a single GUI list control.
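A description string in the format above can be split back into its parts with a small regex helper. This is a hypothetical sketch (parseCameraDescription is not part of the PsychoPy API), assuming descriptions follow the '[name] WxH@Nfps, format' pattern shown:

```python
import re

# Hypothetical helper, not part of the PsychoPy API: split a camera
# description of the form '[name] WxH@Nfps, format' into its fields.
_DESC_RE = re.compile(
    r"\[(?P<name>.+)\]\s*(?P<w>\d+)x(?P<h>\d+)@(?P<fps>\d+)fps,\s*(?P<fmt>\S+)")

def parseCameraDescription(desc):
    """Return (name, (width, height), fps, pixelFormat) for a description."""
    m = _DESC_RE.match(desc)
    if m is None:
        raise ValueError("not a valid camera description: {!r}".format(desc))
    return (m.group('name'),
            (int(m.group('w')), int(m.group('h'))),
            int(m.group('fps')),
            m.group('fmt'))

name, size, fps, fmt = parseCameraDescription(
    '[Live! Cam Sync 1080p] 160x120@30fps, mjpeg')
```

A parser like this could be useful when populating a GUI list control from the flat list returned with collapse=True.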
Get information about installed cameras on this system.
Mapping of camera information objects.
File path to the last saved recording.
This value is only valid if a previous recording has been saved to disk (save() was called).
Path to the file the most recent call to save() created. Returns None if no file is ready.
str or None
Get stream metadata.
Metadata about the video stream, retrieved during the last frame update (_enqueueFrame call).
MovieMetadata
Pull the most recent frame from the stream (if available).
Most recent video frame. Returns NULL_MOVIE_FRAME_INFO if no frame was available or the operation timed out.
MovieFrame
True if the stream may not have started yet (bool). This status is given before open() or after close() has been called on this object.
Is the camera ready (bool)?
The camera is ready when the following conditions are met. First, we've created a player interface and opened it. Second, we have received metadata about the stream. At this point we can assume that the camera is 'hot' and the stream is being read.
This is a legacy property used to support older versions of PsychoPy. The isOpened property should be used instead.
True if the video is presently recording (bool).
True if the stream has started (bool). This status is given after open() has been called on this object.
True if the recording has stopped (bool). This does not mean that the stream has stopped, getVideoFrame() will still yield frames until close() is called.
File path to the last recording (str or None).
This value is only valid if a previous recording has been saved successfully (save() was called), otherwise it will be set to None.
Most recent frame pulled from the camera (VideoFrame) since the last call of getVideoFrame.
Video metadata retrieved during the last frame update (MovieMetadata).
Microphone to record audio samples from during recording
(Microphone
or None).
If None, no audio will be recorded. Cannot be set after opening a camera stream.
Open the camera stream and begin decoding frames (if available).
This function returns when the camera is ready to start getting frames.
Call record() to start recording frames to memory. Captured frames can be saved to disk using save().
Start recording frames.
This function will start recording frames and audio (if available). The value of lastFrame will be updated as new frames arrive and the frameCount will increase. You can access image data for the most recent frame to be captured using lastFrame.
If this is called before open() the camera stream will be opened automatically. This is not recommended as it may incur a longer than expected delay in the recording start time.
Warning
If a recording has been previously made without calling save() it will be discarded if record() is called again.
Current size of the recording in bytes (int).
Current recording timestamp (float).
This returns the timestamp of the last frame captured in the recording.
This value increases monotonically from the last record() call. It will reset once stop() is called. This value is invalid outside record() and stop() calls.
Save the last recording to file.
This will write frames to filename acquired since the last call of record() and subsequent stop(). If record() is called again before save(), the previous recording will be deleted and lost.
This is a slow operation and will block for some time depending on the length of the video. This can be sped up by setting useThreads=True.
filename (str) – File to save the resulting video to, should include the extension.
useThreads (bool) – Use threading where possible to speed up the saving process. If True, the video will be saved and composited in a separate thread and this function will return quickly. If False, the video will be saved and composited in the main thread and this function will block until the video is saved. Default is True.
mergeAudio (bool) – Merge the audio track from the microphone with the video. If True, the audio track will be merged with the video. If False, the audio track will be saved to a separate file. Default is True.
encoderLib (str or None) – Encoder library to use for saving the video. This can be either 'ffpyplayer' or 'opencv'. If None, the same library that was used to open the camera stream will be used. Default is None.
encoderOpts (dict) – Options to pass to the encoder. This is a dictionary of options specific to the encoder library being used. See the documentation for psychopy.tools.movietools.MovieFileWriter for more details.
Current stream time in seconds (float). This time increases monotonically from startup.
This is -1.0 if there is no active stream running or if the backend does not support this feature.
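The streamTime semantics described above (monotonically increasing while a stream is open, -1.0 otherwise) can be modelled with a tiny clock class. This is a behavioural sketch only; StreamClock is hypothetical and not how the backends implement stream time:

```python
import time

class StreamClock:
    """Hypothetical sketch of streamTime semantics: monotonic seconds
    since the stream opened, or -1.0 when no stream is running."""
    def __init__(self):
        self._t0 = None  # None means no active stream

    def open(self):
        self._t0 = time.monotonic()  # mark stream start

    def close(self):
        self._t0 = None  # stream no longer running

    @property
    def streamTime(self):
        if self._t0 is None:
            return -1.0
        return time.monotonic() - self._t0

clock = StreamClock()
clock.streamTime  # -1.0 before the stream opens
```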
Acquire the newest data from the camera stream. If the Camera object is not being monitored by an ImageStim, this must be called explicitly.
Window in which frames are being presented (psychopy.visual.Window or None).
Information about a specific operating mode for a camera attached to the system.
index (int) – Index of the camera. This is the enumeration for the camera which is used to identify and select it by the cameraLib. This value may differ between operating systems and the cameraLib being used.
name (str) – Camera name retrieved by the OS. This may be a human-readable name (e.g., from DirectShow on Windows), an index on MacOS, or a path (e.g., /dev/video0 on Linux). If the cameraLib does not support this feature, this value will be generated.
frameSize (ArrayLike) – Resolution of the frame (w, h) in pixels.
frameRate (ArrayLike) – Allowable framerate for this camera mode.
pixelFormat (str) – Pixel format for the stream. If 'Null', then codecFormat is being used to configure the camera.
codecFormat (str) – Codec format for the stream. If 'Null', then pixelFormat is being used to configure the camera. Usually this value is used for high-def stream formats.
cameraLib (str) – Library used to access the camera. This can be either 'ffpyplayer' or 'opencv'.
cameraAPI (str) – API used to access the camera. This relates to the external interface being used by cameraLib to access the camera. This value can be: 'AVFoundation', 'DirectShow' or 'Video4Linux2'.
Camera API in use to obtain this information (str).
Camera library these settings are targeted towards (str).
Codec format, may be used instead of pixelFormat for some configurations. Default is ''.
Get a description as a string.
For all backends, this value is guaranteed to be valid after the camera has been opened. Some backends may be able to provide this information before the camera is opened.
Description of the camera format as a human readable string.
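A description string in the format documented earlier could be assembled from the CameraInfo fields roughly as follows; describeCamera is a hypothetical sketch, not the actual implementation:

```python
# Hypothetical sketch of how a camera description string could be
# assembled from CameraInfo-style fields; not the PsychoPy implementation.
def describeCamera(name, frameSize, frameRate, pixelFormat):
    """Format '[name] WxH@Nfps, format' from camera mode fields."""
    w, h = frameSize
    return '[{}] {}x{}@{}fps, {}'.format(
        name, w, h, int(frameRate), pixelFormat)

desc = describeCamera('Live! Cam Sync 1080p', (160, 120), 30.0, 'mjpeg')
```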
Frame rate (float) or range (ArrayLike).
Depends on the backend being used. If a range is provided, then the first value is the maximum and the second value is the minimum frame rate.
Resolution (w, h) in pixels (ArrayLike or None).
Get image size as a formatted string.
Size formatted as 'WxH' (e.g. '480x320').
Camera index (int). This is the enumerated index of this camera.
Camera name (str). This is the camera name retrieved by the OS.
Video pixel format (str). An empty string indicates this field is not initialized.