flash.media

ID3Info

The ID3Info class (extends Object) contains properties that reflect ID3 metadata. You can get additional metadata for MP3 files by accessing the id3 property of the Sound class; for example, mySound.id3.TIME. For more information, see the entry for Sound.id3 and the ID3 tag definitions at http://www.id3.org.

  • album:String: The name of the album; corresponds to the ID3 2.0 tag TALB.
  • artist:String: The name of the artist; corresponds to the ID3 2.0 tag TPE1.
  • comment:String: A comment about the recording; corresponds to the ID3 2.0 tag COMM.
  • genre:String: The genre of the song; corresponds to the ID3 2.0 tag TCON.
  • songName:String: The name of the song; corresponds to the ID3 2.0 tag TIT2.
  • track:String: The track number; corresponds to the ID3 2.0 tag TRCK.
  • year:String: The year of the recording; corresponds to the ID3 2.0 tag TYER.

See also: Sound.id3

MediaType

The MediaType class (extends Object) enumerates the general types of media that can be returned by a camera.

Use the constants defined in this class as input to the launch() method of the CameraUI class. MediaType values are also used in the mediaType property of the MediaPromise class.

See also: CameraUI.launch(), MediaPromise.mediaType

  • IMAGE:String = "image": A single image.
  • VIDEO:String = "video": A video.
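For context, a minimal sketch of how these constants are used with the native camera UI on mobile AIR (CameraUI.isSupported should be checked first; the trace message is illustrative):

```actionscript
// Sketch (frame-script style): request a still image from the
// device camera UI on mobile AIR.
import flash.events.MediaEvent;
import flash.media.CameraUI;
import flash.media.MediaType;

var cameraUI:CameraUI = new CameraUI();
if (CameraUI.isSupported) {
    cameraUI.addEventListener(MediaEvent.COMPLETE, mediaHandler);
    cameraUI.launch(MediaType.IMAGE); // or MediaType.VIDEO
}

function mediaHandler(event:MediaEvent):void {
    // event.data is a MediaPromise; its mediaType property holds one
    // of the MediaType constants ("image" for a still photo).
    trace("captured media of type: " + event.data.mediaType);
}
```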
scanHardware

Forces a rescan of the microphones and cameras on the system.

Camera

Use the Camera class (extends flash.events.EventDispatcher) to capture video from the client system's camera. Use the Video class to monitor the video locally. Use the NetConnection and NetStream classes to transmit the video to Flash Media Server. Flash Media Server can send the video stream to other servers and broadcast it to other clients running Flash Player.

A Camera instance captures video in landscape aspect ratio. On devices that can change the screen orientation, such as mobile phones, a Video object attached to the camera will only show upright video in a landscape-aspect orientation. Thus, mobile apps should use a landscape orientation when displaying video and should not auto-rotate.

As of AIR 2.6, autofocus is enabled automatically on mobile devices with an autofocus camera. If the camera does not support continuous autofocus, and many mobile device cameras do not, then the camera is focused when the Camera object is attached to a video stream and whenever the setMode() method is called. On desktop computers, autofocus behavior is dependent on the camera driver and settings.

In an AIR application on Android and iOS, the camera does not capture video while an AIR app is not the active, foreground application. In addition, streaming connections can be lost when the application is in the background. On iOS, the camera video cannot be displayed when an application uses the GPU rendering mode. The camera video can still be streamed to a server.

Mobile Browser Support: This class is not supported in mobile browsers.

AIR profile support: This feature is supported on desktop operating systems, but it is not supported on all mobile devices. It is not supported on AIR for TV devices. See AIR Profile Support for more information regarding API support across multiple profiles.

You can test for support at run time using the Camera.isSupported property. Note that for AIR for TV devices, Camera.isSupported is true but Camera.getCamera() always returns null.
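A minimal run-time guard based on this property might look like the following sketch (the trace messages are illustrative):

```actionscript
import flash.media.Camera;

if (Camera.isSupported) {
    var cam:Camera = Camera.getCamera();
    if (cam == null) {
        // Possible on AIR for TV, where isSupported is true
        // but getCamera() always returns null.
        trace("Camera class supported, but no camera is available.");
    }
} else {
    trace("The Camera class is not supported on this platform.");
}
```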

For information about capturing audio, see the Microphone class.

Important: Flash Player displays a Privacy dialog box that lets the user choose whether to allow or deny access to the camera. Make sure your application window size is at least 215 x 138 pixels; this is the minimum size required to display the dialog box.

To create or reference a Camera object, use the getCamera() method.

The following example shows the image from a camera after acknowledging the security warning. The Stage is set such that it cannot be scaled and is aligned to the top-left of the player window. The activity event is dispatched at the start and end (if any) of the session and is captured by the activityHandler() method, which prints out information about the event.

Note: A camera must be attached to your computer for this example to work correctly.

package {
    import flash.display.Sprite;
    import flash.display.StageAlign;
    import flash.display.StageScaleMode;
    import flash.events.*;
    import flash.media.Camera;
    import flash.media.Video;

    public class CameraExample extends Sprite {
        private var video:Video;

        public function CameraExample() {
            stage.scaleMode = StageScaleMode.NO_SCALE;
            stage.align = StageAlign.TOP_LEFT;

            var camera:Camera = Camera.getCamera();
            if (camera != null) {
                camera.addEventListener(ActivityEvent.ACTIVITY, activityHandler);
                video = new Video(camera.width * 2, camera.height * 2);
                video.attachCamera(camera);
                addChild(video);
            } else {
                trace("You need a camera.");
            }
        }

        private function activityHandler(event:ActivityEvent):void {
            trace("activityHandler: " + event);
        }
    }
}
See also: flash.media.Microphone; Cristophe Coenraets: Video Chat for Android in 30 Lines of Code; Michael Chaize: Android, AIR, and the Camera

status (flash.events.StatusEvent.STATUS)

Dispatched when a camera reports its status. Before accessing a camera, Flash Player displays a Privacy dialog box to let users allow or deny access to their camera. If the value of the code property is "Camera.Muted", the user has refused to allow the SWF file access to the camera. If the value of the code property is "Camera.Unmuted", the user has allowed the SWF file access to the camera.

See also: Camera.getCamera()

activity (flash.events.ActivityEvent.ACTIVITY)

Dispatched when a camera begins or ends a session. Call Camera.setMotionLevel() to specify the amount of motion required to trigger an activity event with an activating value of true, or the time without activity that must elapse before triggering an activity event with an activating value of false.

getCamera

Returns a reference to a Camera object for capturing video.

Parameters:
  • name:String (default = null): Specifies which camera to get, as determined from the array returned by the names property. For most applications, get the default camera by omitting this parameter. To specify a value for this parameter, use the string representation of the zero-based index position within the Camera.names array. For example, to specify the third camera in the array, use Camera.getCamera("2").

Returns: flash.media:Camera

If the name parameter is not specified, this method returns a reference to the default camera or, if that camera is in use by another application, to the first available camera. (If more than one camera is installed, the user may specify the default camera in the Flash Player Camera Settings panel.) If no cameras are available or installed, the method returns null.
To begin capturing the video, you must attach the Camera object to a Video object (see Video.attachCamera() ). To transmit video to Flash Media Server, call NetStream.attachCamera() to attach the Camera object to a NetStream object.

Multiple calls to the getCamera() method reference the same camera driver. Thus, if your code contains statements such as var firstCam:Camera = Camera.getCamera() and var secondCam:Camera = Camera.getCamera(), both firstCam and secondCam reference the same camera, which is the user's default camera.

On iOS devices with both a front- and a rear-facing camera, you can only capture video from one camera at a time. On Android devices, you can only access the rear-facing camera.

In general, you shouldn't pass a value for the name parameter; simply use getCamera() to return a reference to the default camera. By means of the Camera settings panel (discussed later in this section), the user can specify the default camera to use.
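For the uncommon case where the default camera is not the one you want, here is a sketch of listing the attached cameras and selecting one explicitly by its zero-based index:

```actionscript
import flash.media.Camera;

// List every attached camera by its index in Camera.names.
for (var i:int = 0; i < Camera.names.length; i++) {
    trace(i + ": " + Camera.names[i]);
}

// Pass the index as a string to select a specific camera;
// "2" selects the third camera, if one exists.
var chosenCam:Camera = Camera.getCamera("2");
```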

You can't use ActionScript to set a user's Allow or Deny permission for camera access, but you can display the Adobe Flash Player Settings camera dialog box, where the user can set the camera permission. When a SWF file that uses the attachCamera() method tries to attach the camera returned by the getCamera() method to a Video or NetStream object, Flash Player displays a dialog box that lets the user choose to allow or deny access to the camera. (Make sure your application window size is at least 215 x 138 pixels; this is the minimum size Flash Player requires to display the dialog box.) When the user responds to the camera setting dialog box, Flash Player returns an information object in the status event that indicates the user's response: "Camera.Muted" indicates the user denied access to the camera; "Camera.Unmuted" indicates the user allowed access to the camera. To determine whether the user has denied or allowed access to the camera without handling the status event, use the muted property.

In Flash Player, the user can specify permanent privacy settings for a particular domain by right-clicking (Windows and Linux) or Control-clicking (Macintosh) while a SWF file is playing, selecting Settings, opening the Privacy dialog, and selecting Remember. If the user selects Remember, Flash Player no longer asks the user whether to allow or deny SWF files from this domain access to your camera.

Note: The attachCamera() method will not invoke the dialog box to Allow or Deny access to the camera if the user has denied access by selecting Remember in the Flash Player Settings dialog box. In this case, you can prompt the user to change the Allow or Deny setting by displaying the Flash Player Privacy panel for the user using Security.showSettings(SecurityPanel.PRIVACY).

If getCamera() returns null, either the camera is in use by another application, or there are no cameras installed on the system. To determine whether any cameras are installed, use the names.length property. To display the Flash Player Camera Settings panel, which lets the user choose the camera to be referenced by getCamera(), use Security.showSettings(SecurityPanel.CAMERA).

Scanning the hardware for cameras takes time. When the runtime finds at least one camera, the hardware is not scanned again for the lifetime of the player instance. However, if the runtime doesn't find any cameras, it scans each time getCamera() is called. This is helpful if the camera is present but disabled; if your SWF file provides a Try Again button that calls getCamera(), Flash Player can find the camera without the user having to restart the SWF file.
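A sketch of the Try Again pattern described above (retryButton is a hypothetical button instance already on the display list):

```actionscript
import flash.events.MouseEvent;
import flash.media.Camera;

function retryHandler(event:MouseEvent):void {
    // Because no camera was found earlier, this call triggers
    // a fresh hardware scan.
    var cam:Camera = Camera.getCamera();
    if (cam != null) {
        trace("Camera found on retry.");
    } else {
        trace("Still no camera; check that it is enabled.");
    }
}
retryButton.addEventListener(MouseEvent.CLICK, retryHandler);
```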

In the following example, after the user allows access to the camera, the attached camera is used to capture video images. Information about the video stream, such as the current frames per second, is also displayed.

The Camera.getCamera() method returns a reference to a camera object, or returns null if no camera is available or installed. The if statement checks whether the camera was found and whether the user allowed access to the camera. If the user denied access, the muted property is set to true.

Usually, when the attachCamera() method is invoked, a dialog box appears and prompts the user to allow or deny Flash Player access to the camera. However, if the user denied access and selected the Remember option, the dialog box does not appear and nothing displays. To make sure the user has the option to allow access to the camera, the myTextField text field instructs the user to click the text field to invoke the Flash Player Settings dialog box.

The clickHandler() method calls the Security.showSettings() method, which displays the PRIVACY panel of the Settings dialog box. If the user allows access, the StatusEvent.STATUS event is dispatched and the value of the event's code property is set to "Camera.Unmuted". (The Camera object's muted property is also set to false.)

The statusHandler() method, added to listen to the status change of the user's setting, invokes the connectCamera() method, if the user allows access. The connectCamera() method instantiates a video object with the captured stream's width and height. To display the camera's captured video, the reference to the video stream is attached to the video object, and the video object is added to the display list.

A Timer object also is started. Every second, the Timer object's timer event is dispatched and the timerHandler() method is invoked. The timerHandler() method displays and updates a number of properties of the Camera object.

Note: For this example, the only property that changes is the currentFPS property.

package {
    import flash.display.Sprite;
    import flash.media.Camera;
    import flash.media.Video;
    import flash.text.TextField;
    import flash.text.TextFieldAutoSize;
    import flash.utils.Timer;
    import flash.events.TimerEvent;
    import flash.events.StatusEvent;
    import flash.events.MouseEvent;
    import flash.system.SecurityPanel;
    import flash.system.Security;

    public class Camera_getCameraExample extends Sprite {
        private var myTextField:TextField;
        private var cam:Camera;
        private var t:Timer = new Timer(1000);

        public function Camera_getCameraExample() {
            myTextField = new TextField();
            myTextField.x = 10;
            myTextField.y = 10;
            myTextField.background = true;
            myTextField.selectable = false;
            myTextField.autoSize = TextFieldAutoSize.LEFT;

            if (Camera.isSupported) {
                cam = Camera.getCamera();
                if (!cam) {
                    myTextField.text = "No camera is installed.";
                } else if (cam.muted) {
                    myTextField.text = "To enable the use of the camera,\n"
                        + "please click on this text field.\n"
                        + "When the Flash Player Settings dialog appears,\n"
                        + "make sure to select the Allow radio button\n"
                        + "to grant access to your camera.";
                    myTextField.addEventListener(MouseEvent.CLICK, clickHandler);
                } else {
                    myTextField.text = "Connecting";
                    connectCamera();
                }
                t.addEventListener(TimerEvent.TIMER, timerHandler);
            } else {
                myTextField.text = "The Camera class is not supported on this device.";
            }
            // Add the text field in all cases, including when the
            // Camera class is unsupported, so the message is visible.
            addChild(myTextField);
        }

        private function clickHandler(e:MouseEvent):void {
            Security.showSettings(SecurityPanel.PRIVACY);
            cam.addEventListener(StatusEvent.STATUS, statusHandler);
            myTextField.removeEventListener(MouseEvent.CLICK, clickHandler);
        }

        private function statusHandler(event:StatusEvent):void {
            if (event.code == "Camera.Unmuted") {
                connectCamera();
                cam.removeEventListener(StatusEvent.STATUS, statusHandler);
            }
        }

        private function connectCamera():void {
            var vid:Video = new Video(cam.width, cam.height);
            vid.x = 10;
            vid.y = 10;
            vid.attachCamera(cam);
            addChild(vid);
            t.start();
        }

        private function timerHandler(event:TimerEvent):void {
            myTextField.y = cam.height + 20;
            myTextField.text = "";
            myTextField.appendText("bandwidth: " + cam.bandwidth + "\n");
            myTextField.appendText("currentFPS: " + Math.round(cam.currentFPS) + "\n");
            myTextField.appendText("fps: " + cam.fps + "\n");
            myTextField.appendText("keyFrameInterval: " + cam.keyFrameInterval + "\n");
        }
    }
}
See also: index, muted, names, setMode(), status, Video.attachCamera()
setKeyFrameInterval

Specifies which video frames are transmitted in full (called keyframes) instead of being interpolated by the video compression algorithm. This method is applicable only if you are transmitting video using Flash Media Server.

Parameters:
  • keyFrameInterval:int: A value that specifies which video frames are transmitted in full (as keyframes) instead of being interpolated by the video compression algorithm. A value of 1 means that every frame is a keyframe, a value of 3 means that every third frame is a keyframe, and so on. Acceptable values are 1 through 48.

The Flash Video compression algorithm compresses video by transmitting only what has changed since the last frame of the video; these portions are considered to be interpolated frames. Frames of a video can be interpolated according to the contents of the previous frame. A keyframe, however, is a video frame that is complete; it is not interpolated from prior frames.

To determine how to set a value for the keyFrameInterval parameter, consider both bandwidth use and video playback accessibility. For example, specifying a higher value for keyFrameInterval (sending keyframes less frequently) reduces bandwidth use. However, this may increase the amount of time required to position the playhead at a particular point in the video; more prior video frames may have to be interpolated before the video can resume.

Conversely, specifying a lower value for keyFrameInterval (sending keyframes more frequently) increases bandwidth use because entire video frames are transmitted more often, but may decrease the amount of time required to seek a particular video frame within a recorded video.
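The trade-off can be sketched as follows (the interval values are illustrative, within the allowed 1 through 48 range):

```actionscript
import flash.media.Camera;

var cam:Camera = Camera.getCamera();
if (cam != null) {
    cam.setKeyFrameInterval(48); // fewest keyframes: lowest bandwidth, slowest seeking
    // cam.setKeyFrameInterval(1); // every frame a keyframe: fastest seeking, most bandwidth
    trace("keyFrameInterval: " + cam.keyFrameInterval);
}
```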

See also: keyFrameInterval
setLoopback

Specifies whether to use a compressed video stream for a local view of the camera. This method is applicable only if you are transmitting video using Flash Media Server; setting compress to true lets you see more precisely how the video will appear to users when they view it in real time.

Parameters:
  • compress:Boolean (default = false): Specifies whether to use a compressed video stream (true) or an uncompressed stream (false) for a local view of what the camera is receiving.

Although a compressed stream is useful for testing purposes, such as previewing video quality settings, it has a significant processing cost, because the local view is not simply compressed; it is compressed, edited for transmission as it would be over a live connection, and then decompressed for local viewing.

To set the amount of compression used when you set compress to true, use Camera.setQuality().
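A minimal sketch of previewing the compressed feed locally before publishing (the quality values are illustrative):

```actionscript
import flash.media.Camera;

var cam:Camera = Camera.getCamera();
if (cam != null) {
    cam.setQuality(16384, 0); // compression settings the local preview will reflect
    cam.setLoopback(true);    // local view is compressed, then decompressed
}
```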

See also: setQuality()
setMode

Sets the camera capture mode to the native mode that best meets the specified requirements. If the camera does not have a native mode that matches all the parameters you pass, Flash Player selects a capture mode that most closely synthesizes the requested mode. This manipulation may involve cropping the image and dropping frames.

Parameters:
  • width:int: The requested capture width, in pixels. The default value is 160.
  • height:int: The requested capture height, in pixels. The default value is 120.
  • fps:Number: The requested rate at which the camera should capture data, in frames per second. The default value is 15.
  • favorArea:Boolean (default = true): Specifies whether to manipulate the width, height, and frame rate if the camera does not have a native mode that meets the specified requirements. The default value is true, which means that maintaining capture size is favored; using this parameter selects the mode that most closely matches the width and height values, even if doing so adversely affects performance by reducing the frame rate. To maximize frame rate at the expense of camera height and width, pass false for the favorArea parameter.

By default, Flash Player drops frames as needed to maintain image size. To minimize the number of dropped frames, even if this means reducing the size of the image, pass false for the favorArea parameter.

When choosing a native mode, Flash Player tries to maintain the requested aspect ratio whenever possible. For example, if you issue the command myCam.setMode(400, 400, 30), and the maximum width and height values available on the camera are 320 and 288, Flash Player sets both the width and height at 288; by setting these properties to the same value, Flash Player maintains the 1:1 aspect ratio you requested.

To determine the values assigned to these properties after Flash Player selects the mode that most closely matches your requested values, use the width, height, and fps properties.

If you are using Flash Media Server, you can also capture single frames or create time-lapse photography. For more information, see NetStream.attachCamera().

In the following example, when a user clicks on the Stage, the video is resized and the frames per second capture rate is set to a new value.

The Stage is set so it does not scale. The Camera.getCamera() method returns a reference to a camera object, or returns null if no camera is available or installed. If a camera exists, the connectCamera() method is called. The connectCamera() method instantiates a Video object. To display the camera's captured video, the reference to the video stream is attached to the Video object, and the Video object is added to the display list. An event listener also is set for a MouseEvent.CLICK event. After the user clicks on the Stage, the clickHandler() method is invoked. The method checks the width of the captured video and sets the camera capture mode's width, height, and requested frames-per-second rate. For these settings to take effect, the Video object must be removed and re-created. The Video object's width and height also must be set to the Camera object's width and height.

package {
    import flash.display.Sprite;
    import flash.media.Camera;
    import flash.media.Video;
    import flash.events.MouseEvent;
    import flash.display.StageScaleMode;

    public class Camera_setModeExample extends Sprite {
        private var cam:Camera;
        private var vid:Video;

        public function Camera_setModeExample() {
            stage.scaleMode = StageScaleMode.NO_SCALE;
            cam = Camera.getCamera();
            if (!cam) {
                trace("No camera is installed.");
            } else {
                connectCamera();
            }
        }

        private function connectCamera():void {
            vid = new Video();
            vid.width = cam.width;
            vid.height = cam.height;
            vid.attachCamera(cam);
            addChild(vid);
            stage.addEventListener(MouseEvent.CLICK, clickHandler);
        }

        private function clickHandler(e:MouseEvent):void {
            switch (cam.width) {
                case 160:
                    cam.setMode(320, 240, 10);
                    break;
                case 320:
                    cam.setMode(640, 480, 5);
                    break;
                default:
                    cam.setMode(160, 120, 15);
                    break;
            }
            removeChild(vid);
            connectCamera();
        }
    }
}
See also: fps, height, width, flash.net.NetStream.attachCamera()
setMotionLevel

Specifies how much motion is required to dispatch the activity event. Optionally sets the number of milliseconds that must elapse without activity before Flash Player considers motion to have stopped and dispatches the event.

Parameters:
  • motionLevel:int: Specifies the amount of motion required to dispatch the activity event. Acceptable values range from 0 to 100. The default value is 50.
  • timeout:int (default = 2000): Specifies how many milliseconds must elapse without activity before Flash Player considers activity to have stopped and dispatches the activity event. The default value is 2000 milliseconds (2 seconds).

Note: Video can be displayed regardless of the value of the motionLevel parameter. This parameter determines only when and under what circumstances the event is dispatched—not whether video is actually being captured or displayed.

To prevent the camera from detecting motion at all, pass a value of 100 for the motionLevel parameter; the activity event is never dispatched. (You would probably use this value only for testing purposes—for example, to temporarily disable any handlers that would normally be triggered when the event is dispatched.)

To determine the amount of motion the camera is currently detecting, use the activityLevel property. Motion sensitivity values correspond directly to activity values. Complete lack of motion is an activity value of 0. Constant motion is an activity value of 100. Your activity value is less than your motion sensitivity value when you're not moving; when you are moving, activity values frequently exceed your motion sensitivity value.

This method is similar in purpose to the Microphone.setSilenceLevel() method; both methods are used to specify when the activity event should be dispatched. However, these methods have a significantly different impact on publishing streams:

  • Microphone.setSilenceLevel() is designed to optimize bandwidth. When an audio stream is considered silent, no audio data is sent. Instead, a single message is sent, indicating that silence has started.
  • Camera.setMotionLevel() is designed to detect motion and does not affect bandwidth usage. Even if a video stream does not detect motion, video is still sent.
In the following example, the user's camera is used as a monitor or a surveillance camera. The camera detects motion and a text field shows the activity level. (The example can be extended to sound an alarm or send a message through a web service to other applications.)

The Camera.getCamera() method returns a reference to a camera object, or returns null if no camera is available or installed. The if statement checks whether a camera is available, and invokes the connectCamera() method when it is. The connectCamera() method instantiates a Video object with the captured stream's width and height. To display the camera's captured video, the reference to the video stream is attached to the Video object, and the Video object is added to the display list. (Usually, when the attachCamera() method is invoked, a dialog box appears and prompts the user to allow or deny Flash Player access to the camera. However, if the user denied access and selected the Remember option, the dialog box does not appear and nothing is displayed. To make sure the user has the option to allow access to the camera, use the Security.showSettings() method to invoke the Flash Player Settings dialog box.)

The setMotionLevel() method sets the level of activity (amount of motion) required to invoke the activity event to 5, for minimal motion. The time between when the camera stops detecting motion and when the activity event is invoked is set to 1 second (1000 milliseconds). After 1 second passes without activity, or when the level of activity reaches 5, the ActivityEvent.ACTIVITY event is dispatched and the activityHandler() method is invoked. If the event was triggered by the level of activity, the activating property is set to true and a Timer object is started. Every second, the Timer object's timer event is dispatched and the timerHandler() method is invoked, which displays the current level of activity. (Although a level of 5 or larger triggers the timer, the displayed current level of activity might be a smaller number.)

package {
    import flash.display.Sprite;
    import flash.media.Camera;
    import flash.media.Video;
    import flash.text.TextField;
    import flash.text.TextFieldAutoSize;
    import flash.utils.Timer;
    import flash.events.TimerEvent;
    import flash.events.ActivityEvent;

    public class Camera_setMotionLevelExample extends Sprite {
        private var myTextField:TextField;
        private var cam:Camera;
        private var t:Timer = new Timer(1000);

        public function Camera_setMotionLevelExample() {
            myTextField = new TextField();
            myTextField.background = true;
            myTextField.selectable = false;
            myTextField.autoSize = TextFieldAutoSize.LEFT;

            cam = Camera.getCamera();
            if (!cam) {
                myTextField.text = "No camera is installed.";
            } else {
                myTextField.text = "Waiting to connect.";
                connectCamera();
            }
            addChild(myTextField);
            t.addEventListener(TimerEvent.TIMER, timerHandler);
        }

        private function connectCamera():void {
            var vid:Video = new Video(cam.width, cam.height);
            vid.x = 10;
            vid.y = 10;
            vid.attachCamera(cam);
            addChild(vid);
            cam.setMotionLevel(5, 1000);
            cam.addEventListener(ActivityEvent.ACTIVITY, activityHandler);
        }

        private function activityHandler(e:ActivityEvent):void {
            if (e.activating == true) {
                t.start();
            } else {
                myTextField.text = "Everything is quiet.";
                t.stop();
            }
        }

        private function timerHandler(event:TimerEvent):void {
            myTextField.x = 10;
            myTextField.y = cam.height + 20;
            myTextField.text = "There is some activity. Level: " + cam.activityLevel;
        }
    }
}
See also: motionLevel, motionTimeout, Microphone.setSilenceLevel()
setQuality

Sets the maximum amount of bandwidth per second or the required picture quality of the current outgoing video feed. This method is generally applicable only if you are transmitting video using Flash Media Server.

Parameters:
  • bandwidth:int: Specifies the maximum amount of bandwidth that the current outgoing video feed can use, in bytes per second. To specify that Flash Player video can use as much bandwidth as needed to maintain the value of quality, pass 0 for bandwidth. The default value is 16384.
  • quality:int: An integer that specifies the required level of picture quality, as determined by the amount of compression being applied to each video frame. Acceptable values range from 1 (lowest quality, maximum compression) to 100 (highest quality, no compression). To specify that picture quality can vary as needed to avoid exceeding bandwidth, pass 0 for quality.

Use this method to specify which element of the outgoing video feed is more important to your application—bandwidth use or picture quality.

  • To indicate that bandwidth use takes precedence, pass a value for bandwidth and 0 for quality. Flash Player transmits video at the highest quality possible within the specified bandwidth. If necessary, Flash Player reduces picture quality to avoid exceeding the specified bandwidth. In general, as motion increases, quality decreases.
  • To indicate that quality takes precedence, pass 0 for bandwidth and a numeric value for quality. Flash Player uses as much bandwidth as required to maintain the specified quality. If necessary, Flash Player reduces the frame rate to maintain picture quality. In general, as motion increases, bandwidth use also increases.
  • To specify that both bandwidth and quality are equally important, pass numeric values for both parameters. Flash Player transmits video that achieves the specified quality and that doesn't exceed the specified bandwidth. If necessary, Flash Player reduces the frame rate to maintain picture quality without exceeding the specified bandwidth.
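The three cases above can be sketched as follows (the bandwidth and quality numbers are illustrative):

```actionscript
import flash.media.Camera;

var cam:Camera = Camera.getCamera();
if (cam != null) {
    cam.setQuality(16384, 0);  // bandwidth takes precedence (bytes per second)
    cam.setQuality(0, 80);     // picture quality takes precedence (1-100)
    cam.setQuality(16384, 80); // balance both constraints
}
```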
See also: getCamera(), quality
activityLevel:Number

The amount of motion the camera is detecting. Values range from 0 (no motion is being detected) to 100 (a large amount of motion is being detected). The value of this property can help you determine if you need to pass a setting to the setMotionLevel() method.

If the camera is available but is not yet being used because the Video.attachCamera() method has not been called, this property is set to -1.

If you are streaming only uncompressed local video, this property is set only if you have assigned a function to the event handler. Otherwise, it is undefined.

See also: motionLevel, setMotionLevel()
bandwidth:int

The maximum amount of bandwidth the current outgoing video feed can use, in bytes. A value of 0 means the feed can use as much bandwidth as needed to maintain the desired frame quality.

To set this property, use the setQuality() method.

See also: setQuality()
currentFPS:Number

The rate at which the camera is capturing data, in frames per second. This property cannot be set; however, you can use the setMode() method to set a related property, fps, which specifies the maximum frame rate at which you would like the camera to capture data.

See also: setMode()

fps:Number

The maximum rate at which the camera can capture data, in frames per second. The maximum rate possible depends on the capabilities of the camera; this frame rate may not be achieved.
  • To set a desired value for this property, use the setMode() method.
  • To determine the rate at which the camera is currently capturing data, use the currentFPS property.
currentFPSsetMode()
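As a hedged illustration of the relationship between fps and currentFPS (not part of the original reference; assumes a camera is available), you can request a capture mode with setMode() and then compare the requested maximum rate against what the camera actually achieves:

```actionscript
import flash.media.Camera;

var cam:Camera = Camera.getCamera();
if (cam != null) {
    // Request 320x240 at 30 fps; the final argument, favorArea, is true by
    // default, so the camera keeps the requested area even if it must
    // fall back to a lower frame rate.
    cam.setMode(320, 240, 30);
    trace("Requested maximum rate (fps): " + cam.fps);
    // currentFPS is read-only and reports the rate actually achieved;
    // it may be lower than fps depending on the camera's capabilities.
    trace("Actual capture rate (currentFPS): " + cam.currentFPS);
}
```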
height The current capture height, in pixels.int The current capture height, in pixels. To set a value for this property, use the setMode() method. widthsetMode()index A zero-based integer that specifies the index of the camera, as reflected in the array returned by the names property.int A zero-based integer that specifies the index of the camera, as reflected in the array returned by the names property. namesgetCamera()isSupported The isSupported property is set to true if the Camera class is supported on the current platform, otherwise it is set to false.Boolean The isSupported property is set to true if the Camera class is supported on the current platform, otherwise it is set to false. keyFrameInterval The number of video frames transmitted in full (called keyframes) instead of being interpolated by the video compression algorithm.int The number of video frames transmitted in full (called keyframes) instead of being interpolated by the video compression algorithm. The default value is 15, which means that every 15th frame is a keyframe. A value of 1 means that every frame is a keyframe. The allowed values are 1 through 48. setKeyFrameInterval()loopback Indicates whether a local view of what the camera is capturing is compressed and decompressed (true), as it would be for live transmission using Flash Media Server, or uncompressed (false).Boolean Indicates whether a local view of what the camera is capturing is compressed and decompressed (true), as it would be for live transmission using Flash Media Server, or uncompressed (false). The default value is false.

Although a compressed stream is useful for testing, such as when previewing video quality settings, it has a significant processing cost. The local view is compressed, edited for transmission as it would be over a live connection, and then decompressed for local viewing.

To set this value, use Camera.setLoopback(). To set the amount of compression used when this property is true, use Camera.setQuality().

setLoopback()setQuality()
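A minimal sketch of previewing the compressed stream locally (hypothetical usage, assuming this code runs in a display object context so addChild() is available):

```actionscript
import flash.media.Camera;
import flash.media.Video;

var cam:Camera = Camera.getCamera();
if (cam != null) {
    // Compress and decompress the local view, so the preview
    // approximates what remote viewers would see over a live connection.
    cam.setLoopback(true);
    // The quality setting controls the compression applied to the
    // loopback view (here, fixed quality of 70 with unconstrained bandwidth).
    cam.setQuality(0, 70);
    var video:Video = new Video(cam.width, cam.height);
    video.attachCamera(cam);
    addChild(video);
}
```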
motionLevel The amount of motion required to invoke the activity event.int The amount of motion required to invoke the activity event. Acceptable values range from 0 to 100. The default value is 50.

Video can be displayed regardless of the value of the motionLevel property. For more information, see setMotionLevel().

setMotionLevel()
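For example, a sketch (assuming a camera is available; the threshold and timeout values are illustrative) of setting the motion threshold and reacting to the activity event:

```actionscript
import flash.events.ActivityEvent;
import flash.media.Camera;

var cam:Camera = Camera.getCamera();
if (cam != null) {
    // Dispatch the activity event only when activityLevel exceeds 30,
    // and wait 1000 ms of stillness before signaling that motion stopped.
    cam.setMotionLevel(30, 1000);
    cam.addEventListener(ActivityEvent.ACTIVITY,
        function(event:ActivityEvent):void {
            // event.activating is true when motion starts, false when it stops.
            trace("Motion " + (event.activating ? "started" : "stopped") +
                  ", current level: " + cam.activityLevel);
        });
}
```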
motionTimeout The number of milliseconds between the time the camera stops detecting motion and the time the activity event is invoked.int The number of milliseconds between the time the camera stops detecting motion and the time the activity event is invoked. The default value is 2000 (2 seconds).

To set this value, use setMotionLevel().

setMotionLevel()
muted A Boolean value indicating whether the user has denied access to the camera (true) or allowed access (false) in the Flash Player Privacy dialog box.Boolean A Boolean value indicating whether the user has denied access to the camera (true) or allowed access (false) in the Flash Player Privacy dialog box. When this value changes, the status event is dispatched. getCamera()statusname The name of the current camera, as returned by the camera hardware.String The name of the current camera, as returned by the camera hardware. namesgetCamera()names An array of strings indicating the names of all available cameras without displaying the Flash Player Privacy dialog box.Array An array of strings indicating the names of all available cameras without displaying the Flash Player Privacy dialog box. This array behaves in the same way as any other ActionScript array, implicitly providing the zero-based index of each camera and the number of cameras on the system (by means of names.length). For more information, see the Array class entry.

Accessing the names property requires an extensive examination of the hardware. In most cases, you can just use the default camera.

On Android, only one camera is supported, even if the device has more than one camera. The camera name is always "Default."

getCamera()indexname
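A brief sketch of enumerating the available cameras and requesting a specific one (illustrative only; note that getCamera() takes the zero-based index as a String):

```actionscript
import flash.media.Camera;

// List available cameras without triggering the Privacy dialog box.
var cameraNames:Array = Camera.names;
for (var i:int = 0; i < cameraNames.length; i++) {
    trace(i + ": " + cameraNames[i]);
}

// Request the second camera, if one exists, by passing the string
// representation of its index; with no argument, the default camera is used.
var cam:Camera = (cameraNames.length > 1)
    ? Camera.getCamera("1")
    : Camera.getCamera();
```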
quality The required level of picture quality, as determined by the amount of compression being applied to each video frame.int The required level of picture quality, as determined by the amount of compression being applied to each video frame. Acceptable quality values range from 1 (lowest quality, maximum compression) to 100 (highest quality, no compression). The default value is 0, which means that picture quality can vary as needed to avoid exceeding available bandwidth.

To set this property, use the setQuality() method.

setQuality()
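The bandwidth/quality trade-off can be sketched as follows (hypothetical values, assuming a camera is available):

```actionscript
import flash.media.Camera;

var cam:Camera = Camera.getCamera();
if (cam != null) {
    // Cap the outgoing feed at 16384 bytes and let picture quality
    // vary as needed (quality = 0).
    cam.setQuality(16384, 0);

    // ...or demand a fixed picture quality of 80 and let bandwidth
    // vary as needed (bandwidth = 0):
    cam.setQuality(0, 80);

    trace("bandwidth: " + cam.bandwidth + ", quality: " + cam.quality);
}
```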
width The current capture width, in pixels.int The current capture width, in pixels. To set a desired value for this property, use the setMode() method. setMode()
SoundTransform The SoundTransform class contains properties for volume and panning.Object The SoundTransform class contains properties for volume and panning. The following example loads and plays an MP3 file. As the MP3 file plays, move the mouse or other user input device; the volume and panning change as you move the user input device over the Stage. To run this example, place a file named MySound.mp3 in the same directory as your SWF file. package { import flash.display.Sprite; import flash.display.StageAlign; import flash.display.StageScaleMode; import flash.events.*; import flash.media.Sound; import flash.media.SoundChannel; import flash.media.SoundTransform; import flash.net.URLRequest; import flash.utils.Timer; public class SoundTransformExample extends Sprite { private var url:String = "MySound.mp3"; private var soundFactory:Sound; private var channel:SoundChannel; private var positionTimer:Timer; public function SoundTransformExample() { stage.align = StageAlign.TOP_LEFT; stage.scaleMode = StageScaleMode.NO_SCALE; var request:URLRequest = new URLRequest(url); soundFactory = new Sound(); soundFactory.addEventListener(IOErrorEvent.IO_ERROR, ioErrorHandler); soundFactory.load(request); channel = soundFactory.play(); stage.addEventListener(MouseEvent.MOUSE_MOVE, mouseMoveHandler); } private function ioErrorHandler(event:Event):void { trace("ioErrorHandler: " + event); } private function setPan(pan:Number):void { trace("setPan: " + pan.toFixed(2)); var transform:SoundTransform = channel.soundTransform; transform.pan = pan; channel.soundTransform = transform; } private function setVolume(volume:Number):void { trace("setVolume: " + volume.toFixed(2)); var transform:SoundTransform = channel.soundTransform; transform.volume = volume; channel.soundTransform = transform; } private function mouseMoveHandler(event:MouseEvent):void { var halfStage:uint = Math.floor(stage.stageWidth / 2); var xPos:uint = event.stageX; var yPos:uint = event.stageY; var value:Number; var 
pan:Number; if (xPos > halfStage) { value = xPos / halfStage; pan = value - 1; } else if (xPos < halfStage) { value = (xPos - halfStage) / halfStage; pan = value; } else { pan = 0; } var volume:Number = 1 - (yPos / stage.stageHeight); setVolume(volume); setPan(pan); } } } flash.display.SimpleButton.soundTransformflash.display.Sprite.soundTransformflash.media.Microphone.soundTransformflash.media.SoundChannel.soundTransformflash.media.SoundMixer.soundTransformflash.net.NetStream.soundTransformSoundTransform Creates a SoundTransform object.volNumber1The volume, ranging from 0 (silent) to 1 (full volume). panningNumber0The left-to-right panning of the sound, ranging from -1 (full pan left) to 1 (full pan right). A value of 0 represents no panning (center). Creates a SoundTransform object. In the following example, the sound plays only from the right channel, and the volume is set to 50 percent.

In the constructor, the sound is loaded and is assigned to a sound channel (channel). A SoundTransform object (transform) is also created. Its first argument sets the volume at 50 percent (the range is 0.0 to 1.0). Its second argument sets the panning. In this example, panning is set to 1.0, which means the sound comes from the right speaker only. For these settings to take effect, the transform SoundTransform object is assigned to the sound channel's soundTransform property.

Note: There is limited error handling written for this example.

package { import flash.display.Sprite; import flash.net.URLRequest; import flash.media.Sound; import flash.media.SoundChannel; import flash.media.SoundTransform; import flash.events.IOErrorEvent; public class SoundTransform_constructorExample extends Sprite { public function SoundTransform_constructorExample() { var mySound:Sound = new Sound(); var url:URLRequest = new URLRequest("mySound.mp3"); var channel:SoundChannel; var transform:SoundTransform = new SoundTransform(0.5, 1.0); mySound.load(url); channel = mySound.play(); channel.soundTransform = transform; mySound.addEventListener(IOErrorEvent.IO_ERROR, errorHandler); } private function errorHandler(errorEvent:IOErrorEvent):void { trace("The sound could not be loaded: " + errorEvent.text); } } }
leftToLeft A value, from 0 (none) to 1 (all), specifying how much of the left input is played in the left speaker.Number A value, from 0 (none) to 1 (all), specifying how much of the left input is played in the left speaker. leftToRight A value, from 0 (none) to 1 (all), specifying how much of the left input is played in the right speaker.Number A value, from 0 (none) to 1 (all), specifying how much of the left input is played in the right speaker. pan The left-to-right panning of the sound, ranging from -1 (full pan left) to 1 (full pan right).Number The left-to-right panning of the sound, ranging from -1 (full pan left) to 1 (full pan right). A value of 0 represents no panning (balanced center between right and left). rightToLeft A value, from 0 (none) to 1 (all), specifying how much of the right input is played in the left speaker.Number A value, from 0 (none) to 1 (all), specifying how much of the right input is played in the left speaker. rightToRight A value, from 0 (none) to 1 (all), specifying how much of the right input is played in the right speaker.Number A value, from 0 (none) to 1 (all), specifying how much of the right input is played in the right speaker. volume The volume, ranging from 0 (silent) to 1 (full volume).Number The volume, ranging from 0 (silent) to 1 (full volume).
StageVideoAvailability This class defines an enumeration that indicates whether stage video is currently available.An enumeration that indicates whether stage video is currently available. Object This class defines an enumeration that indicates whether stage video is currently available. flash.events.StageVideoAvailabilityEventAVAILABLE Stage video is currently available.availableStringStage video is currently available. Stage video is currently available. UNAVAILABLE Stage video is not currently available.unavailableStringStage video is not currently available. Stage video is not currently available. StageWebView The StageWebView class displays HTML content in a stage view port.flash.events:EventDispatcher The StageWebView class displays HTML content in a stage view port.

The StageWebView class provides a simple means to display HTML content on devices where the HTMLLoader class is not supported. The class provides no interaction between ActionScript and the HTML content except through the methods and properties of the StageWebView class itself. There is, for example, no way to pass values or call functions between ActionScript and JavaScript.

AIR profile support: This feature is supported on all desktop operating systems and mobile devices, but is not supported on AIR for TV devices. You can test for support at run time using the StageWebView.isSupported property. See AIR Profile Support for more information regarding API support across multiple profiles.

On devices in the mobile and extended mobile profiles, the StageWebView class uses the system web control provided by the device operating system. Thus, the available features and rendering appearance may vary from device to device. On desktop computers (in the desktop and extended desktop profiles), the StageWebView class uses the internal AIR WebKit engine. The features available and rendering appearance are the same as those of the HTMLLoader class (without the close integration and script bridging between ActionScript and JavaScript provided by an HTMLLoader instance). Test the isSupported property of the StageWebView class to determine whether the class is supported on the current device.

The StageWebView class is NOT a display object and cannot be added to the Flash display list. Instead you display a StageWebView object by attaching it directly to a stage using the stage property. The StageWebView instance attached to a stage is displayed on top of any Flash display objects. You control the size and position of the rendering area with the viewPort property. There is no way to control the depth ordering of different StageWebView objects. Overlapping two instances is not recommended.

When the content within the StageWebView object has focus, the StageWebView object has the first opportunity to handle keyboard input. The stage to which the StageWebView object is attached dispatches any keyboard input that is not handled. The normal event capture/bubble cycle does not apply here since the StageWebView instance is not part of the display list.

In Android 3.0+, an application must enable hardware acceleration in the Android manifestAdditions element of the AIR application descriptor to display plug-in content in a StageWebView object.

The following example sets up a StageWebView object to fill the stage. The example loads a web site with the loadURL() method and uses the device Back and Search softkeys for history navigation. package { import flash.display.MovieClip; import flash.media.StageWebView; import flash.geom.Rectangle; import flash.events.KeyboardEvent; import flash.ui.Keyboard; import flash.desktop.NativeApplication; public class StageWebViewExample extends MovieClip{ private var webView:StageWebView = new StageWebView(); public function StageWebViewExample() { webView.stage = this.stage; webView.viewPort = new Rectangle( 0, 0, stage.stageWidth, stage.stageHeight ); webView.loadURL( "http://www.example.com" ); stage.addEventListener( KeyboardEvent.KEY_DOWN, onKey ); } private function onKey( event:KeyboardEvent ):void { if( event.keyCode == Keyboard.BACK && webView.isHistoryBackEnabled ) { trace("Back."); webView.historyBack(); event.preventDefault(); } if( event.keyCode == Keyboard.SEARCH && webView.isHistoryForwardEnabled ) { trace("Forward."); webView.historyForward(); } } } }
HTMLLoader classMark Doherty: AIR on Android: TweetrAppMark Doherty: OAuth SupportEnabling Flash Player and other plug-ins in a StageWebView objectfocusOut Dispatched when the StageWebView relinquishes focus.flash.events.FocusEvent Dispatched when the StageWebView relinquishes focus. focusIn Dispatched when this StageWebView object receives focus.flash.events.FocusEvent Dispatched when this StageWebView object receives focus. error Signals that an error has occurred.flash.events.ErrorEvent Signals that an error has occurred. complete Signals that the last load operation requested by loadString() or loadURL() method has completed.flash.events.Event.COMPLETEflash.events.EventSignals that the last load operation requested by loadString() or load() method has completed. Signals that the last load operation requested by loadString() or loadURL() method has completed. locationChanging Signals that the location property of the StageWebView object is about to change.flash.events.LocationChangeEvent.LOCATION_CHANGINGflash.events.LocationChangeEventSignals that the location property of the StageWebView object is about to change. Signals that the location property of the StageWebView object is about to change.

A locationChanging event is only dispatched when the location change is initiated through HTML content or code running inside the StageWebView object, such as when a user clicks a link. By default, the new location is displayed in this StageWebView object. You can call the preventDefault() method of the event object to cancel the default behavior. For example, you could use the flash.net.navigateToURL() function to open the page in the system browser based on the location property of the event object.

A locationChanging event is not dispatched when you change the location with the following methods:

  • historyBack()
  • historyForward()
  • historyGo()
  • loadString()
  • loadURL()
  • reload()
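A sketch of the preventDefault() pattern described above (assuming webView is a StageWebView instance already attached to a stage):

```actionscript
import flash.events.LocationChangeEvent;
import flash.net.URLRequest;
import flash.net.navigateToURL;

webView.addEventListener(LocationChangeEvent.LOCATION_CHANGING,
    function(event:LocationChangeEvent):void {
        // Cancel in-view navigation and open the link in the
        // system browser instead.
        event.preventDefault();
        navigateToURL(new URLRequest(event.location));
    });
```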
locationChange Signals that the location property of the StageWebView object has changed.flash.events.LocationChangeEvent.LOCATION_CHANGEflash.events.LocationChangeEventSignals that the location property of the StageWebView object has changed. Signals that the location property of the StageWebView object has changed.

The event cannot be canceled.

StageWebView Creates a StageWebView object. Creates a StageWebView object.

The object is invisible until it is attached to a stage and until the viewPort is set.

assignFocus Assigns focus to the content within this StageWebView object.directionStringnonespecifies whether the first or last focusable object in the displayed content should receive focus. Assigns focus to the content within this StageWebView object.

Direction values are defined in the FocusDirection class and include: "bottom", "none", and "top".

FocusDirection
dispose Disposes of this StageWebView object. Disposes of this StageWebView object.

Calling dispose() is optional. If you do not maintain a reference to this StageWebView instance it will be eligible for garbage collection. Calling dispose() can make garbage collection occur sooner, or at a more convenient time.

drawViewPortToBitmapData Draws the StageWebView's view port to a bitmap. The bitmap's width or height is different from view port's width or height. ArgumentErrorArgumentErrorThe bitmap is null. ErrorErrorbitmapflash.display:BitmapDataThe BitmapData object on which to draw the visible portion of the StageWebView's view port. Draws the StageWebView's view port to a bitmap.

To display captured content above the StageWebView object, capture the bitmap and then set the StageWebView object's stage property to null.

The following example displays two labels: google and facebook. Clicking on the label captures the corresponding web page and displays it as a snapshot on the stage. package { import flash.display.Bitmap; import flash.display.BitmapData; import flash.display.Sprite; import flash.events.*; import flash.geom.Rectangle; import flash.media.StageWebView; import flash.net.*; import flash.text.TextField; public class stagewebview1 extends Sprite { public var webView:StageWebView = new StageWebView(); public var textGoogle:TextField=new TextField(); public var textFacebook:TextField=new TextField(); public function stagewebview() { textGoogle.htmlText="<b>Google</b>"; textGoogle.x=300; textGoogle.y=-80; addChild(textGoogle); textFacebook.htmlText="<b>Facebook</b>"; textFacebook.x=0; textFacebook.y=-80; addChild(textFacebook); textGoogle.addEventListener(MouseEvent.CLICK,goGoogle); textFacebook.addEventListener(MouseEvent.CLICK,goFaceBook); webView.stage = this.stage; webView.viewPort = new Rectangle(0, 0, stage.stageWidth, stage.stageHeight); } public function goGoogle(e:Event):void { webView.loadURL("http://www.google.com"); webView.stage = null; webView.addEventListener(Event.COMPLETE,handleLoad); } public function goFaceBook(e:Event):void { webView.loadURL("http://www.facebook.com"); webView.stage = null; webView.addEventListener(Event.COMPLETE,handleLoad); } public function handleLoad(e:Event):void { var bitmapData:BitmapData = new BitmapData(webView.viewPort.width, webView.viewPort.height); webView.drawViewPortToBitmapData(bitmapData); var webViewBitmap:Bitmap=new Bitmap(bitmapData); addChild(webViewBitmap); } } }
historyBack Navigates to the previous page in the browsing history. Navigates to the previous page in the browsing history. historyForward Navigates to the next page in the browsing history. Navigates to the next page in the browsing history. loadString Loads and displays the specified HTML string.textStringthe string of HTML or XHTML content to display. mimeTypeStringtext/htmlThe MIME type of the content, either "text/html" or "application/xhtml+xml". Loads and displays the specified HTML string.

When the loadString() method is used, the location is reported as "about:blank." Only standard URI schemes can be used in URLs within the HTML string. The AIR URI schemes, "app:" and "app-storage:" are not allowed.

The HTML content cannot load local resources, such as image files. XMLHttpRequests are not allowed.

Only the "text/html" and "application/xhtml+xml" MIME types are supported.

The following example sets up a StageWebView object to fill the stage. The example loads an HTML page with the loadString() method. var webView:StageWebView = new StageWebView(); webView.stage = this.stage; webView.viewPort = new Rectangle( 0, 0, stage.stageWidth, stage.stageHeight ); var htmlString:String = "<!DOCTYPE HTML>" + "<html>" + "<body>" + "<h1>Example</h1>" + "<p>King Phillip cut open five green snakes.</p>" + "</body>" + "</html>"; webView.loadString( htmlString, "text/html" );
loadURL Loads the page at the specified URL.urlString Loads the page at the specified URL.

The URL can use the following URI schemes: http:, https:, file:, data:, and javascript:. Content loaded with the file: scheme can load other local resources.

The following example sets up a StageWebView object to fill the stage. The example loads a web site with the loadURL() method.

Note: On Android, you must specify the INTERNET permission in your AIR application descriptor to load remote URLs.

var webView:StageWebView = new StageWebView(); webView.stage = this.stage; webView.viewPort = new Rectangle( 0, 0, stage.stageWidth, stage.stageHeight ); webView.loadURL( "http://www.example.com" );
reload Reloads the current page. Reloads the current page. stop Halts the current load operation. Halts the current load operation. isHistoryBackEnabled Reports whether there is a previous page in the browsing history.Boolean Reports whether there is a previous page in the browsing history. isHistoryForwardEnabled Reports whether there is a next page in the browsing history.Boolean Reports whether there is a next page in the browsing history. isSupported Reports whether the StageWebView class is supported on the current device.Boolean Reports whether the StageWebView class is supported on the current device. location The URL of the current location.String The URL of the current location. stage The stage on which this StageWebView object is displayed.flash.display:Stage The stage on which this StageWebView object is displayed.

Set stage to null to hide this StageWebView object.

title The HTML title value.String The HTML title value. viewPort The area on the stage in which the StageWebView object is displayed.flash.geom:RectangleThe Rectangle value is not valid. RangeErrorRangeError The area on the stage in which the StageWebView object is displayed.
VideoStatus This class defines an enumeration that describes possible levels of video decoding.An enumeration that describes possible levels of video decoding. Object This class defines an enumeration that describes possible levels of video decoding. ACCELERATED Indicates hardware-accelerated (GPU) video decoding.acceleratedStringIndicates hardware-accelerated (GPU) video decoding. Indicates hardware-accelerated (GPU) video decoding. SOFTWARE Indicates software video decoding.softwareStringIndicates software video decoding. Indicates software video decoding. UNAVAILABLE Video decoding is not supported.unavailableStringVideo is not supported. Video decoding is not supported. MediaPromise The MediaPromise class represents the promise to deliver a media object.flash.desktop:IFilePromiseflash.events:EventDispatcher The MediaPromise class represents the promise to deliver a media object.

The data property of a MediaEvent object is a MediaPromise instance. You can use the MediaPromise methods to access the promised media object. Supported media formats include still images and video.

You cannot create a MediaPromise object. Calling new MediaPromise() generates a run-time error.

MediaEventIFilePromiseLoader.LoadFilePromise()IDataInputCameraRoll.browseForImage()CameraUIcomplete A MediaPromise object dispatches a complete event when all data has been read.flash.events.Event.COMPLETEflash.events.Event A MediaPromise object dispatches a complete event when all data has been read. The event indicates that there is no more data available in the underlying stream.

A complete event is not dispatched by a synchronous data source.

progress A MediaPromise object dispatches progress events as the data becomes available.flash.events.ProgressEvent.PROGRESSflash.events.ProgressEvent A MediaPromise object dispatches progress events as the data becomes available.

The bytesTotal property of all progress events except the last has the value 0. If all the data is available immediately, no progress events may be dispatched. No progress events are dispatched by synchronous data sources.

ioError A MediaPromise object dispatches an ioError event when an error is encountered while reading the underlying data stream.flash.events.IOErrorEvent.IOERRORflash.events.IOErrorEvent A MediaPromise object dispatches an ioError event when an error is encountered while reading the underlying data stream. No more data can be read after this event is dispatched. close A MediaPromise object dispatches a close event when the underlying data stream has closed.flash.events.Event.CLOSEflash.events.Event A MediaPromise object dispatches a close event when the underlying data stream has closed. close Closes the data source. Closes the data source. open Opens the underlying data source and returns the IDataInput instance allowing you to read it.flash.utils:IDataInput Opens the underlying data source and returns the IDataInput instance allowing you to read it.

If the underlying data source is asynchronous, then the MediaPromise object dispatches progress and complete events to indicate whether data is available to be read. If the data source is synchronous, all data is available immediately and these events are not dispatched.

Note: You can load a MediaPromise object using the loadFilePromise() method of the Loader class instead of reading the data manually.

Loader.loadFilePromise()
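As a hedged sketch of that note (illustrative only; assumes this code runs in a display object context so addChild() is available), a MediaPromise delivered by CameraUI can be handed to Loader.loadFilePromise() instead of being read manually through open():

```actionscript
import flash.display.Loader;
import flash.events.MediaEvent;
import flash.media.CameraUI;
import flash.media.MediaPromise;
import flash.media.MediaType;

var cameraUI:CameraUI = new CameraUI();
cameraUI.addEventListener(MediaEvent.COMPLETE,
    function(event:MediaEvent):void {
        var promise:MediaPromise = event.data;
        if (promise.mediaType == MediaType.IMAGE) {
            // loadFilePromise() reads the promise's data source for you;
            // no manual open()/IDataInput handling is required.
            var loader:Loader = new Loader();
            loader.loadFilePromise(promise);
            addChild(loader);
        }
    });
cameraUI.launch(MediaType.IMAGE);
```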
reportError Used by the runtime to report errors.eflash.events:ErrorEventthe error event to dispatch. Used by the runtime to report errors.

Application code should not call this method.

file The File instance representing the media object, if one exists.flash.filesystem:File The File instance representing the media object, if one exists.

This property references a File object if the underlying data source is file-based and the file is accessible to your application. Otherwise, the property is null.

isAsync Reports whether the underlying data source is asynchronous or synchronous.Boolean Reports whether the underlying data source is asynchronous or synchronous.

mediaType The general type of media, either image or video.String The general type of media, either image or video.

The constants in the MediaType class define possible values of this property:

  • MediaType.IMAGE
  • MediaType.VIDEO
MediaType
relativePath The file name of the media object, if one exists.String The file name of the media object, if one exists.

A file name is available if the underlying data source is file-based and the file is accessible to your application. Otherwise, the property is null.

SoundLoaderContext The SoundLoaderContext class provides security checks for files that load sound.Object The SoundLoaderContext class provides security checks for files that load sound. SoundLoaderContext objects are passed as an argument to the constructor and the load() method of the Sound class.

When you use this class, consider the following security model:

  • Loading and playing a sound is not allowed if the calling file is in a network sandbox and the sound file to be loaded is local.
  • By default, loading and playing a sound is not allowed if the calling file is local and tries to load and play a remote sound. A user must grant explicit permission to allow this.
  • Certain operations dealing with sound are restricted. The data in a loaded sound cannot be accessed by a file in a different domain unless you implement a URL policy file. Sound-related APIs that fall under this restriction include the Sound.id3 property, the SoundMixer.computeSpectrum() method, the SoundMixer.bufferTime property, and the SoundTransform() constructor.

However, in Adobe AIR, content in the application security sandbox (content installed with the AIR application) is not restricted by these security limitations.

For more information related to security, see the Flash Player Developer Center Topic: Security.

SoundLoaderContext Creates a new sound loader context object.bufferTimeNumber1000The number of milliseconds to preload a streaming sound into a buffer before the sound starts to stream. checkPolicyFileBooleanfalseSpecifies whether the existence of a URL policy file should be checked upon loading the object (true) or not. Creates a new sound loader context object. In the following example, the buffer for the sound that will be loaded is set to three seconds.

The first parameter of a SoundLoaderContext object (context) is used to increase the default buffer value of one second to three seconds. (The value is in milliseconds.) If the second parameter of the SoundLoaderContext object is set to true, Flash Player will check for a cross-domain policy file upon loading the object. Here it is set to the default value false, so no policy file will be checked. The load() method of the sound object uses the context setting to preload three seconds of the streaming sound into a buffer before the sound starts to stream. The URLRequest object determines the location of the file, which is a podcast from Adobe. If an IOErrorEvent.IO_ERROR error occurs during the loading of the sound file, the errorHandler() method is invoked.

package { import flash.display.Sprite; import flash.net.URLRequest; import flash.media.Sound; import flash.media.SoundLoaderContext; import flash.events.IOErrorEvent; public class SoundLoaderContextExample extends Sprite { public function SoundLoaderContextExample() { var snd:Sound = new Sound(); var req:URLRequest = new URLRequest("http://av.adobe.com/podcast/csbu_dev_podcast_epi_2.mp3"); var context:SoundLoaderContext = new SoundLoaderContext(3000, false); snd.load(req, context); snd.play(); snd.addEventListener(IOErrorEvent.IO_ERROR, errorHandler); } private function errorHandler(errorEvent:IOErrorEvent):void { trace("The sound could not be loaded: " + errorEvent.text); } } }
bufferTime The number of milliseconds to preload a streaming sound into a buffer before the sound starts to stream.1000Number The number of milliseconds to preload a streaming sound into a buffer before the sound starts to stream.

Note that you cannot override the value of SoundLoaderContext.bufferTime by setting the global SoundMixer.bufferTime property. The SoundMixer.bufferTime property affects the buffer time for embedded streaming sounds in a SWF file and is independent of dynamically created Sound objects (that is, Sound objects created in ActionScript).

checkPolicyFile Specifies whether the application should try to download a URL policy file from the loaded sound's server before beginning to load the sound.falseBoolean Specifies whether the application should try to download a URL policy file from the loaded sound's server before beginning to load the sound. This property applies to sound that is loaded from outside the calling file's own domain using the Sound.load() method.

Set this property to true when you load a sound from outside the calling file's own domain and code in the calling file needs low-level access to the sound's data. Examples of low-level access to a sound's data include referencing the Sound.id3 property to get an ID3Info object or calling the SoundMixer.computeSpectrum() method to get sound samples from the loaded sound. If you try to access sound data without setting the checkPolicyFile property to true at loading time, you may get a SecurityError exception because the required policy file has not been downloaded.

If you don't need low-level access to the sound data that you are loading, avoid setting checkPolicyFile to true. Checking for a policy file consumes network bandwidth and might delay the start of your download, so it should only be done when necessary.

When you call Sound.load() with SoundLoaderContext.checkPolicyFile set to true, Flash Player or AIR must either successfully download a relevant URL policy file or determine that no such policy file exists before it begins downloading the specified sound. Flash Player or AIR performs the following actions, in this order, to verify the existence of a policy file:

  • Flash Player or AIR considers policy files that have already been downloaded.
  • Flash Player or AIR tries to download any pending policy files specified in calls to Security.loadPolicyFile().
  • Flash Player or AIR tries to download a policy file from the default location that corresponds to the sound's URL, which is /crossdomain.xml on the same server as URLRequest.url. (The sound's URL is specified in the url property of the URLRequest object passed to Sound.load() or the Sound() constructor function.)

In all cases, Flash Player or AIR requires that an appropriate policy file exist on the sound's server, that it provide access to the sound file at URLRequest.url by virtue of the policy file's location, and that it allow the domain of the calling file to access the sound, through one or more <allow-access-from> tags.
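For reference, a minimal policy file that grants one domain access might look like the following sketch (the domain shown is a placeholder; replace it with the domain of the calling content):

```xml
<?xml version="1.0"?>
<cross-domain-policy>
    <!-- Placeholder domain: the domain of the SWF that calls Sound.load() -->
    <allow-access-from domain="www.example.com" />
</cross-domain-policy>
```

This file is served as /crossdomain.xml from the root of the sound's server unless another location is specified with Security.loadPolicyFile().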

If you set checkPolicyFile to true, Flash Player or AIR waits until the policy file is verified before loading the sound. You should wait to perform any low-level operations on the sound data, such as calling Sound.id3 or SoundMixer.computeSpectrum(), until progress and complete events are dispatched from the Sound object.

If you set checkPolicyFile to true but no appropriate policy file is found, you will not receive an error until you perform an operation that requires a policy file, and then Flash Player or AIR throws a SecurityError exception. After you receive a complete event, you can test whether a relevant policy file was found by getting the value of Sound.id3 within a try block and seeing if a SecurityError is thrown.
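The test described above can be sketched as follows (a minimal sketch; the handler and variable names are illustrative):

```actionscript
private function completeHandler(event:Event):void {
    var snd:Sound = event.target as Sound;
    try {
        // Accessing id3 on a cross-domain sound requires a verified policy file
        var info:ID3Info = snd.id3;
        trace("Policy file found; song name: " + info.songName);
    } catch (e:SecurityError) {
        trace("No relevant policy file was found.");
    }
}
```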

Be careful with checkPolicyFile if you are downloading sound from a URL that uses server-side HTTP redirects. Flash Player or AIR tries to retrieve policy files that correspond to the url property of the URLRequest object passed to Sound.load(). If the final sound file comes from a different URL because of HTTP redirects, then the initially downloaded policy files might not be applicable to the sound's final URL, which is the URL that matters in security decisions.

If you find yourself in this situation, here is one possible solution. After you receive a progress or complete event, you can examine the value of the Sound.url property, which contains the sound's final URL. Then call the Security.loadPolicyFile() method with a policy file URL that you calculate based on the sound's final URL. Finally, poll the value of Sound.id3 until no exception is thrown.
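That workaround might look like the following sketch (the handler name and the way the policy file URL is calculated from the sound's final URL are illustrative assumptions):

```actionscript
private function progressHandler(event:ProgressEvent):void {
    var snd:Sound = event.target as Sound;
    // Sound.url holds the sound's final URL after any server-side redirects
    var finalUrl:String = snd.url;
    // One possible calculation: a policy file alongside the final sound file
    var policyUrl:String = finalUrl.substring(0, finalUrl.lastIndexOf("/") + 1) + "crossdomain.xml";
    Security.loadPolicyFile(policyUrl);
    try {
        var info:ID3Info = snd.id3; // throws until the policy file is verified
        trace("Sound data is now accessible.");
    } catch (e:SecurityError) {
        // Keep polling on subsequent progress events
    }
}
```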

This does not apply to content in the AIR application sandbox. Content in the application sandbox always has programmatic access to sound content, regardless of its origin.

For more information related to security, see the Flash Player Developer Center Topic: Security.

flash.media.Sound.load()flash.media.Sound.id3flash.media.SoundMixer.computeSpectrum()flash.media.Sound.urlflash.system.Security.loadPolicyFile()
SoundChannel The SoundChannel class controls a sound in an application.flash.events:EventDispatcher The SoundChannel class controls a sound in an application. Every sound is assigned to a sound channel, and the application can have multiple sound channels that are mixed together. The SoundChannel class contains a stop() method, properties for monitoring the amplitude (volume) of the channel, and a property for assigning a SoundTransform object to the channel. The following example loads an MP3 file, plays it, and displays information about sound events that take place as the MP3 file is loaded and played. A Timer object provides updated information about the position of the playhead every 50 milliseconds. To run this example, place a file named MySound.mp3 in the same directory as your SWF file.

package {
    import flash.display.Sprite;
    import flash.events.*;
    import flash.media.Sound;
    import flash.media.SoundChannel;
    import flash.net.URLRequest;
    import flash.utils.Timer;

    public class SoundChannelExample extends Sprite {
        private var url:String = "MySound.mp3";
        private var soundFactory:Sound;
        private var channel:SoundChannel;
        private var positionTimer:Timer;

        public function SoundChannelExample() {
            var request:URLRequest = new URLRequest(url);
            soundFactory = new Sound();
            soundFactory.addEventListener(Event.COMPLETE, completeHandler);
            soundFactory.addEventListener(Event.ID3, id3Handler);
            soundFactory.addEventListener(IOErrorEvent.IO_ERROR, ioErrorHandler);
            soundFactory.addEventListener(ProgressEvent.PROGRESS, progressHandler);
            soundFactory.load(request);
            channel = soundFactory.play();
            channel.addEventListener(Event.SOUND_COMPLETE, soundCompleteHandler);
            positionTimer = new Timer(50);
            positionTimer.addEventListener(TimerEvent.TIMER, positionTimerHandler);
            positionTimer.start();
        }

        private function positionTimerHandler(event:TimerEvent):void {
            trace("positionTimerHandler: " + channel.position.toFixed(2));
        }

        private function completeHandler(event:Event):void {
            trace("completeHandler: " + event);
        }

        private function id3Handler(event:Event):void {
            trace("id3Handler: " + event);
        }

        private function ioErrorHandler(event:Event):void {
            trace("ioErrorHandler: " + event);
            positionTimer.stop();
        }

        private function progressHandler(event:ProgressEvent):void {
            trace("progressHandler: " + event);
        }

        private function soundCompleteHandler(event:Event):void {
            trace("soundCompleteHandler: " + event);
            positionTimer.stop();
        }
    }
}

SoundSoundTransformsoundComplete Dispatched when a sound has finished playing.flash.events.Event.SOUND_COMPLETEflash.events.Event Dispatched when a sound has finished playing. In the following example, the user selects songs from a playlist, and then selects Play to play the songs in the order selected.

In the constructor, a text field is defined that holds the song list and a line for the selection to play. (Usually, buttons are used for play and list boxes for a song list.) A text format object is defined that changes the format of the song lines to italic after they are selected. When a user selects the text field, the clickHandler() method is invoked.

In the clickHandler() method, the getLineIndexAtPoint() method of the text field object returns the index of the line that the user selected. Using the line index, the getLineText() method gets the content of the text. The if statement checks whether the user chose to play the songs or to add a song to the playlist. If the user chose PLAY and at least one song has been selected, the event listener for the mouse click is removed and the playNext() method is called to begin playing the songs. If the user selected a song title, the content of the line is added to the songList array and the format of the line is set to italic.

The playNext() method iterates through the array list to load and play each song. The song is also assigned to a sound channel. An event listener for the sound channel is added to respond when the song finishes playing and the Event.SOUND_COMPLETE event is dispatched. The soundCompleteHandler() method then invokes the playNext() method to play the next song. This process continues until all the songs listed in the array finish playing.

package {
    import flash.display.Sprite;
    import flash.media.Sound;
    import flash.media.SoundChannel;
    import flash.text.TextField;
    import flash.text.TextFieldAutoSize;
    import flash.events.MouseEvent;
    import flash.text.TextFormat;
    import flash.net.URLRequest;
    import flash.events.Event;
    import flash.events.IOErrorEvent;

    public class SoundChannel_event_soundCompleteExample extends Sprite {
        private var channel:SoundChannel = new SoundChannel();
        private var songList:Array = new Array();
        private var listTextField:TextField = new TextField();
        private var songFormat:TextFormat = new TextFormat();
        private var arrayIndex:int = 0;
        private var songSelected:Boolean = false;

        public function SoundChannel_event_soundCompleteExample() {
            listTextField.autoSize = TextFieldAutoSize.LEFT;
            listTextField.border = true;
            listTextField.background = true;
            listTextField.text = "Song1.mp3\n" + "Song2.mp3\n" + "Song3.mp3\n" + "Song4.mp3\n" + "PLAY";
            songFormat.italic = true;
            listTextField.addEventListener(MouseEvent.CLICK, clickHandler);
            addChild(listTextField);
        }

        private function clickHandler(e:MouseEvent):void {
            var index:int = listTextField.getLineIndexAtPoint(e.localX, e.localY);
            var line:String = listTextField.getLineText(index);
            var firstIndex:uint = listTextField.getLineOffset(index);
            var playLine:uint = listTextField.numLines - 1;
            if ((index == playLine) && (songSelected == true)) {
                listTextField.removeEventListener(MouseEvent.CLICK, clickHandler);
                playNext();
            }
            else if (index != playLine) {
                songList.push(line.substr(0, (line.length - 1)));
                listTextField.setTextFormat(songFormat, firstIndex, (firstIndex + listTextField.getLineLength(index)));
                songSelected = true;
            }
        }

        private function playNext():void {
            if (arrayIndex < songList.length) {
                var snd:Sound = new Sound();
                snd.addEventListener(IOErrorEvent.IO_ERROR, errorHandler);
                snd.load(new URLRequest(songList[arrayIndex]));
                channel = snd.play();
                channel.addEventListener(Event.SOUND_COMPLETE, soundCompleteHandler);
                arrayIndex++;
            }
            else {
                songSelected = false;
                while (arrayIndex > 0) {
                    songList.pop();
                    arrayIndex--;
                }
            }
        }

        private function soundCompleteHandler(e:Event):void {
            playNext();
        }

        private function errorHandler(errorEvent:IOErrorEvent):void {
            trace(errorEvent.text);
        }
    }
}
stop Stops the sound playing in the channel. Stops the sound playing in the channel. In the following example, the user can pause and replay a sound file.

In the constructor, the sound file is loaded. (This example assumes that the file is in the same directory as the SWF file.) A text field is used as a button for the user to play or pause the sound. When the user selects the button text field, the clickHandler() method is invoked.

In the clickHandler() method, the first time the user selects the text field, the sound is set to play and is assigned to a sound channel. Next, when the user selects the text field to pause, the sound stops playing. The sound channel's position property records the position of the sound at the time it was stopped. This property is used to resume the sound starting at that position, after the user selects the text field to start playing again. Each time the Sound.play() method is called, a new SoundChannel object is created and assigned to the channel variable. The Sound object must be assigned to a SoundChannel object in order to use the sound channel's stop() method to pause the sound.

package {
    import flash.display.Sprite;
    import flash.media.Sound;
    import flash.media.SoundChannel;
    import flash.net.URLRequest;
    import flash.text.TextField;
    import flash.text.TextFieldAutoSize;
    import flash.events.MouseEvent;

    public class SoundChannel_stopExample extends Sprite {
        private var snd:Sound = new Sound();
        private var channel:SoundChannel = new SoundChannel();
        private var button:TextField = new TextField();

        public function SoundChannel_stopExample() {
            var req:URLRequest = new URLRequest("MySound.mp3");
            snd.load(req);
            button.x = 10;
            button.y = 10;
            button.text = "PLAY";
            button.border = true;
            button.background = true;
            button.selectable = false;
            button.autoSize = TextFieldAutoSize.CENTER;
            button.addEventListener(MouseEvent.CLICK, clickHandler);
            this.addChild(button);
        }

        private function clickHandler(e:MouseEvent):void {
            var pausePosition:int = channel.position;
            if (button.text == "PLAY") {
                channel = snd.play(pausePosition);
                button.text = "PAUSE";
            }
            else {
                channel.stop();
                button.text = "PLAY";
            }
        }
    }
}
leftPeak The current amplitude (volume) of the left channel, from 0 (silent) to 1 (full amplitude).Number The current amplitude (volume) of the left channel, from 0 (silent) to 1 (full amplitude). position When the sound is playing, the position property indicates in milliseconds the current point that is being played in the sound file.Number When the sound is playing, the position property indicates in milliseconds the current point that is being played in the sound file. When the sound is stopped or paused, the position property indicates the last point that was played in the sound file.

A common use case is to save the value of the position property when the sound is stopped. You can resume the sound later by restarting it from that saved position.

If the sound is looped, position is reset to 0 at the beginning of each loop.
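A minimal pause/resume sketch based on this property (assuming snd is a loaded Sound object and channel is its current SoundChannel):

```actionscript
var pausePosition:Number = 0;

function pauseSound():void {
    pausePosition = channel.position; // save the playhead position in milliseconds
    channel.stop();
}

function resumeSound():void {
    // play() returns a new SoundChannel; restart from the saved position
    channel = snd.play(pausePosition);
}
```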

rightPeak The current amplitude (volume) of the right channel, from 0 (silent) to 1 (full amplitude).Number The current amplitude (volume) of the right channel, from 0 (silent) to 1 (full amplitude). soundTransform The SoundTransform object assigned to the sound channel.flash.media:SoundTransform The SoundTransform object assigned to the sound channel. A SoundTransform object includes properties for setting volume, panning, left speaker assignment, and right speaker assignment. SoundTransform
SoundMixer The SoundMixer class contains static properties and methods for global sound control in the application.Object The SoundMixer class contains static properties and methods for global sound control in the application. The SoundMixer class controls embedded and streaming sounds in the application; it does not control dynamically created sounds (that is, sounds generated in response to a Sound object dispatching a sampleData event). areSoundsInaccessible Determines whether any sounds are not accessible due to security restrictions.The string representation of the boolean. Boolean Determines whether any sounds are not accessible due to security restrictions. For example, a sound loaded from a domain other than that of the content calling this method is not accessible if the server for the sound has no URL policy file that grants access to the domain of the calling content. The sound can still be loaded and played, but low-level operations, such as getting ID3 metadata for the sound, cannot be performed on inaccessible sounds.

For AIR application content in the application security sandbox, calling this method always returns false. All sounds, including those loaded from other domains, are accessible to content in the application security sandbox.

computeSpectrum()
computeSpectrum Takes a snapshot of the current sound wave and places it into the specified ByteArray object.outputArrayflash.utils:ByteArrayA ByteArray object that holds the values associated with the sound. If any sounds are not available due to security restrictions (areSoundsInaccessible == true), the outputArray object is left unchanged. If all sounds are stopped, the outputArray object is filled with zeros. FFTModeBooleanfalseA Boolean value indicating whether a Fourier transformation is performed on the sound data first. Setting this parameter to true causes the method to return a frequency spectrum instead of the raw sound wave. In the frequency spectrum, low frequencies are represented on the left and high frequencies are on the right. stretchFactorint0The resolution of the sound samples. If you set the stretchFactor value to 0, data is sampled at 44.1 kHz; with a value of 1, data is sampled at 22.05 kHz; with a value of 2, data is sampled at 11.025 kHz; and so on. Takes a snapshot of the current sound wave and places it into the specified ByteArray object. The values are formatted as normalized floating-point values in the range -1.0 to 1.0. The ByteArray object passed to the outputArray parameter is overwritten with the new values. The size of the ByteArray object is fixed to 512 floating-point values, where the first 256 values represent the left channel and the second 256 values represent the right channel.

Note: This method is subject to local file security restrictions and restrictions on cross-domain loading. If you are working with local files or sounds loaded from a server in a different domain than the calling content, you might need to address sandbox restrictions through a cross-domain policy file. For more information, see the Sound class description. In addition, this method cannot be used to extract data from RTMP streams, even when it is called by content that resides in the same domain as the RTMP server.

This method is supported over RTMP in Flash Player 9.0.115.0 and later and in Adobe AIR. You can control access to streams on Flash Media Server in a server-side script. For more information, see the Client.audioSampleAccess and Client.videoSampleAccess properties in Server-Side ActionScript Language Reference for Adobe Flash Media Server.

In the following example, the computeSpectrum() method is used to produce a graphic representation of the sound wave data.

In the constructor, a sound file is loaded and set to play. (There is no error handling in this example, and it is assumed that the sound file is in the same directory as the SWF file.) The example listens for the Event.ENTER_FRAME event while the sound plays, repeatedly triggering the onEnterFrame() method to draw a graph of the sound data values. When the sound finishes playing, the onPlaybackComplete() method stops the drawing process by removing the listener for the Event.ENTER_FRAME event.

In the onEnterFrame() method, the computeSpectrum() method stores the raw sound wave in the bytes ByteArray object. The data is sampled at 44.1 kHz. The byte array contains 512 floating-point values, each between -1 and 1; the first 256 values represent the left channel, and the second 256 values represent the right channel. The first for loop reads the first 256 data values (the left stereo channel) and draws a line from each point to the next using the Graphics.lineTo() method. (The vector graphic display of the sound wave is written directly onto the class's sprite object.) The sound bytes are read as 32-bit floating-point numbers from the byte stream and multiplied by the plot height to allow for the vertical range of the graph. The width is set to twice the width of the channel length. The second for loop reads the next set of 256 values (the right stereo channel) and plots the lines in reverse order. The g.lineTo(CHANNEL_LENGTH * 2, PLOT_HEIGHT); and g.lineTo(0, PLOT_HEIGHT); calls draw the baseline for the waves. The resulting waveform plot produces a mirror-image effect.

package {
    import flash.display.Sprite;
    import flash.display.Graphics;
    import flash.events.Event;
    import flash.media.Sound;
    import flash.media.SoundChannel;
    import flash.media.SoundMixer;
    import flash.net.URLRequest;
    import flash.utils.ByteArray;

    public class SoundMixer_computeSpectrumExample extends Sprite {
        public function SoundMixer_computeSpectrumExample() {
            var snd:Sound = new Sound();
            var req:URLRequest = new URLRequest("Song1.mp3");
            snd.load(req);
            var channel:SoundChannel;
            channel = snd.play();
            addEventListener(Event.ENTER_FRAME, onEnterFrame);
            channel.addEventListener(Event.SOUND_COMPLETE, onPlaybackComplete);
        }

        private function onEnterFrame(event:Event):void {
            var bytes:ByteArray = new ByteArray();
            const PLOT_HEIGHT:int = 200;
            const CHANNEL_LENGTH:int = 256;
            SoundMixer.computeSpectrum(bytes, false, 0);
            var g:Graphics = this.graphics;
            g.clear();
            g.lineStyle(0, 0x6600CC);
            g.beginFill(0x6600CC);
            g.moveTo(0, PLOT_HEIGHT);
            var n:Number = 0;
            for (var i:int = 0; i < CHANNEL_LENGTH; i++) {
                n = (bytes.readFloat() * PLOT_HEIGHT);
                g.lineTo(i * 2, PLOT_HEIGHT - n);
            }
            g.lineTo(CHANNEL_LENGTH * 2, PLOT_HEIGHT);
            g.endFill();
            g.lineStyle(0, 0xCC0066);
            g.beginFill(0xCC0066, 0.5);
            g.moveTo(CHANNEL_LENGTH * 2, PLOT_HEIGHT);
            for (i = CHANNEL_LENGTH; i > 0; i--) {
                n = (bytes.readFloat() * PLOT_HEIGHT);
                g.lineTo(i * 2, PLOT_HEIGHT - n);
            }
            g.lineTo(0, PLOT_HEIGHT);
            g.endFill();
        }

        private function onPlaybackComplete(event:Event):void {
            removeEventListener(Event.ENTER_FRAME, onEnterFrame);
        }
    }
}
areSoundsInaccessible()flash.utils.ByteArrayflash.media.Soundflash.media.SoundLoaderContext.checkPolicyFile
stopAll Stops all sounds currently playing. Stops all sounds currently playing.

In Flash Professional, this method does not stop the playhead. Sounds set to stream resume playing as the playhead moves over the frames in which they are located.

When using this method, consider the following security model:

  • By default, calling the SoundMixer.stopAll() method stops only sounds in the same security sandbox as the object that is calling the method. Any sounds whose playback was not started from the same sandbox as the calling object are not stopped.
  • When you load the sound, using the load() method of the Sound class, you can specify a context parameter, which is a SoundLoaderContext object. If you set the checkPolicyFile property of the SoundLoaderContext object to true, Flash Player or Adobe AIR checks for a cross-domain policy file on the server from which the sound is loaded. If the server has a cross-domain policy file, and the file permits the domain of the calling content, then the file can stop the loaded sound by using the SoundMixer.stopAll() method; otherwise it cannot.
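The second point can be sketched as follows (the URL and buffer value are placeholders; the sketch assumes the remote server's policy file permits the calling domain):

```actionscript
// true as the second argument requests a cross-domain policy file check
var context:SoundLoaderContext = new SoundLoaderContext(1000, true);
var snd:Sound = new Sound();
snd.load(new URLRequest("http://www.example.com/remote.mp3"), context);
snd.play();

// If the policy file permits this domain, stopAll() can stop the loaded sound:
SoundMixer.stopAll();
```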

However, in Adobe AIR, content in the application security sandbox (content installed with the AIR application) is not restricted by these security limitations.

For more information related to security, see the Flash Player Developer Center Topic: Security.

In the following example, the stopAll() method is used to mute two sounds that are playing at the same time.

In the constructor, two different sound files are loaded and set to play. The first sound is loaded locally and is assigned to a sound channel. (It is assumed that the file is in the same directory as the SWF file.) The second file is loaded and streamed from the Adobe site. In order to use the SoundMixer.stopAll() method, all sounds must be accessible. (A SoundLoaderContext object can be used to check for the cross-domain policy file.) Each sound also has an event listener that is invoked if an IO error occurs while loading the sound file. A muteButton text field is also created. It listens for a click event, which invokes the muteButtonClickHandler() method.

In the muteButtonClickHandler() method, if the text field content is "MUTE," the areSoundsInaccessible() method checks whether the sound mixer has access to the files. If the files are accessible, the stopAll() method stops the sounds. When the user selects the text field again, the first sound begins playing and the text field's content changes back to "MUTE." This time, the stopAll() method mutes the one sound that is playing. Note that the sound channel's stop() method can also be used to stop a specific sound assigned to the channel. (To use the channel functionality, the sound needs to be reassigned to the channel each time the play() method is invoked.)

package {
    import flash.display.Sprite;
    import flash.net.URLRequest;
    import flash.media.Sound;
    import flash.media.SoundChannel;
    import flash.media.SoundMixer;
    import flash.text.TextField;
    import flash.text.TextFieldAutoSize;
    import flash.events.MouseEvent;
    import flash.events.IOErrorEvent;

    public class SoundMixer_stopAllExample extends Sprite {
        private var firstSound:Sound = new Sound();
        private var secondSound:Sound = new Sound();
        private var muteButton:TextField = new TextField();
        private var channel1:SoundChannel = new SoundChannel();

        public function SoundMixer_stopAllExample() {
            firstSound.load(new URLRequest("mySound.mp3"));
            secondSound.load(new URLRequest("http://av.adobe.com/podcast/csbu_dev_podcast_epi_2.mp3"));
            firstSound.addEventListener(IOErrorEvent.IO_ERROR, firstSoundErrorHandler);
            secondSound.addEventListener(IOErrorEvent.IO_ERROR, secondSoundErrorHandler);
            channel1 = firstSound.play();
            secondSound.play();
            muteButton.autoSize = TextFieldAutoSize.LEFT;
            muteButton.border = true;
            muteButton.background = true;
            muteButton.text = "MUTE";
            muteButton.addEventListener(MouseEvent.CLICK, muteButtonClickHandler);
            this.addChild(muteButton);
        }

        private function muteButtonClickHandler(event:MouseEvent):void {
            if (muteButton.text == "MUTE") {
                if (SoundMixer.areSoundsInaccessible() == false) {
                    SoundMixer.stopAll();
                    muteButton.text = "click to play only one sound.";
                }
                else {
                    muteButton.text = "The sounds are not accessible.";
                }
            }
            else {
                firstSound.play();
                muteButton.text = "MUTE";
            }
        }

        private function firstSoundErrorHandler(errorEvent:IOErrorEvent):void {
            trace(errorEvent.text);
        }

        private function secondSoundErrorHandler(errorEvent:IOErrorEvent):void {
            trace(errorEvent.text);
        }
    }
}
bufferTime The number of seconds to preload an embedded streaming sound into a buffer before it starts to stream.int The number of seconds to preload an embedded streaming sound into a buffer before it starts to stream. The data in a loaded sound, including its buffer time, cannot be accessed by code in a file that is in a different domain unless you implement a cross-domain policy file. However, in the application sandbox in an AIR application, code can access data in sound files from any source. For more information about security and sound, see the Sound class description.

The SoundMixer.bufferTime property only affects the buffer time for embedded streaming sounds in a SWF and is independent of dynamically created Sound objects (that is, Sound objects created in ActionScript). The value of SoundMixer.bufferTime cannot override or set the default of the buffer time specified in the SoundLoaderContext object that is passed to the Sound.load() method.
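The distinction can be shown in a short sketch (the file name is a placeholder):

```actionscript
// Affects only embedded streaming sounds in the SWF; the value is in seconds
SoundMixer.bufferTime = 5;

// A dynamically created Sound still uses its own SoundLoaderContext buffer
// (in milliseconds), regardless of the SoundMixer.bufferTime setting:
var snd:Sound = new Sound();
snd.load(new URLRequest("MySound.mp3"), new SoundLoaderContext(3000, false));
snd.play();
```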

Sound
soundTransform The SoundTransform object that controls global sound properties.flash.media:SoundTransform The SoundTransform object that controls global sound properties. A SoundTransform object includes properties for setting volume, panning, left speaker assignment, and right speaker assignment. The SoundTransform object used in this property provides final sound settings that are applied to all sounds after any individual sound settings are applied. SoundTransform
CameraUI The CameraUI class allows you to capture a still image or video using the default camera application on a device.flash.events:EventDispatcher The CameraUI class allows you to capture a still image or video using the default camera application on a device.

The launch() method requests that the device open the default camera application. The captured image or video is available in the MediaEvent object dispatched for the complete event. Since the default camera application can save the image or video in a variety of formats, there is no guarantee that the returned media object can be loaded and displayed by the AIR runtime.

On some platforms, the media object returned by the camera is accessible as a file-based media promise. On others, the media promise is not file-based and the file and relativePath properties of the MediaPromise object are null. Do not use these properties in code that is used on more than one platform.
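A cross-platform handler can therefore branch on whether the promise is file-based, as in this sketch (the handler name is illustrative):

```actionscript
private function mediaCaptured(event:MediaEvent):void {
    var promise:MediaPromise = event.data;
    if (promise.file != null) {
        // File-based media promise (not available on every platform)
        trace("Captured media path: " + promise.file.nativePath);
    }
    else {
        // Not file-based: load through the promise rather than
        // relying on the file or relativePath properties
        var loader:Loader = new Loader();
        loader.loadFilePromise(promise);
    }
}
```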

On some platforms, the media object is automatically stored in the device media library. On platforms where images and video are not automatically stored by the default camera application, you can use the addBitmapData() function of the CameraRoll class to store the media object.

On Android, the default camera application does not open if the external storage card is not available (such as when the user has mounted the card as a USB mass storage device). In addition, the AIR application that launches the camera loses focus. If the device runs low on resources, the AIR application can be terminated by the operating system before the media capture is complete.

AIR profile support: This feature is supported on mobile devices, but it is not supported on desktop operating systems or AIR for TV devices. You can test for support at run time using the CameraUI.isSupported property. See AIR Profile Support for more information regarding API support across multiple profiles.

The following example uses the CameraUI class to launch the default camera application on the device. When a picture is taken by the user, the example places the image on the display list.

package {
    import flash.desktop.NativeApplication;
    import flash.display.Loader;
    import flash.display.MovieClip;
    import flash.display.StageAlign;
    import flash.display.StageScaleMode;
    import flash.events.ErrorEvent;
    import flash.events.Event;
    import flash.events.IOErrorEvent;
    import flash.events.MediaEvent;
    import flash.media.CameraUI;
    import flash.media.MediaPromise;
    import flash.media.MediaType;

    public class CameraUIStillImage extends MovieClip {
        private var deviceCameraApp:CameraUI = new CameraUI();
        private var imageLoader:Loader;

        public function CameraUIStillImage() {
            this.stage.align = StageAlign.TOP_LEFT;
            this.stage.scaleMode = StageScaleMode.NO_SCALE;
            if (CameraUI.isSupported) {
                trace("Initializing camera...");
                deviceCameraApp.addEventListener(MediaEvent.COMPLETE, imageCaptured);
                deviceCameraApp.addEventListener(Event.CANCEL, captureCanceled);
                deviceCameraApp.addEventListener(ErrorEvent.ERROR, cameraError);
                deviceCameraApp.launch(MediaType.IMAGE);
            }
            else {
                trace("Camera interface is not supported.");
            }
        }

        private function imageCaptured(event:MediaEvent):void {
            trace("Media captured...");
            var imagePromise:MediaPromise = event.data;
            imageLoader = new Loader();
            if (imagePromise.isAsync) {
                trace("Asynchronous media promise.");
                imageLoader.contentLoaderInfo.addEventListener(Event.COMPLETE, asyncImageLoaded);
                imageLoader.addEventListener(IOErrorEvent.IO_ERROR, cameraError);
                imageLoader.loadFilePromise(imagePromise);
            }
            else {
                trace("Synchronous media promise.");
                imageLoader.loadFilePromise(imagePromise);
                showMedia(imageLoader);
            }
        }

        private function captureCanceled(event:Event):void {
            trace("Media capture canceled.");
            NativeApplication.nativeApplication.exit();
        }

        private function asyncImageLoaded(event:Event):void {
            trace("Media loaded in memory.");
            showMedia(imageLoader);
        }

        private function showMedia(loader:Loader):void {
            this.addChild(loader);
        }

        private function cameraError(error:ErrorEvent):void {
            trace("Error:" + error.text);
            NativeApplication.nativeApplication.exit();
        }
    }
}
Michael Chaize: Android, AIR, and the Cameracancel The cancel event is dispatched when the user closes the Camera UI without saving a picture or video.flash.events.Event.CANCELflash.events.Event The cancel event is dispatched when the user closes the Camera UI without saving a picture or video. error The error event is dispatched when the default camera cannot be opened.flash.events.ErrorEvent.ERRORflash.events.ErrorEvent The error event is dispatched when the default camera cannot be opened. complete The complete event is dispatched when the user either captures a still picture or video in the Camera UI.flash.events.MediaEvent.COMPLETEflash.events.MediaEvent The complete event is dispatched when the user either captures a still picture or video in the Camera UI. CameraUI Creates a CameraUI object. Creates a CameraUI object. launch Launches the default camera application on the device.requestedMediaTypeStringThe type of media object to capture. The valid values for this parameter are defined in the MediaType class:
  • MediaType.IMAGE
  • MediaType.VIDEO
Launches the default camera application on the device.

You can capture either still images or video with this class. Video capture uses the "Quality Low" camcorder profile on the device.

When the launch() method is called, the default camera application on the device is invoked. The AIR application loses focus and waits for the user to capture a still image or to finish capturing video. Once the desired media is captured by the user, the AIR application regains focus and this CameraUI object dispatches a complete event. If the user cancels the operation, this CameraUI object dispatches a cancel event instead.

Note: It is possible for the AIR application to be shut down by the Android operating system while it is in the background waiting for the user to capture an image or video. If this happens, the user must restart the application. The AIR application does not dispatch a media event for the previous image capture.

You can access the captured media file using the data property of the MediaEvent object dispatched for the complete event. This property is an instance of the MediaPromise class, which you can load into your application using the loadFilePromise() method of the Loader class. Note that the device camera can save captured media in a variety of formats. Video is particularly problematic in this regard. It might not be possible to display the captured media in AIR.

MediaTypeMediaPromiseLoader.loadFilePromise()completeflash.events:MediaEventDispatched when a media object is captured. Dispatched when a media object is captured.cancelflash.events:EventDispatched when the user exits from the native camera without capturing a media object. Dispatched when the user exits from the native camera without capturing a media object.errorflash.events:ErrorEventDispatched if the default camera application is already in use. Dispatched if the default camera application is already in use.errorflash.events:ErrorEventDispatched if the AIR application is in the background when it calls this function. Dispatched if the AIR application is in the background when it calls this function.
isSupported Reports whether the CameraUI class is supported on the current device.Boolean Reports whether the CameraUI class is supported on the current device.
SoundCodec The SoundCodec class is an enumeration of constant values used in setting the codec property of the Microphone class.Object The SoundCodec class is an enumeration of constant values used in setting the codec property of the Microphone class. NELLYMOSER Specifies that the Nellymoser codec be used for compressing audio.NellyMoserString Specifies that the Nellymoser codec be used for compressing audio. This constant is the default value of the Microphone.codec property. SPEEX Specifies that the Speex codec be used for compressing audio.SpeexString Specifies that the Speex codec be used for compressing audio. CameraRoll The CameraRoll class allows you to access image data in the system media library or "camera roll." AIR profile support: This feature is supported on mobile devices, but it is not supported on desktop operating systems or AIR for TV devices.flash.events:EventDispatcher The CameraRoll class allows you to access image data in the system media library or "camera roll."

AIR profile support: This feature is supported on mobile devices, but it is not supported on desktop operating systems or AIR for TV devices. See AIR Profile Support for more information regarding API support across multiple profiles.

The CameraRoll.addBitmapData() method adds an image to the device's dedicated media library. To check at run time whether your application supports the CameraRoll.addBitmapData() method, check the CameraRoll.supportsAddBitmapData property.

The CameraRoll.browseForImage() method opens an image-choosing dialog that allows a user to choose an image in the media library. When the user selects an image, the CameraRoll object dispatches a select event. Use the MediaEvent object dispatched for this event to access the chosen image. To check at run time whether your application supports the CameraRoll.browseForImage() method, check the CameraRoll.supportsBrowseForImage property.

cancel Dispatched when a user cancels a browse-for-image operation without selecting an image.flash.events.Event.CANCELflash.events.Event Dispatched when a user cancels a browse-for-image operation without selecting an image. select Dispatched when a user selects an image from the device media library.flash.events.MediaEvent.SELECTflash.events.MediaEvent Dispatched when a user selects an image from the device media library.

The MediaEvent object dispatched for this event provides access to the chosen media.

error The error event is dispatched when an error occurs.flash.events.ErrorEvent.ERRORflash.events.ErrorEvent The error event is dispatched when an error occurs.

Sources of errors include:

  • An image browser cannot be opened.
  • An image browser is already in use.
  • The AIR application attempts to browse for an image while in the background.
  • An image cannot be added to the media library.
  • A method is called that is not supported on the device.
complete Signals that an addBitmapData() operation completed successfully.flash.events.Event.COMPLETEflash.events.Event Signals that an addBitmapData() operation completed successfully. CameraRoll Creates a CameraRoll object. Creates a CameraRoll object.

There is only a single media library supported by ActionScript. All CameraRoll objects save to the same image repository.

addBitmapData Adds an image to the device camera roll.bitmapDataflash.display:BitmapDataa BitmapData object containing the image to send to the camera roll. Adds an image to the device camera roll.

To check at run time whether your application supports the CameraRoll.addBitmapData() method, check the CameraRoll.supportsAddBitmapData property.
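As a sketch of how these pieces fit together (the stage-snapshot contents and sizes here are illustrative assumptions, not part of the API), saving an image to the media library might look like this:

```actionscript
import flash.display.BitmapData;
import flash.events.ErrorEvent;
import flash.events.Event;
import flash.media.CameraRoll;

if( CameraRoll.supportsAddBitmapData )
{
    var cameraRoll:CameraRoll = new CameraRoll();
    // complete signals a successful addBitmapData() operation;
    // error is dispatched if the image cannot be added.
    cameraRoll.addEventListener( Event.COMPLETE, onSaved );
    cameraRoll.addEventListener( ErrorEvent.ERROR, onSaveError );

    // Draw the current stage into a BitmapData object and save it.
    // Using the stage as the image source is a placeholder assumption.
    var bitmapData:BitmapData = new BitmapData( stage.stageWidth, stage.stageHeight );
    bitmapData.draw( stage );
    cameraRoll.addBitmapData( bitmapData );
}
else
{
    trace( "CameraRoll.addBitmapData() is not supported on this device." );
}

function onSaved( event:Event ):void
{
    trace( "Image saved to the media library." );
}

function onSaveError( event:ErrorEvent ):void
{
    trace( "Error: " + event.text );
}
```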

browseForImage Opens an image browser dialog to allow the user to select an existing image from the device camera roll. Opens an image browser dialog to allow the user to select an existing image from the device camera roll.

When the user selects an image, this CameraRoll instance dispatches a select event containing a MediaEvent object. Use the data property of the MediaEvent object to load the image. The data property is a MediaPromise object, which you can load using the loadFilePromise() method of the Loader class.

To check at run time whether your application supports the CameraRoll.browseForImage() method, check the CameraRoll.supportsBrowseForImage property.

    package flash.media.examples
    {
        import flash.media.CameraRoll;
        import flash.media.MediaPromise;
        import flash.media.MediaType;
        import flash.events.MediaEvent;
        import flash.events.Event;
        import flash.display.Loader;
        import flash.display.Sprite;
        import flash.events.IOErrorEvent;
        import flash.display.StageAlign;
        import flash.display.StageScaleMode;

        public class CameraRollTest extends Sprite
        {
            private var mediaSource:CameraRoll = new CameraRoll();
            private var imageLoader:Loader;

            public function CameraRollTest()
            {
                this.stage.align = StageAlign.TOP_LEFT;
                this.stage.scaleMode = StageScaleMode.NO_SCALE;

                if( CameraRoll.supportsBrowseForImage )
                {
                    log( "Browsing for image..." );
                    mediaSource.addEventListener( MediaEvent.SELECT, imageSelected );
                    mediaSource.addEventListener( Event.CANCEL, browseCanceled );
                    mediaSource.browseForImage();
                }
                else
                {
                    log( "Browsing in camera roll is not supported." );
                }
            }

            private function imageSelected( event:MediaEvent ):void
            {
                log( "Image selected..." );
                var imagePromise:MediaPromise = event.data;
                imageLoader = new Loader();
                if( imagePromise.isAsync )
                {
                    log( "Asynchronous media promise." );
                    imageLoader.contentLoaderInfo.addEventListener( Event.COMPLETE, imageLoaded );
                    imageLoader.contentLoaderInfo.addEventListener( IOErrorEvent.IO_ERROR, imageLoadFailed );
                    imageLoader.loadFilePromise( imagePromise );
                }
                else
                {
                    log( "Synchronous media promise." );
                    imageLoader.loadFilePromise( imagePromise );
                    this.addChild( imageLoader );
                }
            }

            private function browseCanceled( event:Event ):void
            {
                log( "Image browse canceled." );
            }

            private function imageLoaded( event:Event ):void
            {
                log( "Image loaded asynchronously." );
                this.addChild( imageLoader );
            }

            private function imageLoadFailed( event:Event ):void
            {
                log( "Image load failed." );
            }

            private function log( text:String ):void
            {
                trace( text );
            }
        }
    }
MediaEventMediaPromiseLoader.loadFilePromise()selectflash.events:MediaEventDispatched when the user chooses an image. Dispatched when the user chooses an image.cancelflash.events:EventDispatched when a user cancels the browse operation. Dispatched when a user cancels the browse operation.errorflash.events:ErrorEventDispatched if the default image browser application is already in use. Dispatched if the default image browser application is already in use.errorflash.events:ErrorEventDispatched if the AIR application is in the background when it calls this function. Dispatched if the AIR application is in the background when it calls this function.
supportsAddBitmapData Whether the CameraRoll.addBitmapData() method is supported.Boolean Whether the CameraRoll.addBitmapData() method is supported. Currently, the feature is only supported in AIR applications on mobile devices. supportsBrowseForImage Reports whether the CameraRoll.browseForImage() method is supported.BooleanReports whether the CameraRoll browseForImage() method is supported. Reports whether the CameraRoll.browseForImage() method is supported. Currently, the feature is only supported in AIR applications on mobile devices.
Microphone Use the Microphone class to monitor or capture audio from a microphone.flash.events:EventDispatcher Use the Microphone class to monitor or capture audio from a microphone.

To get a reference to a Microphone instance, use the Microphone.getMicrophone() method or the Microphone.getEnhancedMicrophone() method. An enhanced microphone instance can perform acoustic echo cancellation. Use acoustic echo cancellation to create real-time audio/video applications that don't require headsets.

Create a real-time chat application

To create a real-time chat application, capture audio and send it to Flash Media Server. Use the NetConnection and NetStream classes to send the audio stream to Flash Media Server. Flash Media Server can broadcast the audio to other clients. To create a chat application that doesn't require headsets, use acoustic echo cancellation. Acoustic echo cancellation prevents the feedback loop that occurs when audio enters a microphone, travels out the speakers, and enters the microphone again. To use acoustic echo cancellation, call the Microphone.getEnhancedMicrophone() method to get a reference to a Microphone instance. Set Microphone.enhancedOptions to an instance of the MicrophoneEnhancedOptions class to configure settings.

Play microphone audio locally

Call the Microphone setLoopback() method to route the microphone audio directly to the local computer or device audio output. Uncontrolled audio feedback is an inherent danger and is likely to occur whenever the audio output can be picked up by the microphone input. The setUseEchoSuppression() method can reduce, but not eliminate, the risk of feedback amplification.

Capture microphone audio for local recording or processing

To capture microphone audio, listen for the sampleData events dispatched by a Microphone instance. The SampleDataEvent object dispatched for this event contains the audio data.

For information about capturing video, see the Camera class.

Runtime microphone support

The Microphone class is not supported in Flash Player running in a mobile browser.

AIR profile support: The Microphone class is supported on desktop operating systems, but it is not supported on all mobile devices. It is not supported on AIR for TV devices. See AIR Profile Support for more information regarding API support across multiple profiles.

You can test for support at run time using the Microphone.isSupported property. Note that for AIR for TV devices, Microphone.isSupported is true but Microphone.getMicrophone() always returns null.

Privacy controls

Flash Player displays a Privacy dialog box that lets the user choose whether to allow or deny access to the microphone. Your application window size must be at least 215 x 138 pixels, the minimum size required to display the dialog box, or access is denied automatically.

Content running in the AIR application sandbox does not need permission to access the microphone and no dialog is displayed. AIR content running outside the application sandbox does require permission and the Privacy dialog is displayed.

The following example captures sound using echo suppression from a microphone after the user allows access to their computer's microphone. The Security.showSettings() method displays the Flash Player dialog box, which requests permission to access the user's microphone. The call to setLoopBack(true) reroutes input to the local speaker, so you can hear the sound while you run the example.

Two listeners listen for activity and status events. The activity event is dispatched at the start and end (if any) of the session and is captured by the activityHandler() method, which traces information on the event. The status event is dispatched if the attached microphone object reports any status information; it is captured and traced using the statusHandler() method.

Note: A microphone must be attached to your computer for this example to work correctly.

    package
    {
        import flash.display.Sprite;
        import flash.events.*;
        import flash.media.Microphone;
        import flash.system.Security;

        public class MicrophoneExample extends Sprite
        {
            public function MicrophoneExample()
            {
                var mic:Microphone = Microphone.getMicrophone();
                Security.showSettings("2");
                if (mic != null)
                {
                    mic.setLoopBack(true);
                    mic.setUseEchoSuppression(true);
                    mic.addEventListener(ActivityEvent.ACTIVITY, activityHandler);
                    mic.addEventListener(StatusEvent.STATUS, statusHandler);
                }
            }

            private function activityHandler(event:ActivityEvent):void
            {
                trace("activityHandler: " + event);
            }

            private function statusHandler(event:StatusEvent):void
            {
                trace("statusHandler: " + event);
            }
        }
    }
flash.media.Cameraflash.media.MicrophoneEnhancedModeflash.media.MicrophoneEnhancedOptionsaYo Binitie: Implementing Acoustic Echo Suppression in Flash/Flex applicationsCristophe Coenraets: Voice Notes for AndroidMichael Chaize: AIR, Android, and the Microphonestatus Dispatched when a microphone reports its status.flash.events.StatusEvent.STATUSflash.events.StatusEvent Dispatched when a microphone reports its status. If the value of the code property is "Microphone.Muted", the user has refused to allow the SWF file access to the microphone. If the value of the code property is "Microphone.Unmuted", the user has allowed the SWF file access to the microphone.

Status events are not dispatched in Adobe AIR applications; access to the microphone cannot be changed dynamically. On most platforms, AIR applications can always access the microphone. On Android, an application must specify the Android RECORD_AUDIO permission in the application descriptor. Otherwise, Android denies access to the microphone altogether.
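For Android, the permission is typically declared in the manifestAdditions element of the AIR application descriptor. The following is a sketch; the surrounding descriptor content and namespace version are omitted here:

```xml
<android>
    <manifestAdditions><![CDATA[
        <manifest>
            <!-- Required for microphone access on Android;
                 without it, Android denies access altogether. -->
            <uses-permission android:name="android.permission.RECORD_AUDIO"/>
        </manifest>
    ]]></manifestAdditions>
</android>
```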

Microphone.getMicrophone()
sampleData Dispatched when the microphone has sound data in the buffer.flash.events.SampleDataEvent.SAMPLE_DATAflash.events.SampleDataEvent Dispatched when the microphone has sound data in the buffer.

The Microphone.rate property determines the number of samples generated per second. The number of samples per event is a factor of the number of samples per second and the latency between event calls.

The following example captures 4 seconds of audio samples from the default microphone and then plays the audio back. Be sure that a microphone is attached. The micSampleDataHandler() method is the event listener for the sampleData event of the Microphone object; it reads the samples as they become available and appends their values to a ByteArray object. A Timer object is set for 4 seconds. When the timer fires, its handler removes the sampleData event listener from the Microphone object, creates a Sound object, and adds a sampleData event listener to the Sound object. That listener, the playbackSampleHandler() method, supplies audio samples for the Sound object to play, retrieving them from the ByteArray that stored the Microphone samples. Each sample is written to the Sound object twice, because the Microphone records monaural sound while the Sound object requests stereo pairs of samples. The rate property of the Microphone object is set to 44 to match the 44-kHz sample rate used by Sound objects.
    const DELAY_LENGTH:int = 4000;
    var mic:Microphone = Microphone.getMicrophone();
    mic.setSilenceLevel(0, DELAY_LENGTH);
    mic.gain = 100;
    mic.rate = 44;
    mic.addEventListener(SampleDataEvent.SAMPLE_DATA, micSampleDataHandler);

    var timer:Timer = new Timer(DELAY_LENGTH);
    timer.addEventListener(TimerEvent.TIMER, timerHandler);
    timer.start();

    var soundBytes:ByteArray = new ByteArray();

    function micSampleDataHandler(event:SampleDataEvent):void
    {
        while(event.data.bytesAvailable)
        {
            var sample:Number = event.data.readFloat();
            soundBytes.writeFloat(sample);
        }
    }

    function timerHandler(event:TimerEvent):void
    {
        mic.removeEventListener(SampleDataEvent.SAMPLE_DATA, micSampleDataHandler);
        timer.stop();
        soundBytes.position = 0;
        var sound:Sound = new Sound();
        sound.addEventListener(SampleDataEvent.SAMPLE_DATA, playbackSampleHandler);
        sound.play();
    }

    function playbackSampleHandler(event:SampleDataEvent):void
    {
        for (var i:int = 0; i < 8192 && soundBytes.bytesAvailable > 0; i++)
        {
            var sample:Number = soundBytes.readFloat();
            event.data.writeFloat(sample);
            event.data.writeFloat(sample);
        }
    }

flash.events.SampleDataEvent
activity Dispatched when a microphone starts or stops recording due to detected silence.flash.events.ActivityEvent.ACTIVITYflash.events.ActivityEvent Dispatched when a microphone starts or stops recording due to detected silence.

To specify the amount of sound required to trigger this event with an activating property of true, or the amount of time that must elapse without sound to trigger this event with an activating property of false, use Microphone.setSilenceLevel().

For a Microphone object to dispatch activity events, the application must be monitoring the input, either by calling setLoopback( true ), by listening for sampleData events, or by attaching the microphone to a NetStream object.

setSilenceLevel()
getEnhancedMicrophone Returns a reference to an enhanced Microphone object that can perform acoustic echo cancellation.A reference to a Microphone object for capturing audio. If enhanced audio fails to initialize, returns null. flash.media:Microphoneindexint-1The index value of the microphone. Returns a reference to an enhanced Microphone object that can perform acoustic echo cancellation. Use acoustic echo cancellation to create audio/video chat applications that don't require headsets.

The index parameter works the same way in the Microphone.getEnhancedMicrophone() method and the Microphone.getMicrophone() method.

Important: At any given time, only a single enhanced microphone instance can exist. All other Microphone instances stop providing audio data and receive a StatusEvent with the code property Microphone.Unavailable. If enhanced audio fails to initialize, calls to this method return null, setting a value for Microphone.enhancedOptions has no effect, and all existing Microphone instances function as before.

To configure an enhanced Microphone object, set the Microphone.enhancedOptions property. The following code uses an enhanced Microphone object and full-duplex acoustic echo cancellation in a local test:

    var mic:Microphone = Microphone.getEnhancedMicrophone();
    var options:MicrophoneEnhancedOptions = new MicrophoneEnhancedOptions();
    options.mode = MicrophoneEnhancedMode.FULL_DUPLEX;
    mic.enhancedOptions = options;
    mic.setLoopBack(true);

The setUseEchoSuppression() method is ignored when using acoustic echo cancellation.

When a SWF file tries to access the object returned by Microphone.getEnhancedMicrophone() —for example, when you call NetStream.attachAudio()— Flash Player displays a Privacy dialog box that lets the user choose whether to allow or deny access to the microphone. (Make sure your Stage size is at least 215 x 138 pixels; this is the minimum size Flash Player requires to display the dialog box.)

Microphone.getMicrophone()Microphone.enhancedOptionsMicrophone.status
getMicrophone Returns a reference to a Microphone object for capturing audio.A reference to a Microphone object for capturing audio. flash.media:Microphoneindexint-1The index value of the microphone. Returns a reference to a Microphone object for capturing audio. To begin capturing the audio, you must attach the Microphone object to a NetStream object (see NetStream.attachAudio()).

Multiple calls to Microphone.getMicrophone() reference the same microphone. Thus, if your code contains the lines mic1 = Microphone.getMicrophone() and mic2 = Microphone.getMicrophone(), both mic1 and mic2 reference the same (default) microphone.

In general, you should not pass a value for index. Simply call Microphone.getMicrophone() to return a reference to the default microphone. Using the Microphone Settings section of the Flash Player Settings panel, the user can specify the default microphone the application should use. (The user accesses the Flash Player Settings panel by right-clicking Flash Player content running in a web browser.) If you pass a value for index, you can reference a microphone other than the one the user chooses. Doing so is appropriate only in rare cases, for example when your application captures audio from two microphones at the same time. Content running in Adobe AIR also uses the Flash Player setting for the default microphone.

Use the Microphone.index property to get the index value of the current Microphone object. You can then pass this value to other methods of the Microphone class.

When a SWF file tries to access the object returned by Microphone.getMicrophone() —for example, when you call NetStream.attachAudio()— Flash Player displays a Privacy dialog box that lets the user choose whether to allow or deny access to the microphone. (Make sure your Stage size is at least 215 x 138 pixels; this is the minimum size Flash Player requires to display the dialog box.)

When the user responds to this dialog box, a status event is dispatched that indicates the user's response. You can also check the Microphone.muted property to determine if the user has allowed or denied access to the microphone.

If Microphone.getMicrophone() returns null, either the microphone is in use by another application, or no microphones are installed on the system. To determine whether any microphones are installed, use Microphone.names.length. To display the Flash Player Microphone Settings panel, which lets the user choose the microphone to be referenced by Microphone.getMicrophone(), use Security.showSettings().

The following example shows how you can request access to the user's microphone using the static Microphone.getMicrophone() method and listening for the status event. Example provided by ActionScriptExamples.com.

    var mic:Microphone = Microphone.getMicrophone();
    mic.setLoopBack();
    mic.addEventListener(StatusEvent.STATUS, mic_status);

    var tf:TextField = new TextField();
    tf.autoSize = TextFieldAutoSize.LEFT;
    tf.text = "Detecting microphone...";
    addChild(tf);

    function mic_status(evt:StatusEvent):void
    {
        tf.text = "Microphone is muted?: " + mic.muted;
        switch (evt.code)
        {
            case "Microphone.Unmuted":
                tf.appendText("\n" + "Microphone access was allowed.");
                break;
            case "Microphone.Muted":
                tf.appendText("\n" + "Microphone access was denied.");
                break;
        }
    }
Microphone.statusflash.net.NetStream.attachAudio()flash.system.Security.showSettings()statusflash.events:StatusEventDispatched when a microphone reports its status. If the value of the code property is "Microphone.Muted", the user has refused to allow the SWF file access to the user's microphone. If the value of the code property is "Microphone.Unmuted", the user has allowed the SWF file access to the user's microphone. Dispatched when a microphone reports its status.
setLoopBack Routes audio captured by a microphone to the local speakers.stateBooleantrue Routes audio captured by a microphone to the local speakers. setSilenceLevel Sets the minimum input level that should be considered sound and (optionally) the amount of silent time signifying that silence has actually begun.silenceLevelNumberThe amount of sound required to activate the microphone and dispatch the activity event. Acceptable values range from 0 to 100. timeoutint-1The number of milliseconds that must elapse without activity before Flash Player or Adobe AIR considers sound to have stopped and dispatches the activity event. The default value is 2000 (2 seconds). (Note: The default value shown in the signature, -1, is an internal value that indicates to Flash Player or Adobe AIR to use 2000.) Sets the minimum input level that should be considered sound and (optionally) the amount of silent time signifying that silence has actually begun.
  • To prevent the microphone from detecting sound at all, pass a value of 100 for silenceLevel; the activity event is never dispatched.
  • To determine the amount of sound the microphone is currently detecting, use Microphone.activityLevel.

Speex includes voice activity detection (VAD) and automatically reduces bandwidth when no voice is detected. When using the Speex codec, Adobe recommends that you set the silence level to 0.

Activity detection is the ability to detect when audio levels suggest that a person is talking. When someone is not talking, bandwidth can be saved because there is no need to send the associated audio stream. This information can also be used for visual feedback so that users know they (or others) are silent.

Silence values correspond directly to activity values. Complete silence is an activity value of 0. Constant loud noise (as loud as can be registered based on the current gain setting) is an activity value of 100. After gain is appropriately adjusted, your activity value is less than your silence value when you're not talking; when you are talking, the activity value exceeds your silence value.

This method is similar to Camera.setMotionLevel(); both methods are used to specify when the activity event is dispatched. However, these methods have a significantly different impact on publishing streams:

  • Camera.setMotionLevel() is designed to detect motion and does not affect bandwidth usage. Even if a video stream does not detect motion, video is still sent.
  • Microphone.setSilenceLevel() is designed to optimize bandwidth. When an audio stream is considered silent, no audio data is sent. Instead, a single message is sent, indicating that silence has started.
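A minimal sketch of tuning silence detection follows; the threshold and timeout values are illustrative, not recommendations:

```actionscript
import flash.events.ActivityEvent;
import flash.media.Microphone;

var mic:Microphone = Microphone.getMicrophone();
if (mic != null)
{
    // The input must be monitored (for example, via loopback) for the
    // Microphone object to dispatch activity events at all.
    mic.setLoopBack( true );

    // Treat input above activity level 10 as sound; after 1 second
    // below that level, consider the stream silent.
    mic.setSilenceLevel( 10, 1000 );
    mic.addEventListener( ActivityEvent.ACTIVITY, onActivity );
}

function onActivity( event:ActivityEvent ):void
{
    // event.activating is true when sound starts, false when silence begins.
    trace( "Sound detected: " + event.activating +
           ", current level: " + mic.activityLevel );
}
```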
flash.media.Camera.setMotionLevel()flash.media.Microphone.activityLevelflash.media.Microphone.activityflash.media.Microphone.gainflash.media.Microphone.silenceLevelflash.media.Microphone.silenceTimeout
setUseEchoSuppression Specifies whether to use the echo suppression feature of the audio codec.useEchoSuppressionBooleanA Boolean value indicating whether to use echo suppression (true) or not (false). Specifies whether to use the echo suppression feature of the audio codec. The default value is false unless the user has selected Reduce Echo in the Flash Player Microphone Settings panel.

Echo suppression is an effort to reduce the effects of audio feedback, which is caused when sound going out the speaker is picked up by the microphone on the same system. (This is different from acoustic echo cancellation, which completely removes the feedback. The setUseEchoSuppression() method is ignored when you call the getEnhancedMicrophone() method to use acoustic echo cancellation.)

Generally, echo suppression is advisable when the sound being captured is played through speakers instead of a headset. If your SWF file allows users to specify the sound output device, you may want to call Microphone.setUseEchoSuppression(true) if they indicate they are using speakers and will be using the microphone as well.

Users can also adjust these settings in the Flash Player Microphone Settings panel.

flash.media.Microphone.setUseEchoSuppression()flash.media.Microphone.useEchoSuppression
activityLevel The amount of sound the microphone is detecting.Number The amount of sound the microphone is detecting. Values range from 0 (no sound is detected) to 100 (very loud sound is detected). The value of this property can help you determine a good value to pass to the Microphone.setSilenceLevel() method.

If the microphone's muted property is true, the value of this property is always -1.

flash.media.Microphone.getMicrophone()flash.media.Microphone.setSilenceLevel()flash.media.Microphone.gain
codec The codec to use for compressing audio.String The codec to use for compressing audio. Available codecs are Nellymoser (the default) and Speex. The enumeration class SoundCodec contains the various values that are valid for the codec property.

If you use the Nellymoser codec, you can set the sample rate using the Microphone.rate property. If you use the Speex codec, the sample rate is fixed at 16 kHz.

Speex includes voice activity detection (VAD) and automatically reduces bandwidth when no voice is detected. When using the Speex codec, Adobe recommends that you set the silence level to 0. To set the silence level, use the Microphone.setSilenceLevel() method.

setSilenceLevel()
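Switching a microphone to Speex might look like this sketch (the configuration choices shown follow the recommendations above):

```actionscript
import flash.media.Microphone;
import flash.media.SoundCodec;

var mic:Microphone = Microphone.getMicrophone();
if (mic != null)
{
    mic.codec = SoundCodec.SPEEX;  // default is SoundCodec.NELLYMOSER

    // With Speex, the sample rate is fixed at 16 kHz, so the rate
    // property does not apply. Adobe recommends a silence level of 0
    // so that Speex voice activity detection manages bandwidth.
    mic.setSilenceLevel( 0 );
}
```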
enableVAD Enable Speex voice activity detection.Boolean Enable Speex voice activity detection. encodeQuality The encoded speech quality when using the Speex codec.int The encoded speech quality when using the Speex codec. Possible values are from 0 to 10. The default value is 6. Higher numbers represent higher quality but require more bandwidth, as shown in the following table. The bit rate values that are listed represent net bit rates and do not include packetization overhead.

Quality value    Required bit rate (kilobits per second)
0                3.95
1                5.75
2                7.75
3                9.80
4                12.8
5                16.8
6                20.6
7                23.8
8                27.8
9                34.2
10               42.2

codec
enhancedOptions Controls enhanced microphone options.flash.media:MicrophoneEnhancedOptions Controls enhanced microphone options. For more information, see MicrophoneEnhancedOptions class. This property is ignored for non-enhanced Microphone instances. flash.media.MicrophoneEnhancedOptionsframesPerPacket Number of Speex speech frames transmitted in a packet (message).int Number of Speex speech frames transmitted in a packet (message). Each frame is 20 ms long. The default value is two frames per packet.

The more Speex frames in a message, the lower the bandwidth required, but the longer the delay in sending the message. Fewer Speex frames per message increase the bandwidth required but reduce delay.

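For example, a sketch (names are illustrative) of trading bandwidth for latency in a latency-sensitive chat application:

```actionscript
import flash.media.Microphone;
import flash.media.SoundCodec;

var mic:Microphone = Microphone.getMicrophone();
if (mic != null) {
    mic.codec = SoundCodec.SPEEX;
    // Each Speex frame is 20 ms. One frame per packet minimizes delay;
    // the default of two frames per packet halves the packet overhead.
    mic.framesPerPacket = 1;
}
```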
gain The amount by which the microphone boosts the signal.Number The amount by which the microphone boosts the signal. Valid values are 0 to 100. The default value is 50. flash.media.Microphone.gainindex The index of the microphone, as reflected in the array returned by Microphone.names.int The index of the microphone, as reflected in the array returned by Microphone.names. flash.media.Microphone.getMicrophone()flash.media.Microphone.namesisSupported The isSupported property is set to true if the Microphone class is supported on the current platform, otherwise it is set to false.Boolean The isSupported property is set to true if the Microphone class is supported on the current platform, otherwise it is set to false. muted Specifies whether the user has denied access to the microphone (true) or allowed access (false).Boolean Specifies whether the user has denied access to the microphone (true) or allowed access (false). When this value changes, a status event is dispatched. For more information, see Microphone.getMicrophone(). flash.media.Microphone.getMicrophone()flash.media.Microphone.statusname The name of the current sound capture device, as returned by the sound capture hardware.String The name of the current sound capture device, as returned by the sound capture hardware. flash.media.Microphone.getMicrophone()flash.media.Microphone.namesnames An array of strings containing the names of all available sound capture devices.Array An array of strings containing the names of all available sound capture devices. The names are returned without having to display the Flash Player Privacy Settings panel to the user. This array provides the zero-based index of each sound capture device and the number of sound capture devices on the system, through the Microphone.names.length property. For more information, see the Array class entry.

Calling Microphone.names requires an extensive examination of the hardware, and it may take several seconds to build the array. In most cases, you can just use the default microphone.

Note: To determine the name of the current microphone, use the name property.

Arrayflash.media.Microphone.nameflash.media.Microphone.getMicrophone()
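A sketch of enumerating capture devices and opening one by index (the index chosen here is arbitrary):

```actionscript
import flash.media.Microphone;

// Building this array can take several seconds; enumerate only when the
// user actually needs to pick a specific device.
var deviceNames:Array = Microphone.names;
for (var i:int = 0; i < deviceNames.length; i++) {
    trace(i + ": " + deviceNames[i]);
}

// Open a specific device by its zero-based index:
var mic:Microphone = Microphone.getMicrophone(0);
```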
noiseSuppressionLevel Maximum attenuation of the noise in dB (negative number) used for Speex encoder.int Maximum attenuation of the noise in dB (negative number) used for Speex encoder. If enabled, noise suppression is applied to sound captured from Microphone before Speex compression. Set to 0 to disable noise suppression. Noise suppression is enabled by default with maximum attenuation of -30 dB. Ignored when Nellymoser codec is selected. rate The rate at which the microphone is capturing sound, in kHz.int The rate at which the microphone is capturing sound, in kHz. Acceptable values are 5, 8, 11, 22, and 44. The default value is 8 kHz if your sound capture device supports this value. Otherwise, the default value is the next available capture level above 8 kHz that your sound capture device supports, usually 11 kHz.

Note: The actual rate differs slightly from the rate value, as noted in the following table:

rate value    Actual frequency
44            44,100 Hz
22            22,050 Hz
11            11,025 Hz
8             8,000 Hz
5             5,512 Hz
flash.media.Microphone.rate
silenceLevel The amount of sound required to activate the microphone and dispatch the activity event.Number The amount of sound required to activate the microphone and dispatch the activity event. The default value is 10. flash.media.Microphone.gainflash.media.Microphone.setSilenceLevel()silenceTimeout The number of milliseconds between the time the microphone stops detecting sound and the time the activity event is dispatched.int The number of milliseconds between the time the microphone stops detecting sound and the time the activity event is dispatched. The default value is 2000 (2 seconds).

To set this value, use the Microphone.setSilenceLevel() method.

flash.media.Microphone.setSilenceLevel()
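Both values are set through a single call; for example, a sketch (values are illustrative) requiring a level of 5 to trigger activity and one second of silence before the deactivating activity event:

```actionscript
import flash.media.Microphone;

var mic:Microphone = Microphone.getMicrophone();
if (mic != null) {
    // First argument sets silenceLevel, second sets silenceTimeout (ms).
    mic.setSilenceLevel(5, 1000);
    trace(mic.silenceLevel);   // 5
    trace(mic.silenceTimeout); // 1000
}
```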
soundTransform Controls the sound of this microphone object when it is in loopback mode.flash.media:SoundTransform Controls the sound of this microphone object when it is in loopback mode. useEchoSuppression Set to true if echo suppression is enabled; false otherwise.Boolean Set to true if echo suppression is enabled; false otherwise. The default value is false unless the user has selected Reduce Echo in the Flash Player Microphone Settings panel. flash.media.Microphone.setUseEchoSuppression()
StageVideo The StageVideo object uses the device's hardware acceleration capabilities, if available, to display live or recorded video in an application.flash.events:EventDispatcher The StageVideo object uses the device's hardware acceleration capabilities, if available, to display live or recorded video in an application. Hardware acceleration capabilities are available on most devices. The StageVideo object supports the same video formats as the Video object. See the flash.net.NetStream class for more information about these formats.

AIR profile support: In AIR 2.5, this feature is supported only on devices that run AIR for TV. See AIR Profile Support for more information regarding API support across multiple profiles.

The video displayed by the StageVideo object always appears in a rectangular area on the stage behind all Flash display list objects. Therefore, the StageVideo object takes advantage of hardware acceleration while supporting the most common case for displaying video: a rectangular display area overlaid with video controls.

The benefits to using a StageVideo object instead of the Video object are:

  • Optimal video playback performance because of using hardware acceleration.
  • Decreased processor and power usage.
  • Flexibility and creativity for development of content, such as video controls, that appears in front of the StageVideo object.

Because the StageVideo object uses device hardware capabilities, a StageVideo object is subject to the following constraints compared to a Video object:

  • The video display area can only be a rectangle. You cannot use more advanced display areas, such as elliptical or irregular shapes.
  • You cannot rotate a StageVideo object.
  • You cannot bitmap cache a StageVideo object.
  • You cannot use BitmapData to access the video data.
  • You cannot embed the video in the SWF file. You can use a StageVideo object only with the NetStream object.
  • You cannot apply filters, blend modes, or alpha values to a StageVideo object.
  • You cannot apply color transforms, 3D transforms, or matrix transforms to a StageVideo object.
  • You cannot apply a mask or scale9Grid to a StageVideo object.
  • Blend modes that you apply to display objects that are in front of a StageVideo object do not apply to the StageVideo object.
  • You can place a StageVideo object only on full pixel boundaries.
  • For each SWF file, Flash Player limits the number of StageVideo objects that can concurrently display videos to four. However, the actual limit can be lower, depending on device hardware resources. On AIR for TV devices, only one StageVideo object at a time can display a video.
  • The video timing is not synchronized with the timing of Flash content that the runtime displays.
  • Though the video presentation is the best available for the given device hardware, it is not 100% "pixel identical" across devices. Slight variations can occur due to driver and hardware differences.
  • A few devices do not support all required color spaces. For example, some devices do not support BT.709, the color space of the H.264 standard. In such cases you can use BT.601 for fast display.
  • You cannot use stage video with WMODE settings such as normal, opaque, or transparent. Stage video supports only WMODE=direct when not in full screen mode. WMODE has no effect in Safari 4 or higher, IE 9 or higher, or in AIR for TV.

The following steps summarize how to use a StageVideo object to play a video:

  1. Listen for the StageVideoAvailabilityEvent.STAGE_VIDEO_AVAILABILITY event to find out when the Stage.stageVideos vector has changed. (Not supported for AIR 2.5 for TV.)
  2. If the StageVideoAvailabilityEvent.STAGE_VIDEO_AVAILABILITY event reports that stage video is available, use the Stage.stageVideos Vector object within the event handler to access a StageVideo object. In AIR 2.5 for TV, access Stage.stageVideos after the first SWF frame has rendered. Note: You cannot create a StageVideo object.
  3. Attach a NetStream object using StageVideo.attachNetStream().
  4. Play the video using NetStream.play().
  5. Listen for the StageVideoEvent.RENDER_STATE event on the StageVideo object to determine the status of playing the video. Receipt of this event also indicates that the width and height properties of the video have been initialized or changed.
  6. Listen for the VideoEvent.RENDER_STATE event on the Video object. This event provides the same statuses as StageVideoEvent.RENDER_STATE, so you can also use it to determine whether GPU acceleration is available. Receipt of this event also indicates that the width and height properties of the video have been initialized or changed. (Not supported for AIR 2.5 for TV.)

If a StageVideoEvent.RENDER_STATE event indicates that the video cannot be played, you can revert to using a Video object instead of a StageVideo object. This event is dispatched after the video has been attached to a NetStream object and is playing. Also, depending on the platform, any change in the playing status can result in dispatching the event. Handle the StageVideoEvent.RENDER_STATE event to ensure that the application plays the video or gracefully does not play the video.

If a running video goes into full screen mode from a WMODE that does not support stage video, stage video can become available. Likewise, if the user exits full screen mode, stage video can become unavailable. In these cases, the Stage.stageVideos vector changes. To receive notification of this change, listen to the StageVideoAvailabilityEvent.STAGE_VIDEO_AVAILABILITY event. NOTE: This notification is not available in AIR 2.5 for TV.

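The steps above can be sketched as follows. This is a minimal outline, not from the original reference: "video.mp4" is a hypothetical file name, the empty client object suppresses onMetaData errors, and a real application would fall back to a Video object when stage video is unavailable or the render state reports a failure:

```actionscript
package {
    import flash.display.Sprite;
    import flash.events.StageVideoAvailabilityEvent;
    import flash.events.StageVideoEvent;
    import flash.geom.Rectangle;
    import flash.media.StageVideo;
    import flash.media.StageVideoAvailability;
    import flash.net.NetConnection;
    import flash.net.NetStream;

    public class StageVideoSketch extends Sprite {
        private var sv:StageVideo;
        private var ns:NetStream;

        public function StageVideoSketch() {
            // Step 1: wait for the availability notification.
            stage.addEventListener(
                StageVideoAvailabilityEvent.STAGE_VIDEO_AVAILABILITY, onAvailability);
        }

        private function onAvailability(e:StageVideoAvailabilityEvent):void {
            if (e.availability == StageVideoAvailability.AVAILABLE
                && stage.stageVideos.length > 0) {
                // Step 2: take a StageVideo object; it cannot be constructed directly.
                sv = stage.stageVideos[0];
                sv.addEventListener(StageVideoEvent.RENDER_STATE, onRenderState);

                // Steps 3 and 4: attach a NetStream and play.
                var nc:NetConnection = new NetConnection();
                nc.connect(null); // progressive (non-server) playback
                ns = new NetStream(nc);
                ns.client = {};
                sv.attachNetStream(ns);
                ns.play("video.mp4"); // hypothetical file name
            }
            // else: fall back to a flash.media.Video object
        }

        private function onRenderState(e:StageVideoEvent):void {
            // Step 5: videoWidth/videoHeight are valid once this event arrives.
            sv.viewPort = new Rectangle(0, 0, sv.videoWidth, sv.videoHeight);
        }
    }
}
```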
flash.events.StageVideoEventflash.events.StageVideoAvailabilityEventflash.events.VideoEventflash.display.Stage.stageVideosflash.media.Videoflash.net.NetStreamUsing the StageVideo class for hardware-accelerated renderingrenderState Dispatched by the StageVideo object when the render state of the StageVideo object changes.flash.events.StageVideoEvent.RENDER_STATEflash.events.StageVideoEvent Dispatched by the StageVideo object when the render state of the StageVideo object changes. attachNetStream Specifies a video stream to be displayed within the boundaries of the StageVideo object in the application.netStreamflash.net:NetStreamA NetStream object. To drop the connection to the StageVideo object, pass null. Specifies a video stream to be displayed within the boundaries of the StageVideo object in the application. The video stream is either a video file played with NetStream.play(), or null. A video file can be stored on the local file system or on Flash Media Server. If the value of the netStream argument is null, the video is no longer played in the StageVideo object.

Before calling attachNetStream() a second time, call the currently attached NetStream object's close() method. Calling close() releases all the resources, including hardware decoders, involved with playing the video. Then you can call attachNetStream() with either another NetStream object or null.

You do not need to use this method if a video file contains only audio; the audio portion of a video file is played automatically when you call NetStream.play(). To control the audio associated with a video file, use the soundTransform property of the NetStream object that plays the video file.

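A sketch of releasing one stream before attaching another, per the close() requirement above (stageVideo, oldStream, connection, and the file name are assumed to exist and are illustrative):

```actionscript
import flash.net.NetStream;

// Release decoders and other resources held by the current stream.
oldStream.close();

// Then attach a new stream (or null) to the same StageVideo object.
var newStream:NetStream = new NetStream(connection);
newStream.client = {};
stageVideo.attachNetStream(newStream);
newStream.play("nextVideo.mp4"); // hypothetical file name
```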
colorSpaces Returns the names of available color spaces for this video surface. Returns the names of available color spaces for this video surface. Usually this list includes "BT.601" and "BT.709". On some configurations, only "BT.601" is supported, which means a video might not be rendered in the correct color space.

Note: On AIR for TV devices, a value of "BT.601" indicates software playback, and a value of "BT.709" indicates hardware playback.

depth The depth level of a StageVideo object relative to other StageVideo objects.intThe depth of a StageVideo object relative to other StageVideo objects. The depth level of a StageVideo object relative to other StageVideo objects.

StageVideo objects always display behind other objects on the stage. If a platform supports more than one StageVideo object, the depth property indicates a StageVideo object's depth level. The bottom StageVideo object's depth property has the smallest value. If multiple StageVideo objects have the same depth setting, the order they appear in the Stage.stageVideos Vector determines their relative depth.

Note: AIR for TV devices support only one StageVideo object. Therefore, this property is not applicable for those devices. It is a placeholder for future support on other devices.

flash.display.Stage.stageVideos
pan The pan setting for displaying the video, specified as a Point object.flash.geom:PointThe Point value is not valid. RangeErrorRangeErrorDetermines which rectangle of a zoomed video is displayed. The pan setting for displaying the video, specified as a Point object.

By default, the value of pan is (0,0). This default value centers the video in the rectangle specified by StageVideo.viewPort.

The pan value is significant only when the zoom property value is not the default value (1.0, 1.0). When a video displays in the StageVideo.viewPort rectangle with the default zoom value, the platform sizes the video to fit exactly into the rectangle. Therefore, the entire video is visible. However, if a zoom factor is specified, the entire video is not visible. In this case, you can set the pan value to specify which subrectangle of the video to show in the StageVideo.viewPort rectangle.

The valid values of the pan property range from (-1.0, -1.0) to (1.0, 1.0). Specifically:

  • A pan value of (-1.0, -1.0) places the upper-left pixel of the video at the upper-left position of the StageVideo.viewPort rectangle.
  • A pan value of (1.0, 1.0) places the lower-right pixel of the video at the lower-right position of the StageVideo.viewPort rectangle.
  • A pan value of (1.0, -1.0) places the upper-right pixel of the video at the upper-right position of the StageVideo.viewPort rectangle.
  • A pan value of (-1.0, 1.0) places the lower-left pixel of the video at the lower-left position of the StageVideo.viewPort rectangle.

Values between -1.0 and 1.0 pan according to scale.

If you set the pan property to a value outside the valid range, a RangeError exception is thrown. The runtime resets the value to the last valid value.

Also, consider that to use a StageVideo object, you assign an element of the Stage.stageVideos Vector object to a StageVideo variable. When you set the pan property of the StageVideo variable, the underlying Stage.stageVideos Vector element also changes. If you later assign that element to another StageVideo variable to play another video, reset the pan property.

zoom
videoHeight An integer specifying the height of the video stream, in pixels.int An integer specifying the height of the video stream, in pixels.

You may want to use this property, for example, to ensure that the user is seeing the video at the same height at which it was captured, regardless of the size of the StageVideo.viewPort rectangle.

videoWidth An integer specifying the width of the video stream, in pixels.int An integer specifying the width of the video stream, in pixels.

You may want to use this property, for example, to ensure that the user is seeing the video at the same width at which it was captured, regardless of the size of the StageVideo.viewPort rectangle.

viewPort The absolute position and size of the video surface in pixels.flash.geom:RectangleThe Rectangle value is not valid. RangeErrorRangeError The absolute position and size of the video surface in pixels.

The position of the video is relative to the upper left corner of the stage.

The valid range of the x and y properties of the viewPort Rectangle object is -8192 to 8191. Therefore, you can position the video completely or partially off the stage. You can also make the video larger than the stage if you make the width and height properties of the viewPort property larger than the stage.

zoom The zoom setting of the video, specified as a Point object.flash.geom:PointThe Point value is not valid. RangeErrorRangeErrorThe zoom setting of the video. The zoom setting of the video, specified as a Point object.

The zoom point is a scale factor. By default, the value of zoom is (1.0, 1.0). This default value displays the entire video in the StageVideo.viewPort rectangle.

The valid values of the zoom property range from (1.0, 1.0) to (16.0, 16.0). The x property of the zoom Point object specifies the zoom value for the horizontal pixels, and the y property specifies the zoom value for the vertical pixels.

For example, a zoom value of (2.0, 2.0) displays only half the horizontal pixels and half the vertical pixels in the StageVideo.viewPort rectangle. That is, the video still fills the StageVideo.viewPort rectangle, but only half the video is visible, creating a 2x zoom effect. Similarly, a zoom value of (16.0, 16.0) displays only 1/16 of the horizontal pixels and 1/16 of the vertical pixels in the StageVideo.viewPort rectangle, zooming in the maximum amount of 16x.

When you set the zoom property, set the pan property so that the StageVideo.viewPort rectangle shows the appropriate subrectangle of the video.

Consider the following situation where it is useful to set a different value for the x and y properties of the zoom Point object. First, note that when a video displays in the StageVideo.viewPort rectangle with the default zoom value, the platform sizes the video to fit exactly into the rectangle. If the video's rectangle does not scale evenly to the StageVideo.viewPort rectangle, the video display can be distorted. That is, the aspect ratios of the video and the StageVideo.viewPort are not equal. This case can occur, for example, if the video has a different width than height, but the StageVideo.viewPort property specifies a square. To resolve the distortion, set different values for the x and y properties of the zoom Point object. Then set the pan property to make sure the StageVideo.viewPort rectangle shows the appropriate subrectangle of the video.

If you set the zoom property to a value outside the valid range, a RangeError exception is thrown. The runtime resets the value to the last valid value.

Also, consider that to use a StageVideo object, you assign an element of the Stage.stageVideos Vector object to a StageVideo variable. When you set the zoom property of the StageVideo variable, the underlying Stage.stageVideos Vector element also changes. If you later assign that element to another StageVideo variable to play another video, reset the zoom property.

pan
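A sketch combining the two properties (stageVideo is assumed to be a StageVideo element taken from Stage.stageVideos, and the values are illustrative):

```actionscript
import flash.geom.Point;

// 2x zoom: half the horizontal and vertical pixels fill the viewPort.
stageVideo.zoom = new Point(2.0, 2.0);

// Pan to show the upper-left quadrant of the zoomed video.
stageVideo.pan = new Point(-1.0, -1.0);

// A zoom outside (1.0, 1.0)-(16.0, 16.0), or a pan outside -1.0 to 1.0,
// throws a RangeError and the property keeps its last valid value.
```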
MicrophoneEnhancedOptions The MicrophoneEnhancedOptions class provides configuration options for enhanced audio (acoustic echo cancellation).Object The MicrophoneEnhancedOptions class provides configuration options for enhanced audio (acoustic echo cancellation). Acoustic echo cancellation allows multiple parties to communicate in an audio/video chat application without using headsets.

To use acoustic echo cancellation, call Microphone.getEnhancedMicrophone() to get a reference to an enhanced Microphone object. Set the Microphone.enhancedOptions property to an instance of the MicrophoneEnhancedOptions class.

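A sketch of the two-step setup described above (option values shown are the documented defaults; the variable names are illustrative):

```actionscript
import flash.media.Microphone;
import flash.media.MicrophoneEnhancedMode;
import flash.media.MicrophoneEnhancedOptions;

var mic:Microphone = Microphone.getEnhancedMicrophone();
if (mic != null) {
    var options:MicrophoneEnhancedOptions = new MicrophoneEnhancedOptions();
    options.mode = MicrophoneEnhancedMode.FULL_DUPLEX;
    options.echoPath = 128;             // recommended; 256 is the only other value
    options.nonLinearProcessing = true; // suppress residual echo (turn off for music)
    mic.enhancedOptions = options;
}
```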
flash.media.Microphone.enhancedOptionsflash.media.Microphone.getEnhancedMicrophone()autoGain Enable automatic gain control.Boolean Enable automatic gain control. A time-domain automatic gain control algorithm is used with noise gating. The default value is off. echoPath Specifies the echo path (in milliseconds) used for acoustic echo cancellation.int Specifies the echo path (in milliseconds) used for acoustic echo cancellation. A longer echo path results in better echo cancellation. A longer echo path also causes a longer delay and requires more computational complexity. The default value is 128 (recommended). The other possible value is 256. isVoiceDetected Indicates whether the Microphone input detected a voice.int Indicates whether the Microphone input detected a voice.

Possible values are: -1, not enabled; 0, a voice is not detected; 1, a voice is detected.

mode Controls enhanced microphone mode.String Controls enhanced microphone mode. The default value is FULL_DUPLEX for all microphones that aren't USB. The default value for USB microphones is HALF_DUPLEX. See MicrophoneEnhancedMode for possible values and descriptions. flash.media.MicrophoneEnhancedModenonLinearProcessing Enable non-linear processing.Boolean Enable non-linear processing. Non-linear processing suppresses the residual echo when one person is talking. The time-domain non-linear processing technique is used. Turn off non-linear processing for music sources. The default value is true which turns on non-linear processing.
Sound The Sound class lets you work with sound in an application.flash.events:EventDispatcher The Sound class lets you work with sound in an application. The Sound class lets you create a Sound object, load and play an external MP3 file into that object, close the sound stream, and access data about the sound, such as information about the number of bytes in the stream and ID3 metadata. More detailed control of the sound is performed through the sound source — the SoundChannel or Microphone object for the sound — and through the properties in the SoundTransform class that control the output of the sound to the computer's speakers.

In Flash Player 10 and later and AIR 1.5 and later, you can also use this class to work with sound that is generated dynamically. In this case, the Sound object uses the function you assign to a sampleData event handler to poll for sound data. The sound is played as it is retrieved from a ByteArray object that you populate with sound data. You can use Sound.extract() to extract sound data from a Sound object, after which you can manipulate it before writing it back to the stream for playback.

To control sounds that are embedded in a SWF file, use the properties in the SoundMixer class.

Note: The ActionScript 3.0 Sound API differs from ActionScript 2.0. In ActionScript 3.0, you cannot take sound objects and arrange them in a hierarchy to control their properties.

When you use this class, consider the following security model:

  • Loading and playing a sound is not allowed if the calling file is in a network sandbox and the sound file to be loaded is local.
  • By default, loading and playing a sound is not allowed if the calling file is local and tries to load and play a remote sound. A user must grant explicit permission to allow this type of access.
  • Certain operations dealing with sound are restricted. The data in a loaded sound cannot be accessed by a file in a different domain unless you implement a cross-domain policy file. Sound-related APIs that fall under this restriction are Sound.id3, SoundMixer.computeSpectrum(), SoundMixer.bufferTime, and the SoundTransform class.

However, in Adobe AIR, content in the application security sandbox (content installed with the AIR application) is not restricted by these security limitations.

For more information related to security, see the Flash Player Developer Center Topic: Security.

The following example displays information about sound events that take place as an MP3 file is opened and played. To run this example, place a file named MySound.mp3 in the same directory as your SWF file.

package {
    import flash.display.Sprite;
    import flash.events.*;
    import flash.media.Sound;
    import flash.media.SoundChannel;
    import flash.net.URLRequest;

    public class SoundExample extends Sprite {
        private var url:String = "MySound.mp3";
        private var song:SoundChannel;

        public function SoundExample() {
            var request:URLRequest = new URLRequest(url);
            var soundFactory:Sound = new Sound();
            soundFactory.addEventListener(Event.COMPLETE, completeHandler);
            soundFactory.addEventListener(Event.ID3, id3Handler);
            soundFactory.addEventListener(IOErrorEvent.IO_ERROR, ioErrorHandler);
            soundFactory.addEventListener(ProgressEvent.PROGRESS, progressHandler);
            soundFactory.load(request);
            song = soundFactory.play();
        }

        private function completeHandler(event:Event):void {
            trace("completeHandler: " + event);
        }

        private function id3Handler(event:Event):void {
            trace("id3Handler: " + event);
        }

        private function ioErrorHandler(event:Event):void {
            trace("ioErrorHandler: " + event);
        }

        private function progressHandler(event:ProgressEvent):void {
            trace("progressHandler: " + event);
        }
    }
}
flash.net.NetStreamMicrophoneSoundChannelSoundMixerSoundTransformprogress Dispatched when data is received as a load operation progresses.flash.events.ProgressEvent.PROGRESSflash.events.ProgressEvent Dispatched when data is received as a load operation progresses. load()open Dispatched when a load operation starts.flash.events.Event.OPENflash.events.Event Dispatched when a load operation starts. load()ioError Dispatched when an input/output error occurs that causes a load operation to fail.flash.events.IOErrorEvent.IO_ERRORflash.events.IOErrorEvent Dispatched when an input/output error occurs that causes a load operation to fail. load()id3 Dispatched by a Sound object when ID3 data is available for an MP3 sound.flash.events.Event.ID3flash.events.Event Dispatched by a Sound object when ID3 data is available for an MP3 sound. Sound.id3complete Dispatched when data has loaded successfully.flash.events.Event.COMPLETEflash.events.Event Dispatched when data has loaded successfully. load()sampleData Dispatched when the runtime requests new audio data.flash.events.SampleDataEvent.SAMPLE_DATAflash.events.SampleDataEvent Dispatched when the runtime requests new audio data. The following example plays a simple sine wave.

var mySound:Sound = new Sound();
function sineWaveGenerator(event:SampleDataEvent):void {
    for (var c:int = 0; c < 8192; c++) {
        event.data.writeFloat(Math.sin((Number(c + event.position) / Math.PI / 2)) * 0.25);
        event.data.writeFloat(Math.sin((Number(c + event.position) / Math.PI / 2)) * 0.25);
    }
}
mySound.addEventListener(SampleDataEvent.SAMPLE_DATA, sineWaveGenerator);
mySound.play();

extract()play()flash.events.SampleDataEventSound Creates a new Sound object.streamflash.net:URLRequestnull The URL that points to an external MP3 file. 
contextflash.media:SoundLoaderContextnull An optional SoundLoader context object, which can define the buffer time (the minimum number of milliseconds of MP3 data to hold in the Sound object's buffer) and can specify whether the application should check for a cross-domain policy file prior to loading the sound. Creates a new Sound object. If you pass a valid URLRequest object to the Sound constructor, the constructor automatically calls the load() function for the Sound object. If you do not pass a valid URLRequest object to the Sound constructor, you must call the load() function for the Sound object yourself, or the stream will not load.

Once load() is called on a Sound object, you can't later load a different sound file into that Sound object. To load a different sound file, create a new Sound object.

In Flash Player 10 and later and AIR 1.5 and later, instead of using load(), you can use the sampleData event handler to load sound dynamically into the Sound object.
close Closes the stream, causing any download of data to cease.The stream could not be closed, or the stream was not open. IOErrorflash.errors:IOError Closes the stream, causing any download of data to cease. No data may be read from the stream after the close() method is called. In the following example, when the user clicks on the Stop button, the Sound.close() method will be called and the sound will stop streaming.

In the constructor, a text field is created for the Start and Stop button. When the user clicks on the text field, the clickHandler() method is invoked. It handles the starting and stopping of the sound file. Note that depending on the network connection or when the user clicks the Stop button, much of the file could already have been loaded and it may take a while for the sound file to stop playing. A try...catch block is used to catch any IO error that may occur while closing the stream. For example, if the sound is loaded from a local directory and not streamed, error 2029 is caught, stating, "This URLStream object does not have an open stream."

package {
    import flash.display.Sprite;
    import flash.net.URLRequest;
    import flash.media.Sound;
    import flash.text.TextField;
    import flash.text.TextFieldAutoSize;
    import flash.events.MouseEvent;
    import flash.errors.IOError;
    import flash.events.IOErrorEvent;

    public class Sound_closeExample extends Sprite {
        private var snd:Sound = new Sound();
        private var button:TextField = new TextField();
        private var req:URLRequest = new URLRequest("http://av.adobe.com/podcast/csbu_dev_podcast_epi_2.mp3");

        public function Sound_closeExample() {
            button.x = 10;
            button.y = 10;
            button.text = "START";
            button.border = true;
            button.background = true;
            button.selectable = false;
            button.autoSize = TextFieldAutoSize.LEFT;
            button.addEventListener(MouseEvent.CLICK, clickHandler);
            this.addChild(button);
        }

        private function clickHandler(e:MouseEvent):void {
            if (button.text == "START") {
                snd.load(req);
                snd.play();
                snd.addEventListener(IOErrorEvent.IO_ERROR, errorHandler);
                button.text = "STOP";
            }
            else if (button.text == "STOP") {
                try {
                    snd.close();
                    button.text = "Wait for loaded stream to finish.";
                }
                catch (error:IOError) {
                    button.text = "Couldn't close stream " + error.message;
                }
            }
        }

        private function errorHandler(event:IOErrorEvent):void {
            button.text = "Couldn't load the file " + event.text;
        }
    }
}
extract Extracts raw sound data from a Sound object.The number of samples written to the ByteArray specified in the target parameter. Numbertargetflash.utils:ByteArrayA ByteArray object in which the extracted sound samples are placed. lengthNumberThe number of sound samples to extract. A sample contains both the left and right channels — that is, two 32-bit floating-point values. startPositionNumber-1The sample at which extraction begins. If you don't specify a value, the first call to Sound.extract() starts at the beginning of the sound; subsequent calls without a value for startPosition progress sequentially through the file. Extracts raw sound data from a Sound object.

This method is designed to be used when you are working with dynamically generated audio, using a function you assign to the sampleData event for a different Sound object. That is, you can use this method to extract sound data from a Sound object. Then you can write the data to the byte array that another Sound object is using to stream dynamic audio.

The audio data is placed in the target byte array starting from the current position of the byte array. The audio data is always exposed as 44100 Hz Stereo. The sample type is a 32-bit floating-point value, which can be converted to a Number using ByteArray.readFloat().

The following example loads an mp3 file and uses the extract() method of the Sound class to access the audio data.

The mp3 data is loaded into the sourceSnd Sound object. When the application loads the mp3 data, it calls the loaded() function (the event handler for the complete event of the sourceSnd object). A second Sound object, outputSound, is used to play the modified audio. The outputSound object has a sampleData event listener; so the object dispatches periodical sampleData events once you call the play() method of the object. The upOctave() method returns a byte array of modified audio data based on the source audio data. It returns audio that is one octave higher by skipping over every other audio sample in the source data. The event handler for the sampleData event writes the returned byte array to the data property of the outputSound object. The data byte array is appended to the output audio data for the outputSound object.

To test this example, add a test.mp3 file to the same directory as the SWF file.

var sourceSnd:Sound = new Sound();
var outputSnd:Sound = new Sound();
var urlReq:URLRequest = new URLRequest("test.mp3");
sourceSnd.load(urlReq);
sourceSnd.addEventListener(Event.COMPLETE, loaded);

function loaded(event:Event):void
{
    outputSnd.addEventListener(SampleDataEvent.SAMPLE_DATA, processSound);
    outputSnd.play();
}

function processSound(event:SampleDataEvent):void
{
    var bytes:ByteArray = new ByteArray();
    sourceSnd.extract(bytes, 4096);
    event.data.writeBytes(upOctave(bytes));
}

function upOctave(bytes:ByteArray):ByteArray
{
    var returnBytes:ByteArray = new ByteArray();
    bytes.position = 0;
    while (bytes.bytesAvailable > 0)
    {
        // Copy one stereo sample (left and right channel floats)...
        returnBytes.writeFloat(bytes.readFloat());
        returnBytes.writeFloat(bytes.readFloat());
        if (bytes.bytesAvailable > 0)
        {
            // ...then skip the next stereo sample (8 bytes),
            // which raises the pitch by one octave.
            bytes.position += 8;
        }
    }
    return returnBytes;
}
play()sampleData
load Initiates loading of an external MP3 file from the specified URL.A network error caused the load to fail. IOErrorflash.errors:IOErrorLocal untrusted files may not communicate with the Internet. You can work around this by reclassifying this file as local-with-networking or trusted. SecurityErrorSecurityErrorYou cannot connect to commonly reserved ports. For a complete list of blocked ports, see "Restricting Networking APIs" in the ActionScript 3.0 Developer's Guide. SecurityErrorSecurityErrorThe digest property of the stream object is not null. You should only set the digest property of a URLRequest object when calling the URLLoader.load() method when loading a SWZ file (an Adobe platform component). IOErrorflash.errors:IOErrorstreamflash.net:URLRequest A URL that points to an external MP3 file. contextflash.media:SoundLoaderContextnull An optional SoundLoader context object, which can define the buffer time (the minimum number of milliseconds of MP3 data to hold in the Sound object's buffer) and can specify whether the application should check for a cross-domain policy file prior to loading the sound. Initiates loading of an external MP3 file from the specified URL. If you provide a valid URLRequest object to the Sound constructor, the constructor calls Sound.load() for you. You only need to call Sound.load() yourself if you don't pass a valid URLRequest object to the Sound constructor or you pass a null value.

Once load() is called on a Sound object, you can't later load a different sound file into that Sound object. To load a different sound file, create a new Sound object.

When using this method, consider the following security model:

  • Calling Sound.load() is not allowed if the calling file is in the local-with-file-system sandbox and the sound is in a network sandbox.
  • Access from the local-trusted or local-with-networking sandbox requires permission from a website through a URL policy file.
  • You cannot connect to commonly reserved ports. For a complete list of blocked ports, see "Restricting Networking APIs" in the ActionScript 3.0 Developer's Guide.
  • You can prevent a SWF file from using this method by setting the allowNetworking parameter of the object and embed tags in the HTML page that contains the SWF content.

In Flash Player 10 and later, if you use a multipart Content-Type (for example "multipart/form-data") that contains an upload (indicated by a "filename" parameter in a "content-disposition" header within the POST body), the POST operation is subject to the security rules applied to uploads:

  • The POST operation must be performed in response to a user-initiated action, such as a mouse click or key press.
  • If the POST operation is cross-domain (the POST target is not on the same server as the SWF file that is sending the POST request), the target server must provide a URL policy file that permits cross-domain access.

Also, for any multipart Content-Type, the syntax must be valid (according to the RFC 2046 standard). If the syntax appears to be invalid, the POST operation is subject to the security rules applied to uploads.

In Adobe AIR, content in the application security sandbox (content installed with the AIR application) is not restricted by these security limitations.

For more information related to security, see the Flash Player Developer Center Topic: Security.

The following example displays the loading progress of a sound file.

In the constructor, a URLRequest object is created to identify the location of the sound file, which is a podcast from Adobe. The file is loaded in a try...catch block in order to catch any error that may occur while loading. If an I/O error occurs, the errorHandler() method is invoked and the error message is written to the text field intended for the progress report. While a load operation is in progress, ProgressEvent.PROGRESS events are dispatched and the progressHandler() method is called. Here, the ProgressEvent.PROGRESS event is used as a timing mechanism for calculating the load progress.

The progressHandler() method divides the bytesLoaded value passed with the ProgressEvent object by the bytesTotal value to arrive at the percentage of the sound data that has been loaded. It then displays these values in the text field. (Note that if the file is small, cached, or in the local directory, the load progress may not be noticeable.)

package
{
    import flash.display.Sprite;
    import flash.net.URLRequest;
    import flash.media.Sound;
    import flash.text.TextField;
    import flash.text.TextFieldAutoSize;
    import flash.events.ProgressEvent;
    import flash.events.IOErrorEvent;

    public class Sound_loadExample extends Sprite
    {
        private var snd:Sound = new Sound();
        private var statusTextField:TextField = new TextField();

        public function Sound_loadExample()
        {
            statusTextField.autoSize = TextFieldAutoSize.LEFT;
            var req:URLRequest = new URLRequest("http://av.adobe.com/podcast/csbu_dev_podcast_epi_2.mp3");
            try
            {
                snd.load(req);
                snd.play();
            }
            catch (err:Error)
            {
                trace(err.message);
            }
            snd.addEventListener(IOErrorEvent.IO_ERROR, errorHandler);
            snd.addEventListener(ProgressEvent.PROGRESS, progressHandler);
            this.addChild(statusTextField);
        }

        private function progressHandler(event:ProgressEvent):void
        {
            var loadTime:Number = event.bytesLoaded / event.bytesTotal;
            var loadPercent:uint = Math.round(100 * loadTime);
            statusTextField.text = "Sound file's size in bytes: " + event.bytesTotal + "\n"
                + "Bytes being loaded: " + event.bytesLoaded + "\n"
                + "Percentage of sound file that is loaded " + loadPercent + "%.\n";
        }

        private function errorHandler(errorEvent:IOErrorEvent):void
        {
            statusTextField.text = "The sound could not be loaded: " + errorEvent.text;
        }
    }
}
play Generates a new SoundChannel object to play back the sound.A SoundChannel object, which you use to control the sound. This method returns null if you have no sound card or if you run out of available sound channels. The maximum number of sound channels available at once is 32. flash.media:SoundChannelstartTimeNumber0The initial position in milliseconds at which playback should start. loopsint0Defines the number of times a sound loops back to the startTime value before the sound channel stops playback. sndTransformflash.media:SoundTransformnullThe initial SoundTransform object assigned to the sound channel. Generates a new SoundChannel object to play back the sound. This method returns a SoundChannel object, which you access to stop the sound and to monitor volume. (To control the volume, panning, and balance, access the SoundTransform object assigned to the sound channel.) In the following example, once the file is loaded, the user can use a graphic bar to select the starting position (start time) of the sound file.

The constructor calls the Sound.load() method to start loading the sound data, and then calls the Sound.play() method, which starts playing the sound as soon as enough data has loaded. The Sound.play() method returns a SoundChannel object that can be used to control playback of the sound. The text field displays the instructions. To make sure the data at the position where the user wants the sound to start has already been loaded, the bar Sprite object is created and displayed only after the file has finished loading. An Event.COMPLETE event is dispatched when the file is successfully loaded, which triggers the completeHandler() method. The completeHandler() method then creates the bar and adds it to the display list. (A Sprite object is used instead of a Shape object to support interactivity.) When the user clicks the bar, the clickHandler() method is triggered.

In the clickHandler() method, the x coordinate of the user's click, event.localX, is used to determine where the user wants the file to start. Since the bar is 100 pixels wide and starts at x coordinate 100, it is easy to determine the percentage of the position. Also, since the file is fully loaded, the length property of the sound holds the length of the complete file in milliseconds. Using the length of the sound file and the position on the bar, a starting position for the sound file is determined. After the sound is stopped, playback restarts at the selected starting position, which is passed as the startTime parameter to the play() method.

package
{
    import flash.display.Sprite;
    import flash.display.Graphics;
    import flash.events.MouseEvent;
    import flash.media.Sound;
    import flash.net.URLRequest;
    import flash.media.SoundChannel;
    import flash.events.ProgressEvent;
    import flash.events.Event;
    import flash.text.TextField;
    import flash.text.TextFieldAutoSize;
    import flash.events.IOErrorEvent;

    public class Sound_playExample1 extends Sprite
    {
        private var snd:Sound = new Sound();
        private var channel:SoundChannel = new SoundChannel();
        private var infoTextField:TextField = new TextField();

        public function Sound_playExample1()
        {
            var req:URLRequest = new URLRequest("MySound.mp3");
            infoTextField.autoSize = TextFieldAutoSize.LEFT;
            infoTextField.text = "Please wait for the file to be loaded.\n"
                + "Then select from the bar to decide where the file should start.";
            snd.load(req);
            channel = snd.play();
            snd.addEventListener(IOErrorEvent.IO_ERROR, errorHandler);
            snd.addEventListener(Event.COMPLETE, completeHandler);
            this.addChild(infoTextField);
        }

        private function completeHandler(event:Event):void
        {
            infoTextField.text = "File is ready.";
            var bar:Sprite = new Sprite();
            bar.graphics.lineStyle(5, 0xFF0000);
            bar.graphics.moveTo(100, 100);
            bar.graphics.lineTo(200, 100);
            bar.addEventListener(MouseEvent.CLICK, clickHandler);
            this.addChild(bar);
        }

        private function clickHandler(event:MouseEvent):void
        {
            var position:uint = event.localX;
            var percent:uint = Math.round(position) - 100;
            var cue:uint = (percent / 100) * snd.length;
            channel.stop();
            channel = snd.play(cue);
        }

        private function errorHandler(errorEvent:IOErrorEvent):void
        {
            infoTextField.text = "The sound could not be loaded: " + errorEvent.text;
        }
    }
}
In the following example, depending on whether the user single-clicks or double-clicks a button, the sound plays once or twice.

In the constructor, the sound is loaded and a simple rectangle button sprite object is created. (A sprite object is used instead of a shape object to support interactivity.) Here, it is assumed that the sound file is in the same directory as the SWF file. (There is no error handling code for this example.)

Two event listeners are set up to respond to single clicks and double clicks. If the user clicks once, the clickHandler() method is invoked, which plays the sound. If the user double-clicks the button, the doubleClickHandler() method is invoked, which plays the sound file twice. The second argument of the play() method is set to 1, which means the sound loops back once to the starting time of the sound and plays again. The starting time, the first argument, is set to 0, meaning the file plays from the beginning.

package
{
    import flash.display.Sprite;
    import flash.events.MouseEvent;
    import flash.media.Sound;
    import flash.net.URLRequest;

    public class Sound_playExample2 extends Sprite
    {
        private var button:Sprite = new Sprite();
        private var snd:Sound = new Sound();

        public function Sound_playExample2()
        {
            var req:URLRequest = new URLRequest("click.mp3");
            snd.load(req);
            button.graphics.beginFill(0x00FF00);
            button.graphics.drawRect(10, 10, 50, 30);
            button.graphics.endFill();
            button.addEventListener(MouseEvent.CLICK, clickHandler);
            button.addEventListener(MouseEvent.DOUBLE_CLICK, doubleClickHandler);
            this.addChild(button);
        }

        private function clickHandler(event:MouseEvent):void
        {
            snd.play();
        }

        private function doubleClickHandler(event:MouseEvent):void
        {
            // Play from the beginning and loop back once, so the
            // sound plays twice in total.
            snd.play(0, 1);
        }
    }
}
The following example displays the loading and playing progress of a sound file.

In the constructor, the file is loaded in a try...catch block in order to catch any error that may occur while loading the file. A listener is added to the sound object that will respond to an IOErrorEvent event by calling the errorHandler() method. Another listener is added for the main application that will respond to an Event.ENTER_FRAME event, which is used as the timing mechanism for showing playback progress. Finally, a third listener is added for the sound channel that will respond to an Event.SOUND_COMPLETE event (when the sound has finished playing), by calling the soundCompleteHandler() method. The soundCompleteHandler() method also removes the event listener for the Event.ENTER_FRAME event.

The enterFrameHandler() method divides the sound object's bytesLoaded value by its bytesTotal value to arrive at the percentage of the sound data that has been loaded. The percentage of the sound data that has been played could be determined by dividing the value of the sound channel's position property by the length of the sound data. However, if the sound data is not fully loaded, the length property of the sound object reflects only the size of the sound data that is currently loaded. An estimate of the eventual length of the full sound file is calculated by dividing the sound object's current length by the fraction of the file that has loaded (the bytesLoaded value divided by the bytesTotal value).

Note that if the file is small, cached, or in the local directory, the load progress may not be noticeable. Also, the lag time between when the sound data starts loading and when the loaded data starts playing is determined by the value of the SoundLoaderContext.bufferTime property, which defaults to 1000 milliseconds and can be reset.

package
{
    import flash.display.Sprite;
    import flash.net.URLRequest;
    import flash.media.Sound;
    import flash.media.SoundChannel;
    import flash.text.TextField;
    import flash.text.TextFieldAutoSize;
    import flash.events.Event;
    import flash.events.IOErrorEvent;

    public class Sound_playExample3 extends Sprite
    {
        private var snd:Sound = new Sound();
        private var channel:SoundChannel;
        private var statusTextField:TextField = new TextField();

        public function Sound_playExample3()
        {
            statusTextField.autoSize = TextFieldAutoSize.LEFT;
            var req:URLRequest = new URLRequest("http://av.adobe.com/podcast/csbu_dev_podcast_epi_2.mp3");
            try
            {
                snd.load(req);
                channel = snd.play();
            }
            catch (err:Error)
            {
                trace(err.message);
            }
            snd.addEventListener(IOErrorEvent.IO_ERROR, errorHandler);
            addEventListener(Event.ENTER_FRAME, enterFrameHandler);
            channel.addEventListener(Event.SOUND_COMPLETE, soundCompleteHandler);
            this.addChild(statusTextField);
        }

        private function enterFrameHandler(event:Event):void
        {
            var loadTime:Number = snd.bytesLoaded / snd.bytesTotal;
            var loadPercent:uint = Math.round(100 * loadTime);
            var estimatedLength:int = Math.ceil(snd.length / loadTime);
            var playbackPercent:uint = Math.round(100 * (channel.position / estimatedLength));
            statusTextField.text = "Sound file's size is " + snd.bytesTotal + " bytes.\n"
                + "Bytes being loaded: " + snd.bytesLoaded + "\n"
                + "Percentage of sound file that is loaded " + loadPercent + "%.\n"
                + "Sound playback is " + playbackPercent + "% complete.";
        }

        private function errorHandler(errorEvent:IOErrorEvent):void
        {
            statusTextField.text = "The sound could not be loaded: " + errorEvent.text;
        }

        private function soundCompleteHandler(event:Event):void
        {
            statusTextField.text = "The sound has finished playing.";
            removeEventListener(Event.ENTER_FRAME, enterFrameHandler);
        }
    }
}
SoundChannel.stop()SoundMixer.stopAll()
bytesLoaded Returns the currently available number of bytes in this sound object.uint Returns the currently available number of bytes in this sound object. This property is usually useful only for externally loaded files. bytesTotal Returns the total number of bytes in this sound object.int Returns the total number of bytes in this sound object. id3 Provides access to the metadata that is part of an MP3 file.flash.media:ID3Info Provides access to the metadata that is part of an MP3 file.

MP3 sound files can contain ID3 tags, which provide metadata about the file. If an MP3 sound that you load using the Sound.load() method contains ID3 tags, you can query these properties. Only ID3 tags that use the UTF-8 character set are supported.

Flash Player 9 and later and AIR support ID3 2.0 tags, specifically 2.3 and 2.4. The following tables list the standard ID3 2.0 tags and the type of content the tags represent. The Sound.id3 property provides access to these tags through the format my_sound.id3.COMM, my_sound.id3.TIME, and so on. The first table describes tags that can be accessed either through the ID3 2.0 property name or the ActionScript property name. The second table describes ID3 tags that are supported but do not have predefined properties in ActionScript.

ID3 2.0 tag    Corresponding Sound class property
COMM           Sound.id3.comment
TALB           Sound.id3.album
TCON           Sound.id3.genre
TIT2           Sound.id3.songName
TPE1           Sound.id3.artist
TRCK           Sound.id3.track
TYER           Sound.id3.year

The following table describes ID3 tags that are supported but do not have predefined properties in the Sound class. You access them by calling mySound.id3.TFLT, mySound.id3.TIME, and so on. NOTE: None of these tags are supported in Flash Lite 4.

Property    Description
TFLT        File type
TIME        Time
TIT1        Content group description
TIT2        Title/song name/content description
TIT3        Subtitle/description refinement
TKEY        Initial key
TLAN        Languages
TLEN        Length
TMED        Media type
TOAL        Original album/movie/show title
TOFN        Original filename
TOLY        Original lyricists/text writers
TOPE        Original artists/performers
TORY        Original release year
TOWN        File owner/licensee
TPE1        Lead performers/soloists
TPE2        Band/orchestra/accompaniment
TPE3        Conductor/performer refinement
TPE4        Interpreted, remixed, or otherwise modified by
TPOS        Part of a set
TPUB        Publisher
TRCK        Track number/position in set
TRDA        Recording dates
TRSN        Internet radio station name
TRSO        Internet radio station owner
TSIZ        Size
TSRC        ISRC (international standard recording code)
TSSE        Software/hardware and settings used for encoding
TYER        Year
WXXX        URL link frame

When using this property, consider the Flash Player security model:

  • The id3 property of a Sound object is always permitted for SWF files that are in the same security sandbox as the sound file. For files in other sandboxes, there are security checks.
  • When you load the sound, using the load() method of the Sound class, you can specify a context parameter, which is a SoundLoaderContext object. If you set the checkPolicyFile property of the SoundLoaderContext object to true, Flash Player checks for a URL policy file on the server from which the sound is loaded. If a policy file exists and permits access from the domain of the loading SWF file, then the file is allowed to access the id3 property of the Sound object; otherwise it is not.

However, in Adobe AIR, content in the application security sandbox (content installed with the AIR application) is not restricted by these security limitations.

For more information related to security, see the Flash Player Developer Center Topic: Security.

The following example reads the ID3 information from a sound file and displays it in a text field.

In the constructor, the sound file is loaded, but it is not set to play. Here, it is assumed that the file is in the same directory as the SWF file. The system must have permission to read the ID3 tags of a loaded sound file. If there is ID3 information in the file and the program is permitted to read it, an Event.ID3 event is dispatched and the id3 property of the sound object is populated. The id3 property contains an ID3Info object with all of the ID3 information.

In the id3Handler() method, the file's ID3 tags are stored in id3, an ID3Info object. A text field is instantiated to display the list of ID3 tags. The for loop iterates through all the ID3 2.0 tags and appends the name and value to the content of the text field. Using ID3Info properties, the artist, song name, and album are also appended. ActionScript 3.0 and Flash Player 9 and later support ID3 2.0 tags, specifically 2.3 and 2.4. If you iterate through the properties as in the for loop, only ID3 2.0 tags appear. However, data from earlier versions is also stored in the song's id3 property and can be accessed through the ID3Info class properties. ID3 1.0 tags are at the end of the file, while ID3 2.0 tags are at the beginning of the file. (Sometimes files carry both earlier- and later-version tags, in those same places.) If a file is encoded with both version 1.0 and 2.0 tags, at the end and the beginning of the file respectively, the id3Handler() method is invoked twice: it first reads the version 2.0 tags and then the version 1.0 tags. If only an ID3 1.0 tag is available, the information is accessible through the ID3Info properties, such as id3.songName. For ID3 2.0, the id3.TIT2 property retrieves the song name using the ID3 2.0 tag (TIT2).

Note that no error handling is written for this example and if the ID3 content is long, the result may go beyond the viewable area.

package
{
    import flash.display.Sprite;
    import flash.media.Sound;
    import flash.net.URLRequest;
    import flash.media.ID3Info;
    import flash.text.TextField;
    import flash.text.TextFieldAutoSize;
    import flash.events.Event;

    public class Sound_id3Example extends Sprite
    {
        private var snd:Sound = new Sound();
        private var myTextField:TextField = new TextField();

        public function Sound_id3Example()
        {
            snd.addEventListener(Event.ID3, id3Handler);
            snd.load(new URLRequest("mySound.mp3"));
        }

        private function id3Handler(event:Event):void
        {
            var id3:ID3Info = snd.id3;
            myTextField.autoSize = TextFieldAutoSize.LEFT;
            myTextField.border = true;
            myTextField.appendText("Received ID3 Info: \n");
            for (var propName:String in id3)
            {
                myTextField.appendText(propName + " = " + id3[propName] + "\n");
            }
            myTextField.appendText("\n" + "Artist: " + id3.artist + "\n");
            myTextField.appendText("Song name: " + id3.songName + "\n");
            myTextField.appendText("Album: " + id3.album + "\n\n");
            this.addChild(myTextField);
        }
    }
}
SoundLoaderContext.checkPolicyFile
isBuffering Returns the buffering state of external MP3 files.Boolean Returns the buffering state of external MP3 files. If the value is true, any playback is currently suspended while the object waits for more data. isURLInaccessible Indicates if the Sound.url property has been truncated.Boolean Indicates if the Sound.url property has been truncated. When the isURLInaccessible value is true the Sound.url value is only the domain of the final URL from which the sound loaded. For example, the property is truncated if the sound is loaded from http://www.adobe.com/assets/hello.mp3, and the Sound.url property has the value http://www.adobe.com. The isURLInaccessible value is true only when all of the following are also true:
  • An HTTP redirect occurred while loading the sound file.
  • The SWF file calling Sound.load() is from a different domain than the sound file's final URL.
  • The SWF file calling Sound.load() does not have permission to access the sound file. Permission is granted to access the sound file the same way permission is granted for the Sound.id3 property: establish a policy file and use the SoundLoaderContext.checkPolicyFile property.

Note: The isURLInaccessible property was added for Flash Player 10.1 and AIR 2.0. However, this property is made available to SWF files of all versions when the Flash runtime supports it. So, using some authoring tools in "strict mode" causes a compilation error. To work around the error use the indirect syntax mySound["isURLInaccessible"], or disable strict mode. If you are using Flash Professional CS5 or Flex SDK 4.1, you can use and compile this API for runtimes released before Flash Player 10.1 and AIR 2.

For application content in AIR, the value of this property is always false.

urlid3flash.media.SoundLoaderContext.checkPolicyFile
length The length of the current sound in milliseconds.Number The length of the current sound in milliseconds. url The URL from which this sound was loaded.String The URL from which this sound was loaded. This property is applicable only to Sound objects that were loaded using the Sound.load() method. For Sound objects that are associated with a sound asset from a SWF file's library, the value of the url property is null.

When you first call Sound.load(), the url property initially has a value of null, because the final URL is not yet known. The url property will have a non-null value as soon as an open event is dispatched from the Sound object.

The url property contains the final, absolute URL from which a sound was loaded. The value of url is usually the same as the value passed to the stream parameter of Sound.load(). However, if you passed a relative URL to Sound.load() the value of the url property represents the absolute URL. Additionally, if the original URL request is redirected by an HTTP server, the value of the url property reflects the final URL from which the sound file was actually downloaded. This reporting of an absolute, final URL is equivalent to the behavior of LoaderInfo.url.

In some cases, the value of the url property is truncated; see the isURLInaccessible property for details.
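The timing described above can be sketched as follows; the file name is hypothetical, and the handler simply traces the resolved URL:

```actionscript
import flash.events.Event;
import flash.media.Sound;
import flash.net.URLRequest;

var snd:Sound = new Sound();
snd.addEventListener(Event.OPEN, openHandler);
snd.load(new URLRequest("song.mp3")); // hypothetical relative URL
trace(snd.url); // null: the final URL is not yet known

function openHandler(event:Event):void
{
    // By the time the open event fires, url holds the absolute,
    // final URL (after any HTTP redirects).
    trace(snd.url);
}
```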

load()flash.display.LoaderInfo.urlisURLInaccessible
MicrophoneEnhancedMode The MicrophoneEnhancedMode class is an enumeration of constant values used in setting the mode property of the MicrophoneEnhancedOptions class.Object The MicrophoneEnhancedMode class is an enumeration of constant values used in setting the mode property of the MicrophoneEnhancedOptions class. flash.media.MicrophoneEnhancedOptionsFULL_DUPLEX Use this mode to allow both parties to talk at the same time.fullDuplexString Use this mode to allow both parties to talk at the same time. Acoustic echo cancellation operates in full-duplex mode. Full-duplex mode is the highest quality echo cancellation. This mode requires high-quality microphones and speakers and the most computing power. Do not use this mode with a USB microphone. HALF_DUPLEX Use this mode for older and lower-quality speakers and microphones.halfDuplexString Use this mode for older and lower-quality speakers and microphones. Acoustic echo cancellation operates in half-duplex mode. In half-duplex mode, only one party can speak at a time. Half-duplex mode requires simpler processing than full-duplex mode. Half-duplex mode is the default mode for USB microphone devices.

If the application uses the default enhancedOptions setting and a USB mic, Flash Player automatically switches to halfDuplex mode. If the application uses the default enhancedOptions setting and the built-in microphone, Flash Player uses fullDuplex mode.

HEADSET Use this mode when both parties are using headsets.headsetString Use this mode when both parties are using headsets. Acoustic echo cancellation operates in low-echo mode. This mode requires the least amount of computing power. OFF All enhanced audio functionality is turned off.offString All enhanced audio functionality is turned off. SPEAKER_MUTE Use this mode when the speaker is muted.speakerMuteString Use this mode when the speaker is muted. Acoustic echo cancellation is turned off. Enhanced audio performs noise suppression or automatic gain control (if enabled). flash.media.MicrophoneEnhancedOptions.autoGain
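As a minimal sketch of how these constants are typically applied (assuming a runtime that supports enhanced audio, Flash Player 10.3 or later):

```actionscript
import flash.media.Microphone;
import flash.media.MicrophoneEnhancedMode;
import flash.media.MicrophoneEnhancedOptions;

// getEnhancedMicrophone() returns a device with acoustic echo
// cancellation enabled, or null if enhanced audio is unavailable.
var mic:Microphone = Microphone.getEnhancedMicrophone();
if (mic != null)
{
    var options:MicrophoneEnhancedOptions = new MicrophoneEnhancedOptions();
    options.mode = MicrophoneEnhancedMode.HEADSET; // both parties wear headsets
    mic.enhancedOptions = options;
}
```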
Video The Video class displays live or recorded video in an application without embedding the video in your SWF file.flash.display:DisplayObject The Video class displays live or recorded video in an application without embedding the video in your SWF file. This class creates a Video object that plays either of the following kinds of video: recorded video files stored on a server or locally, or live video captured by the user. A Video object is a display object on the application's display list and represents the visual space in which the video runs in a user interface.

When used with Flash Media Server, the Video object allows you to send live video captured by a user to the server and then broadcast it from the server to other users. Using these features, you can develop media applications such as a simple video player, a video player with multipoint publishing from one server to another, or a video sharing application for a user community.

Flash Player 9 and later supports publishing and playback of FLV files encoded with either the Sorenson Spark or On2 VP6 codec and also supports an alpha channel. The On2 VP6 video codec uses less bandwidth than older technologies and offers additional deblocking and deringing filters. See the flash.net.NetStream class for more information about video playback and supported formats.

Flash Player 9.0.115.0 and later supports mipmapping to optimize runtime rendering quality and performance. For video playback, Flash Player uses mipmapping optimization if you set the Video object's smoothing property to true.

As with other display objects on the display list, you can control various properties of Video objects. For example, you can move the Video object around on the Stage by using its x and y properties, you can change its size using its height and width properties, and so on.

To play a video stream, use attachCamera() or attachNetStream() to attach the video to the Video object. Then, add the Video object to the display list using addChild().

If you are using Flash Professional, you can also place the Video object on the Stage rather than adding it with addChild(), like this:

  1. If the Library panel isn't visible, select Window > Library to display it.
  2. Add an embedded Video object to the library by clicking the Options menu on the right side of the Library panel title bar and selecting New Video.
  3. In the Video Properties dialog box, name the embedded Video object for use in the library and click OK.
  4. Drag the Video object to the Stage and use the Property Inspector to give it a unique instance name, such as my_video. (Do not name it Video.)

In AIR applications on the desktop, playing video in fullscreen mode disables any power and screen saving features (when allowed by the operating system).

Note: The Video class is not a subclass of the InteractiveObject class, so it cannot dispatch mouse events. However, you can call the addEventListener() method on the display object container that contains the Video object.

The following example uses a Video object with the NetConnection and NetStream classes to load and play an FLV file. To run this example, you need an FLV file whose name and location match the variable passed to videoURL, in this case, an FLV file called Video.flv that is in the same directory as the SWF file.

In this example, the code that creates the Video and NetStream objects and calls Video.attachNetStream() and NetStream.play() is placed in a handler function. The handler is called only if the attempt to connect to the NetConnection object is successful, which is when the netStatus event returns an info object with a code property that indicates success. It is recommended that you wait for a successful connection before calling NetStream.play().

package
{
    import flash.display.Sprite;
    import flash.events.*;
    import flash.media.Video;
    import flash.net.NetConnection;
    import flash.net.NetStream;

    public class VideoExample extends Sprite
    {
        private var videoURL:String = "Video.flv";
        private var connection:NetConnection;
        private var stream:NetStream;

        public function VideoExample()
        {
            connection = new NetConnection();
            connection.addEventListener(NetStatusEvent.NET_STATUS, netStatusHandler);
            connection.addEventListener(SecurityErrorEvent.SECURITY_ERROR, securityErrorHandler);
            connection.connect(null);
        }

        private function netStatusHandler(event:NetStatusEvent):void
        {
            switch (event.info.code)
            {
                case "NetConnection.Connect.Success":
                    connectStream();
                    break;
                case "NetStream.Play.StreamNotFound":
                    trace("Unable to locate video: " + videoURL);
                    break;
            }
        }

        private function connectStream():void
        {
            stream = new NetStream(connection);
            stream.addEventListener(NetStatusEvent.NET_STATUS, netStatusHandler);
            stream.addEventListener(AsyncErrorEvent.ASYNC_ERROR, asyncErrorHandler);
            var video:Video = new Video();
            video.attachNetStream(stream);
            stream.play(videoURL);
            addChild(video);
        }

        private function securityErrorHandler(event:SecurityErrorEvent):void
        {
            trace("securityErrorHandler: " + event);
        }

        private function asyncErrorHandler(event:AsyncErrorEvent):void
        {
            // Ignore AsyncErrorEvent events.
        }
    }
}
See also: attachCamera(), attachNetStream(), flash.media.Camera.getCamera(), flash.net.NetConnection, flash.net.NetStream, flash.display.DisplayObjectContainer.addChild(), flash.display.Stage.addChild(), "Working with Video"

Video Creates a new Video instance.

Parameters:
  width:int (default = 320) — The width of the video, in pixels.
  height:int (default = 240) — The height of the video, in pixels.

If no values for the width and height parameters are supplied, the default values are used. You can also set the width and height properties of the Video object after the initial construction, using Video.width and Video.height. When a new Video object is created, values of zero for width or height are not allowed; if you pass zero, the defaults are applied.

After creating the Video, call the DisplayObjectContainer.addChild() or DisplayObjectContainer.addChildAt() method to add the Video object to a parent DisplayObjectContainer object.

The following example shows how to load an external FLV file:

var MyVideo:Video = new Video();
addChild(MyVideo);
var MyNC:NetConnection = new NetConnection();
MyNC.connect(null);
var MyNS:NetStream = new NetStream(MyNC);
MyNS.play("http://www.helpexamples.com/flash/video/clouds.flv");
MyVideo.attachNetStream(MyNS);
// The clouds.flv video has metadata we're not using, so create
// an error handler to ignore the message generated by the runtime
// about the metadata.
MyNS.addEventListener(AsyncErrorEvent.ASYNC_ERROR, asyncErrorHandler);
function asyncErrorHandler(event:AsyncErrorEvent):void {
    // Ignore the metadata error message.
}
attachCamera Specifies a video stream from a camera to be displayed within the boundaries of the Video object in the application.

Parameters:
  camera:flash.media:Camera — A Camera object that is capturing video data. To drop the connection to the Video object, pass null.

Use this method to attach live video captured by the user to the Video object. You can play the live video locally on the same computer or device on which it is being captured, or you can send it to Flash Media Server and use the server to stream it to other users.

Note: In an iOS AIR application, camera video cannot be displayed when the application uses GPU rendering mode.

Please see the Camera.getCamera() method example for an illustration of how to use this method.
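A minimal sketch of the pattern follows; it assumes the user grants camera access, and notes that Camera.getCamera() returns null when no camera is available:

    // Display the local camera feed in a Video object.
    var camera:Camera = Camera.getCamera();
    if (camera != null) {
        // Size the Video object to match the capture dimensions.
        var video:Video = new Video(camera.width, camera.height);
        video.attachCamera(camera); // show the live feed locally
        addChild(video);
    } else {
        trace("No camera found.");
    }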
See also: Video.attachNetStream(), flash.media.Camera
attachNetStream Specifies a video stream to be displayed within the boundaries of the Video object in the application.

Parameters:
  netStream:flash.net:NetStream — A NetStream object. To drop the connection to the Video object, pass null.

The video stream is either a video file played with NetStream.play(), a Camera object, or null. If you use a video file, it can be stored on the local file system or on Flash Media Server. If the value of the netStream argument is null, the video is no longer played in the Video object.

You do not need to use this method if a video file contains only audio; the audio portion of video files is played automatically when you call NetStream.play(). To control the audio associated with a video file, use the soundTransform property of the NetStream object that plays the video file.
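As a brief sketch of controlling that audio, the following assumes a NetStream named stream is already playing a file; note that the soundTransform getter returns a copy, so the modified object must be assigned back:

    // Lower the volume of the audio associated with a video file.
    var transform:SoundTransform = stream.soundTransform; // returns a copy
    transform.volume = 0.5;            // 50% volume
    stream.soundTransform = transform; // reassign to apply the change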

Please see the example at the end of this class for an illustration of how to use this method.
See also: Video.attachCamera(), flash.net.NetStream.soundTransform, flash.net.NetStream.play(), SoundTransform
clear Clears the image currently displayed in the Video object (not the video stream). This method is useful for handling the current image. For example, you can clear the last image or display standby information without hiding the Video object.

See also: Video.attachCamera()

deblocking:int Indicates the type of filter applied to decoded video as part of post-processing. The default value is 0, which lets the video compressor apply a deblocking filter as needed.

Compression of video can result in undesired artifacts. You can use the deblocking property to set filters that reduce blocking and, for video compressed using the On2 codec, ringing.

Blocking refers to visible imperfections between the boundaries of the blocks that compose each video frame. Ringing refers to distorted edges around elements within a video image.

Two deblocking filters are available: one in the Sorenson codec and one in the On2 VP6 codec. In addition, a deringing filter is available when you use the On2 VP6 codec. To set a filter, use one of the following values:

  • 0—Lets the video compressor apply the deblocking filter as needed.
  • 1—Does not use a deblocking filter.
  • 2—Uses the Sorenson deblocking filter.
  • 3—For On2 video only, uses the On2 deblocking filter but no deringing filter.
  • 4—For On2 video only, uses the On2 deblocking and deringing filter.
  • 5—For On2 video only, uses the On2 deblocking and a higher-performance On2 deringing filter.

If you select a value greater than 2 for video compressed with the Sorenson codec, the Sorenson decoder defaults to 2.

Using a deblocking filter affects overall playback performance, and it is usually not necessary for high-bandwidth video. If a user's system is not powerful enough, the user may experience difficulties playing back video with a deblocking filter enabled.
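For example, to force the Sorenson deblocking filter rather than letting the compressor decide (a minimal sketch):

    var video:Video = new Video();
    video.deblocking = 2; // 2 = always use the Sorenson deblocking filter
    // The default, 0, lets the video compressor apply the filter as needed:
    // video.deblocking = 0;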

smoothing:Boolean Specifies whether the video should be smoothed (interpolated) when it is scaled. For smoothing to work, the runtime must be in high-quality mode (the default). The default value is false (no smoothing).

For video playback using Flash Player 9.0.115.0 and later versions, set this property to true to take advantage of mipmapping image optimization.
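A short sketch of scaling a video up with smoothing enabled; it assumes a NetStream named stream already exists:

    var video:Video = new Video();
    video.attachNetStream(stream); // assumes an existing NetStream
    video.smoothing = true;        // interpolate pixels when scaled
    video.width = 640;             // display larger than the native size
    video.height = 480;
    addChild(video);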

videoHeight:int An integer specifying the height of the video stream, in pixels. For live streams, this value is the same as the Camera.height property of the Camera object that is capturing the video stream. For recorded video files, this value is the height of the video.

You may want to use this property, for example, to ensure that the user is seeing the video at the same size at which it was captured, regardless of the actual size of the Video object on the Stage.

See also: flash.media.Camera.height
videoWidth:int An integer specifying the width of the video stream, in pixels. For live streams, this value is the same as the Camera.width property of the Camera object that is capturing the video stream. For recorded video files, this value is the width of the video.

You may want to use this property, for example, to ensure that the user is seeing the video at the same size at which it was captured, regardless of the actual size of the Video object on the Stage.
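One way to do this, sketched below, is to resize the Video object once metadata arrives; videoWidth and videoHeight are 0 until the stream has data, so reading them in an onMetaData callback is a common pattern (the variable names video and stream are assumed to refer to existing Video and NetStream objects):

    // Size the Video object to the stream's native dimensions.
    var client:Object = {};
    client.onMetaData = function(info:Object):void {
        video.width = video.videoWidth;   // match the captured/encoded width
        video.height = video.videoHeight; // match the captured/encoded height
    };
    stream.client = client;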

See also: flash.media.Camera.width