Monday, June 27, 2005

List of Camera Phones

Nokia 6680
* Two integrated cameras: 1.3 megapixel and VGA
* High-speed connections with 3G and EDGE
* Two-way video call capability for face-to-face communication
* Video sharing capability
* Nokia XpressPrint printing solution including PictBridge direct printing
* Fast, convenient email access
* High-resolution, 262,144-color display
* Music player with stereo audio

Sony Ericsson V800
Features of the Sony Ericsson V800 include:
* Video calling
* 3G entertainment services - live video streaming (news, sport, etc)
* TFT colour internal screen (176 x 220 pixels, 262,144 colours)
* Rotating camera - 1.3 megapixels, plus video camera
* MP3 player
* Portable stereo handset included
* Connectivity: Bluetooth™, InfraRed & cable (USB & RS232)
* Multi-tasking
* Memory: Memory Stick PRO Duo™ (32Mbyte card supplied)
* GPRS, WAP 2.0

Sony Ericsson P910i
Features of the Sony Ericsson P910i include:
* Colour display (262,000 colours; 208 x 320 pixels)
* WAP 1.2.1 (GPRS), built-in web browser
* High Speed Data (HSCSD - up to 28.8 kbps)
* Integrated VGA digital camera (640 x 480 resolution)
* Integrated camcorder
* Enhanced Messaging Service (EMS) & Multimedia Messaging Service (MMS) - send pictures, sound and video like text messages
* MP3 audio & video (MPEG4) player
* Pen-based user interface & handwriting recognition
* Symbian operating system
* 64 Mbytes memory plus external Memory Stick Duo™
* Bluetooth™, infrared, USB & RS232 cable connectivity

Nokia 6630
Features of the Nokia 6630 include:
* Series 60 smartphone with 3G functionality
* Display: Active Matrix TFT display with 65,536 colours, 176 x 208 pixels
* 1.3 megapixel camera (1280 x 960 pixels) with 6x smooth digital zoom
* Video recording up to one hour per clip (176 x 144 pixels)
* Video editing with Nokia Video Editor, Movie Director application for automated fun video production
* Real Player for playing video and music (including MPEG4, MP3, AAC and other formats)
* Fast data: 3G (WCDMA) & EDGE broadband internet (WCDMA: maximum download 384 kbps, upload 128 kbps; EDGE: maximum download 236.8 kbps, upload 118.4 kbps)
* Java™ applications - MIDP 2.0, CLDC 1.1, 3D API (JSR-184)
* WAP 2.0, EGPRS
* Connectivity: Bluetooth™ (full support for data and headsets), USB 2.0, IrDa
* Memory: 10 Mbyte user memory + 64 Mbyte hot swap reduced size MMC card

Samsung D500
Features of the Samsung D500 include:
* 1.3 megapixel camera (1280 x 1024 pixels) with flash and digital zoom
* Video recording & messaging (176 x 144 pixels)
* 262,144 colour TFT LCD display (176 x 220 pixels)
* Music player (MP3/AAC)
* Java™ games (3 embedded games)
* Connectivity: Bluetooth™, USB, IrDa, SyncML
* Memory: 80 Mbytes
* WAP 2.0, GPRS class 10

Sony Ericsson K700i
Features of the Sony Ericsson K700i include:
* Colour display (65,536 colours, 176 x 220 pixels)
* Integrated VGA camera with video recorder (including sound)
* Bluetooth™, infrared and cable (USB & RS232) connectivity
* Downloadable Java™ games
* FM radio and MP3 player
* WAP 2.0 (GPRS), email
* High Speed Data (HSCSD - up to 28.8 kbps)
* Voice memo / recorder
* Memory: 41 Mbytes

Sony Ericsson S700i
Features of the Sony Ericsson S700i include:
* 1.3 megapixel camera (1280 x 960 pixels) with video recorder (176 x 144 resolution) and sound recorder
* Video playback and download
* 262k colour TFT screen (240 x 320 pixels)
* Connectivity: Bluetooth™, infrared, USB, RS232 cable
* WAP 2.0, GPRS, high-speed data
* Memory: 32 Mbytes, plus external memory cards (32 Mbytes supplied)

Motorola V3
Features of the Motorola V3 include:
* Internal display: 176 x 220 pixels (34.8 x 43.5mm), up to 262k TFT colours, with graphic accelerator
* External display: 96 x 80 pixels, 4k CSTN colours
* Integrated VGA camera (640 x 480 resolution) with 4x digital zoom
* Video clip playback (MPEG4 with sound)
* Bluetooth™ wireless connectivity, plus enhanced mini-USB
* WAP 2.0, GPRS
* Internal memory: up to 5 Mbytes

Samsung E720
Features of the Samsung E720 include:
* Main display: TFT, 262k colours, 176 x 220 pixels
* External display: OLED, 65k colours, 96 x 96 pixels
* Megapixel camera (1152 x 864 pixels) with digital zoom (x4), effects and flash
* Video recorder with sound (352 x 288 pixels; up to 1 hour)
* Voice recorder
* MP3 player (supports AAC/AAC+ formats)
* Connectivity: Bluetooth, USB, SyncML
* WAP 2.0, GPRS class 10
* Memory: 88.5 Mbytes total (80 Mbytes for images, sound & video; 4 Mbytes for Java; 3 Mbytes for email; 1.5 Mbytes for MMS; 200 SMS)

Nokia 6230
Features of the Nokia 6230 include:
* Integrated digital camera (640 x 480 resolution)
* Video recorder and player (record up to 2 MB; equal to 5 min, subject to available memory)
* MMS (Multimedia Messaging Service)
* Bluetooth
* 6 Mbytes shared memory (expandable)
* Built-in stereo FM radio
* Music player for MP3 and AAC files
* WAP 2.0, GPRS, HSCSD & EDGE (max 236.8 kbits/s download)
* Built-in infra-red modem

Image Capture From Webcams using the Java Media Framework API


Java Media Framework (JMF) API 1.0 was intended to enable the playback of audio, video, and other time-based media from Java technology applets and applications. Version 2.0 added the ability to capture, stream, and transcode multiple media formats. This article shows how the JMF API can be used to capture single images from a standard webcam.

In addition to the standard Java Development Kit (JDK), a separate, platform-specific performance pack is required for the JMF API. Because hardware support is required, the example code in this article was developed and tested on the Microsoft Windows platform using a Logitech USB webcam.

Saturday, June 25, 2005


I. Review of issues for secure video streaming collection

Users distributing unauthorized copies via peer-to-peer file sharing.

Watermarking of streaming video – Watermarking is the embedding of a signal into a video stream that is imperceptible when the stream is viewed but can be detected by a watermark detector. The embedded watermark must be detectable even when the watermarked video is altered.

Temporary loss of video and artifacts arising from network congestion and transmission error.

Forward error correction – additional bandwidth must be consumed for transmitting the error correction information and that overhead can be significant.
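
The bandwidth cost of FEC can be made concrete with a toy parity scheme (a sketch for illustration, not any specific FEC standard): for every k data packets, one XOR parity packet is transmitted, adding 1/k overhead and allowing recovery of any single lost packet in the group.

```python
def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def make_parity(packets):
    """Build one XOR parity packet over a group of equal-length packets."""
    parity = packets[0]
    for p in packets[1:]:
        parity = xor_bytes(parity, p)
    return parity

def recover(received, parity):
    """Recover the single missing packet (the None entry) from the parity."""
    missing = received.index(None)
    rec = parity
    for i, p in enumerate(received):
        if i != missing:
            rec = xor_bytes(rec, p)
    return rec

# Group of k = 4 data packets: the overhead is 1 parity packet per 4 (25%).
group = [b"pkt0", b"pkt1", b"pkt2", b"pkt3"]
parity = make_parity(group)

# Simulate losing packet 2 in transit, then recover it from the parity.
received = [group[0], group[1], None, group[3]]
assert recover(received, parity) == b"pkt2"
```

Halving the overhead (k = 8) halves the protection: two losses in one group are unrecoverable, which is why the overhead can be significant for lossy links.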

Error resilience – exploit redundancy in the bit stream syntax to minimize the effect of any damage that is not corrected by FEC, for example by isolating the errors, recovering data lost after desynchronization, and concealing errors.

Error concealment [25, 31] can be used to minimize the visual impact arising from data loss when the video is displayed to the user by predicting the contents of missing or corrupted areas in the video.
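
One common concealment strategy (a minimal sketch, assuming simple frame-copy concealment; rows stand in for macroblocks) is temporal: a lost region of the current frame is replaced by the co-located region of the previous frame, which is usually a good prediction for slowly moving content.

```python
def conceal(current, previous, lost):
    """Fill lost regions of the current frame (here: whole rows, for brevity)
    with the co-located data from the previous frame."""
    out = []
    for i, row in enumerate(current):
        if i in lost:
            out.append(list(previous[i]))  # copy co-located row from previous frame
        else:
            out.append(row)
    return out

prev_frame = [[10, 10], [20, 20], [30, 30]]
curr_frame = [[11, 11], None, [31, 31]]    # row 1 was lost in transit

concealed = conceal(curr_frame, prev_frame, lost={1})
assert concealed[1] == [20, 20]            # hole filled from previous frame
assert concealed[0] == [11, 11]            # intact data untouched
```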

Heterogeneity of receivers – the video stream should be decodable at optimal quality for users with a good network connection, and at usable quality for users with a poor connection.

II. Relevant technologies and protocols: HTTP/HTTPS, RTP


RTP is the Internet-standard protocol for the transport of real-time data, including audio and video. It can be used for media-on-demand as well as interactive services such as Internet telephony.

RTP consists of a data part and a control part; the latter is called RTCP. The data part of RTP is a thin protocol providing support for applications with real-time properties such as continuous media (e.g., audio and video), including timing reconstruction, loss detection, security and content identification. RTCP provides support for real-time conferencing of groups of any size within an internet. This support includes source identification and support for gateways like audio and video bridges as well as multicast-to-unicast translators. It offers quality-of-service feedback from receivers to the multicast group as well as support for the synchronization of different media streams.
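
The "thin" data part of RTP amounts to a 12-byte fixed header carrying exactly the fields a receiver needs for timing reconstruction and loss detection. A sketch of parsing it (per RFC 3550, assuming a packet with no CSRC entries):

```python
import struct

def parse_rtp_header(packet: bytes):
    """Parse the 12-byte fixed RTP header (RFC 3550, section 5.1)."""
    if len(packet) < 12:
        raise ValueError("packet too short for RTP fixed header")
    b0, b1, seq, ts, ssrc = struct.unpack("!BBHII", packet[:12])
    return {
        "version": b0 >> 6,          # always 2 for RTP
        "padding": (b0 >> 5) & 1,
        "extension": (b0 >> 4) & 1,
        "csrc_count": b0 & 0x0F,
        "marker": b1 >> 7,
        "payload_type": b1 & 0x7F,   # e.g. 96+ for dynamic payload types
        "sequence": seq,             # used for loss detection and reordering
        "timestamp": ts,             # used for timing reconstruction
        "ssrc": ssrc,                # identifies the media source
    }

# Build a sample header: version 2, marker set, payload type 96,
# sequence 1000, timestamp 160000, SSRC 0x12345678.
hdr = struct.pack("!BBHII", 0x80, 0x80 | 96, 1000, 160000, 0x12345678)
fields = parse_rtp_header(hdr)
assert fields["version"] == 2
assert fields["payload_type"] == 96
assert fields["sequence"] == 1000
```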

UDP/IP is its initial target networking environment, but efforts have been made to make RTP transport-independent

RTP does not address the issue of resource reservation or quality of service control; instead, it relies on resource reservation protocols such as RSVP.


It's a mobile weblog. A moblog. A blog for people with camera phones. A photoblog.

You take photos, shoot video or capture audio with your camera phone and then email them to us, direct from the phone, wherever you are. The media file is then put online in your very own mobile-multi-mediablog - instantly sharing the moment, as the adverts like to say.

When you upload a media file (or more than one at once), you can add a title (as the email subject) which will be displayed as the title of the post, and accompanying text as the email body.

Unlike most other moblog services, we don't presume to claim ownership of any of your stuff; you're free to choose your own license for your photos, video and audio. In addition to this, we'll never sell or misuse your personal data (email address, personal details, site use patterns and so on). Relax - rights and privacy are as important to us as they are to you.

EarthCam Launches Webcam Site for Mobile Phone Users

New Mobile Application Allows People to Interact with Live Webcams and Share Mobile Webcam Pictures from Camera Phones

New York, March 25, 2004 – EarthCam, the leader in providing streaming video webcam software and technology, is pleased to announce the launch of EarthCam Mobile, a new webcam site for mobile phone users. EarthCam Mobile is a wireless application with an online companion site that enables people to watch live webcam feeds and broadcast their own personal webcam on a mobile phone. Additionally, EarthCam Mobile allows camera phone users to instantly post mobile webcam pictures to a personal online album.

EarthCam Mobile empowers people with a personal visual communications tool at their fingertips. From live views of local traffic and tourist destinations to members sharing their own webcams, EarthCam Mobile provides members with a vast amount of real-time image and weather content to watch on a wireless device. EarthCam Mobile offers users the ability to create and edit a webcam 'Favorites List', receive messages from other members, search cameras, share links with friends and view photos all on their mobile phones.

"It is our vision that camera phones will increasingly become the next generation webcams. With camera phone sales in the USA expected to double to 12 million units this year, EarthCam Mobile is dedicated to providing compelling content and applications for this exciting mobile webcam market," said Brian Cury, founder and CEO of EarthCam. "Throughout the past year, we have been working with leading wireless carriers to develop our application and will be announcing partnership agreements during the second quarter."

Members can easily manage their mobile phone account through the EarthCam Mobile online website. Offering two levels of membership - a Basic service for free and a Premium service for $2.99 per month - EarthCam Mobile provides a suite of tools to assist with accessing and editing all the available features. The website allows members to not only search EarthCam's worldwide network of more than 1000 cameras, but also search the database of other member cams and mobile webcam albums. After creating a 'MyCam' profile, an easy-to-use interface allows members to add cameras and photos to their 'Favorites List' at anytime. Additionally, each member can customize and edit their own web page to post mobile webcam pictures from their camera phone. These mobile webcam pictures can be organized into a variety of personalized albums for private viewing or to be shared with friends.

EarthCam Mobile offers a proprietary software application, EarthCam Mobile Broadcaster, which enables members to broadcast and share their personal webcam with friends, family and other EarthCam Mobile members. The Broadcaster allows members to produce a live broadcast from a PC-based webcam to a mobile phone and invite others to view the broadcast. Provided free to members, features of the Broadcaster include motion detection with SMS alerts, multiple skins for customization, and recording with playback capability.

EarthCam is the leading network of live webcams and, as the premier webcam portal, offers the most comprehensive search engine of webcams from around the world. EarthCam provides complete infrastructure services to manage, host and maintain live streaming video camera systems for its corporate clients. Clients include MSN, Yahoo!, Intel, AOL/Time Warner, ESPN, Panasonic, Ford, NASA, Kodak, Toys "R" Us, Turner Construction, the Army Corps of Engineers, Lockheed Martin, New York City Department of Transportation, City of Chicago, Verizon, Motorola and This Old House. As the foremost provider of live images on the Internet, EarthCam webcasts events such as the Daytona 500, Mardi Gras, Bon Jovi live in concert, New Year's Eve in Times Square and the Super Bowl for the NFL. EarthCam also runs a full-service e-commerce site dedicated solely to providing webcam solutions.

Thursday, June 23, 2005

Video Streaming Product


Webcam32, the #1 selling webcam software*, makes it easy to capture live streaming video and broadcast it on your web page. Many Webcam32 customers use their live webcam to significantly increase their web site traffic. Viewers do not need a plug-in or any additional software to view your live streaming video from their web browser.

Build Your Own Mobile Streaming Production Solution

Incorporating streaming video into the curriculum is a cutting-edge way to access educational programming and provide lectures and study tools to students anytime, anywhere. It’s an excellent way to showcase student video projects throughout the school and the community. Your Mobile Streaming Production Solution can also make exciting school activities such as sporting events or commencement available to the public, promoting parent involvement and increasing community support for your school.

The Apple Mobile Streaming Production Solution perfectly complements the Apple Streaming Backbone Solution. The Streaming Backbone Solution includes the server and storage for delivering streaming content on your institution's network. With the Mobile Streaming Production Solution, class lectures, educational programming, student work and live school events can be produced anytime, anywhere.

Good, better and best configurations provide starting points for your custom Mobile Streaming Production Solution. All configurations include an iBook (Good) or PowerBook (Better and Best). Each solution also includes an iSight camera so you can begin producing streaming content right at your desktop. A range of Apple and third-party options for enhancing your Mobile Streaming Production Solution is also available.

Mobile Streaming

View Multimedia on the Internet!

As streaming enters the mobile environment, Nokia blazes a trail to help bring you closer to the opportunities and advantages of mobile streaming. The Nokia 9210i Communicator is the first Nokia product that supports video and audio streaming over high-speed circuit-switched data (HSCSD) networks.

With the help of streaming, you can enjoy multimedia content without having to download an entire file. The latest news, your favorite songs, sports highlights, and even live broadcasts are all within your reach.

Surviving the Ultimate Reliability Test

Take the net traveler's word for it: when trekking up the highest mountains, or forging through the deepest jungles, there is no room for surprises. Whether you have to double-check the best route back to base camp, or research the social customs of local cultures, it all comes down to trusting your source of information.

There is no reason for the net traveler to worry, as long as he hangs on to his Nokia 9210i Communicator, providing him with full Internet access and a video player. Check out some of his toughest moments - you'll never settle for anything less.

Mobile and broadband streaming

Streaming can be broadly defined as delivering multimedia content, such as video and audio, over the Internet. The end user's terminal is usually a PC with media player software, although many new home entertainment devices capable of receiving streams have also started to appear. As opposed to downloading, streaming allows the playback of media files before the entire file is received by the user. Streaming requires special servers that deliver the streams to users using protocols such as IP multicast, UDP, and RTSP, whereas standard Web servers and the HTTP protocol can be used for downloading. The biggest Internet streaming technology providers are RealNetworks and Microsoft.

By and large, mobile streaming means the delivery of multimedia content over mobile networks such as GSM, GPRS and W-CDMA. The end terminal is a mobile device, such as a PDA or smart phone, that is capable of receiving streams. The streaming servers can be the same as those used for Internet streaming, and therefore technology from RealNetworks, Microsoft and Apple can be used. Other companies, focusing only on mobile streaming, include PacketVideo, Emblaze, and Philips. Since mobile devices have small screens and data bandwidths in current mobile networks are rather limited, the content must be compressed much more than for delivery to PCs connected to the broadband Internet. Also, network delays and reliability issues cause more challenges than with pure Internet streaming.

A Review of Video Streaming over the Internet

Ideally, video and audio are streamed across the Internet from the server to the client in response to a client request for a Web page containing embedded videos. The client plays the incoming multimedia stream in real time as the data is received. Quite a few video streamers are starting to appear, and many pseudo-streaming technologies and other potential solutions are also in the pipeline. Generally, streaming video solutions may work on a closed-loop intranet, but for mass-market Internet use they are simply dysfunctional. However, current transport protocol, codec and scalability research will eventually make video on the Web a practical reality. Below we review the currently available commercial products which purport to provide video streaming capabilities over the Internet and outline their limitations. Then we describe the major research projects currently underway which are attempting to solve some of these limitations. Finally, we compare and evaluate the SuperNOVA project with respect to other research projects and the current commercial products.

Streaming Video


A one-way video transmission over a data network. It is widely used on the Web as well as private intranets to deliver video on demand or a video broadcast. Unlike movie files (MPG, AVI, etc.) that are played after they are downloaded, streaming video is played within a few seconds of requesting it, and the data is not stored permanently in the computer.

If the streaming video is broadcast live, then it may be called "realtime video." However, technically, realtime means no delays, and there is a built-in delay in streaming video.

It's Already in the Buffer

Watching momentary blips in video is annoying, and the only way to compensate for that over an erratic network such as the Internet is to get some of the video data into the computer before you start watching it. In streaming video, both the client and server cooperate for uninterrupted motion. The client side stores a few seconds of video in a buffer before it starts sending it to the screen and speakers. Throughout the session, it continues to receive video data ahead of time.
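
The buffering just described can be sketched as a tiny simulation (illustrative numbers, not any real player's policy): the client delays playback until a few seconds of media have been buffered, so a short network stall drains the buffer instead of freezing the picture.

```python
def simulate_playback(arrivals, prebuffer):
    """arrivals[t] = seconds of media received during second t.
    Playback starts once `prebuffer` seconds are stored, then consumes
    one second of media per second; returns the number of underruns."""
    buffered, playing, underruns = 0.0, False, 0
    for received in arrivals:
        buffered += received
        if not playing and buffered >= prebuffer:
            playing = True
        if playing:
            if buffered >= 1.0:
                buffered -= 1.0        # play one second of media
            else:
                underruns += 1         # the picture would freeze here
    return underruns

# The network delivers nothing during seconds 4-5 (a 2-second stall).
arrivals = [1, 1, 1, 1, 0, 0, 1, 1, 1, 1]

# A 3-second prebuffer absorbs the stall; playing immediately does not.
assert simulate_playback(arrivals, prebuffer=3) == 0
assert simulate_playback(arrivals, prebuffer=0) > 0
```

The trade-off is the start-up delay: the larger the prebuffer, the longer the stalls it can ride out, and the longer the viewer waits before playback begins.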

Videoconferencing Is More Demanding

Videoconferencing is more taxing on the network than streaming video. It requires realtime, two-way transmission with sufficient bandwidth for video coming in and going out at the same time without being able to buffer any of it.

Streaming Media


Streaming Audio

A one-way audio transmission over a data network. It is widely used on the Web as well as private intranets to deliver audio on demand or an audio broadcast (Internet radio). Unlike sound files (WAV, MP3, etc.) that are played after they are downloaded, streaming audio is played within a few seconds of requesting it, and the data is not stored permanently in the computer.

If the streaming audio is broadcast live, then it may be called "realtime audio." However, technically, realtime means no delays, and there is a built-in delay in streaming audio.

It's Already in the Buffer

Listening to momentary blips in music or a conversation is annoying, and the only way to compensate for that over an erratic network such as the Internet is to get some of the audio data into the computer before you start listening to it. In streaming audio, both the client and server cooperate for uninterrupted sound. The client side stores a few seconds of sound in a buffer before it starts sending it to the speakers. Throughout the session, it continues to receive audio data ahead of time.

VoIP Is More Demanding

Voice over IP (VoIP) is more taxing on the network than streaming audio. It requires realtime, two-way transmission with sufficient bandwidth for audio coming in and going out at the same time without being able to buffer any of it.

Streaming Media

Streaming media is media that is consumed (read, heard, viewed) while it is being delivered. Streaming is more a property of the delivery system than the media itself. The distinction is usually applied to media that is distributed over computer networks; most other delivery systems are either inherently streaming (radio, television) or inherently non-streaming (books, video cassettes, audio CDs).

The word "stream" is also used as a verb, meaning to deliver streaming media.

The remainder of this article discusses technology for streaming media over computer networks.


Attempts to display media on computers date back to the earliest days of computing, in the mid-20th century. However, little progress was made for several decades, due primarily to the high cost and limited capabilities of computer hardware.

Academic experiments in the 1970s proved out the basic concepts and feasibility of streaming media on computers.

During the 1980s, consumer-grade computers became powerful enough to display media. The primary technical issues were:

* having enough CPU power and bus bandwidth to support the required data rates
* creating low-latency interrupt paths in the OS to prevent underrun

However, computer networks were still limited, and media was usually delivered over non-streaming channels. In the 1990s, CD-ROMs became the most prevalent method of media distribution to computers.

The late 1990s saw:

* greater network bandwidth, especially in the last mile
* increased access to networks, especially the Internet
* use of standard protocols and formats, such as TCP/IP, HTTP, and HTML
* commercialization of the Internet

These advances in computer networking combined with powerful home computers and modern operating systems to make streaming media practical and affordable for ordinary consumers.


A streaming media system is made of many interacting technologies. Video cameras and audio recorders create raw media. Editors use composition tools to combine raw media into a finished work. Servers store media and make it available to many people. Clients retrieve media from servers and display it to the user. Servers and clients store media in various file formats; they send and receive it in various stream formats.

Servers and clients communicate over computer networks, using agreed-upon network protocols. Servers encode media into a stream for transmission; clients receive the stream and decode it for display. Codecs perform the encoding and decoding.

Media is big. Under current (2005) technology, media storage and transmission costs are still significant; therefore, media is often compressed for storage or streaming.

A media stream can be on demand or live. On demand streams are stored on a server for a long period of time, and are available to be transmitted at a user's request. Live streams are only available at one particular time, as in a video stream of a live sporting event.

Protocol issues

Designing a network protocol to support streaming media raises many issues.

Datagram protocols, such as the User Datagram Protocol (UDP), send the media stream as a series of small packets, called datagrams. This is simple and efficient; however, packets are liable to be lost or corrupted in transit. Depending on the protocol and the extent of the loss, the client may be able to recover the data with error correction techniques, may interpolate over the missing data, or may suffer a dropout.
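
The "interpolate over the missing data" option can be sketched as follows (a toy example assuming one audio sample per packet): the receiver detects gaps in the sequence numbers and fills each gap by linear interpolation between the nearest received neighbours.

```python
def fill_gaps(packets):
    """packets: list of (sequence_number, sample) pairs in arrival order.
    Returns samples over the full sequence range, linearly interpolating
    across any sequence numbers that never arrived."""
    by_seq = dict(packets)
    first, last = min(by_seq), max(by_seq)
    out = []
    for seq in range(first, last + 1):
        if seq in by_seq:
            out.append(by_seq[seq])
        else:
            # nearest received neighbours on each side of the gap
            lo = max(s for s in by_seq if s < seq)
            hi = min(s for s in by_seq if s > seq)
            frac = (seq - lo) / (hi - lo)
            out.append(by_seq[lo] + frac * (by_seq[hi] - by_seq[lo]))
    return out

# Packet 2 was lost between the samples 10.0 and 30.0.
received = [(0, 0.0), (1, 10.0), (3, 30.0), (4, 40.0)]
assert fill_gaps(received) == [0.0, 10.0, 20.0, 30.0, 40.0]
```

Interpolation is cheap and needs no extra bandwidth, but it only papers over small gaps; long losses still produce an audible or visible dropout.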

The Real-time Transport Protocol (RTP), the Real Time Streaming Protocol (RTSP) and the Real Time Control Protocol (RTCP) were specifically designed to stream media over the network. RTP and RTCP are typically carried over UDP, while RTSP, the control protocol, usually runs over TCP.

Reliable protocols, such as the Transmission Control Protocol (TCP), guarantee correct delivery of each bit in the media stream. However, they accomplish this with a system of timeouts and retries, which makes them more complex to implement. It also means that when there is data loss on the network, the media stream stalls while the protocol handlers detect the loss and retransmit the missing data. Clients can minimize the effect of this by buffering data for display.

Another issue is that firewalls are more likely to block UDP-based protocols than TCP-based protocols.

Unicast protocols send a separate copy of the media stream from the server to each client. This is simple, but can lead to massive duplication of data on the network. Multicast protocols undertake to send only one copy of the media stream over any given network connection, i.e. along the path between any two network routers. This is a more efficient use of network capacity, but it is much more complex to implement. Furthermore, multicast protocols must be implemented in the network routers, as well as the servers.
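
The duplication argument can be made concrete with a toy two-level distribution tree (a hypothetical topology): a server feeds one backbone link to a router, which fans out to n clients. Counting per-link transmissions of the stream:

```python
def unicast_link_transmissions(n_clients):
    # Server -> router: one copy per client, then router -> each client.
    return n_clients + n_clients

def multicast_link_transmissions(n_clients):
    # Server -> router: a single shared copy, then router -> each client.
    return 1 + n_clients

n = 1000
assert unicast_link_transmissions(n) == 2000
assert multicast_link_transmissions(n) == 1001
# Under multicast the backbone link carries 1 copy instead of 1000.
```

The last-hop links carry one copy either way; the savings are entirely on the shared upstream links, which is why multicast matters most for large audiences behind a few routers.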

As of 2005, most routers on the Internet do not support multicast protocols, and many firewalls block them. Multicast is most practical for organizations that run their own networks, such as universities and corporations. Since they buy their own routers and run their own network links, they can decide if the cost and effort of supporting a multicast protocol is justified by the resulting bandwidth savings.

Peer-to-peer (P2P) protocols arrange for media to be sent from clients that already have it to clients that do not. This prevents the server and its network connections from becoming a bottleneck. However, it raises technical, performance, quality, business, and legal issues.

Newer camcorders stream video to a computer over a FireWire connection. This uses a system of time-based reservations to ensure throughput, and can be received by multiple clients at once.

Social and legal issues

Some streaming broadcasters use streaming systems that interfere with the ability to record streams for later playback, either inadvertently through poor choice of streaming protocol or deliberately because they believe it is to their advantage to do so. Broadcasters may be concerned that copies will result in lost sales or that consumers may skip commercials. Whether users have the ability and the right to record streams has become a significant issue in the application of law to cyberspace.

In principle, there is no way to prevent a user from recording a media stream that has been delivered to their computer. Thus, the efforts of broadcasters to prevent this consist of making it inconvenient, or illegal, or both.

Broadcasters can make it inconvenient to record a stream, for example, by using unpublished data formats or by encrypting the stream. Of course, data formats can be reverse engineered, and encrypted streams must be decrypted with a key that resides—somewhere—on the consumer's computer, so these measures are security through obscurity, at best.

Efforts to make it illegal to record a stream may rely on copyrights, patents, license agreements, or—in the United States—the DMCA.

Real-time Transport Protocol


The Real-time Transport Protocol (or RTP) defines a standardized packet format for delivering audio and video over the Internet. It was developed by the Audio-Video Transport Working Group of the IETF and published in 1996 as RFC 1889.

It was originally designed as a multicast protocol, but has since been applied in many unicast applications. It is frequently used in streaming media systems (in conjunction with RTSP) as well as videoconferencing and push-to-talk systems (in conjunction with H.323 or SIP), making it the technical foundation of the Voice over IP industry. It goes along with the RTP Control Protocol (RTCP) and is built on top of the User Datagram Protocol (UDP).

RTP was also published by the ITU-T as H.225.0, but later removed once the IETF had a stable standards-track RFC published. It exists as an Internet Standard (STD 64) defined in RFC 3550 (which obsoletes RFC 1889). RFC 3551 (STD 65) (which obsoletes RFC 1890) defines a specific profile for Audio and Video Conferences with Minimal Control.

Secure Scalable Video Streaming for Wireless Networks – Susie J. Wee

Conventional Approaches to Secure Video Streaming

This section discusses two conventional approaches to secure video streaming. The first is a secure video streaming system that uses application-level encryption. The video is first encoded into a bitstream using interframe compression algorithms such as MPEG or H.263, or intraframe compression algorithms such as JPEG or JPEG2000. The resulting bitstream is encrypted, and the resulting encrypted stream is packetized and transmitted over the network using a transport protocol such as UDP. The difficulty with this approach occurs when a packet is lost. Specifically, error recovery is difficult because without the data from the lost packet, decryption and/or decoding may be difficult if not impossible.
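
The loss problem with encrypting the whole bitstream can be illustrated with a toy position-dependent keystream (an illustration only, NOT real cryptography): if the receiver does not know how many ciphertext bytes were lost, its keystream position is wrong for everything after the gap, so later packets become undecryptable even though they arrived intact. Packetizing with known per-packet positions, as network-level framing does, confines the damage to the lost packet.

```python
def keystream(i):
    # Toy position-dependent keystream byte (NOT real cryptography).
    return (i * 37 + 11) % 256

def encrypt(data: bytes) -> bytes:
    return bytes(b ^ keystream(i) for i, b in enumerate(data))

def decrypt(data: bytes, start: int = 0) -> bytes:
    return bytes(b ^ keystream(start + i) for i, b in enumerate(data))

plaintext = b"frame0--frame1--frame2--"
ciphertext = encrypt(plaintext)

# Packetize the encrypted stream into 8-byte packets; packet 1 is lost.
packets = [ciphertext[0:8], ciphertext[8:16], ciphertext[16:24]]
received = packets[0] + packets[2]

# Stream-level decryption desynchronizes after the gap: frame2 garbles
# even though its bytes arrived intact.
recovered = decrypt(received)
assert recovered[:8] == b"frame0--"
assert recovered[8:] != b"frame2--"

# With per-packet framing the receiver knows each packet's stream
# position, so the loss costs only the lost packet.
assert decrypt(packets[2], start=16) == b"frame2--"
```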

Another approach is a secure video streaming system that uses network-level encryption. This system can use the same video compression algorithms as the previous system. However, in this system the packetization can be performed in a manner that considers the content of the coded video and thus results in better error recovery, a concept known to the networking community as application-level framing. For example, a common approach is to use MPEG compression with the RTP transport protocol, which is built on UDP. RTP provides streaming parameters such as time stamps and suggests methods for packetizing MPEG payload data to ease error recovery in the case of lost or delayed packets.

Both these approaches are secure in that they transport the video data in encrypted form. However, if network transcoding were needed, it would have to be performed with a decrypt-process-re-encrypt method. The transcoding operation is a decrypt, decode, process, re-encode, and re-encrypt process. The computational requirements of this operation can be reduced by incorporating efficient transcoding algorithms in place of the decode, process, and re-encode modules. However, even improved transcoding algorithms have computational requirements that are not well-suited for transcoding many streams in a network node. Furthermore, a more critical drawback stems from the basic need to decrypt the stream for every transcoding operation. Each time the stream is decrypted, it opens another possible attack point and thus increases the vulnerability of the system. Thus, each transcoder further threatens the security of the overall system.


Research and Design of a Mobile Streaming Media Content Delivery Network


Delivering media to large numbers of mobile users presents challenges due to the stringent requirements of streaming media, mobility, wireless links, and scaling to support large numbers of users. This paper presents a Mobile Streaming Media Content Delivery Network (MSM-CDN), designed to overcome these challenges.


Secure Media Streaming & Secure Adaptation for Non-Scalable Video


Two important capabilities in media streaming are (1) adapting the media for the time-varying available network bandwidth and diverse client capabilities, and (2) protecting the security of the media. The problem of providing end-to-end security while adapting the stream at a (potentially untrusted) sender, mid-network node or proxy can be solved via a framework called Secure Scalable Streaming (SSS), which provides the ability to transcode the content without requiring decryption. In addition, this enables secure transcoding to be performed in an R-D (rate-distortion) optimized manner. The original SSS work was performed for scalably coded media. This paper examines its potential application to non-scalable media.