Saturday, 23 February 2008

Memory camera implanted into a mouse by Japanese researchers

http://uk.reuters.com/article/healthNews/idUKT7718020080131

Working with researchers at Kinki University, Ohta implanted the special semiconductor camera inside the hippocampus of the mouse's brain, designing the device so that a screen showed blue light whenever the camera captured memory being recorded by the brain.

The researchers injected the mouse with a substance that lights up whenever there is brain activity. The camera then captures that light and the visuals come up on a screen.

Tuesday, 19 February 2008

What is Compression?

Taken from http://en.wikipedia.org/wiki/Image_compression

Image compression is the application of data compression to digital images. In effect, the objective is to reduce redundancy of the image data in order to be able to store or transmit data in an efficient form.
A chart showing the relative quality of various jpg settings and also compares saving a file as a jpg normally and using a "save for web" technique

Image compression can be lossy or lossless. Lossless compression is sometimes preferred for artificial images such as technical drawings, icons or comics. This is because lossy compression methods, especially when used at low bit rates, introduce compression artifacts. Lossless compression methods may also be preferred for high value content, such as medical imagery or image scans made for archival purposes. Lossy methods are especially suitable for natural images such as photos in applications where minor (sometimes imperceptible) loss of fidelity is acceptable to achieve a substantial reduction in bit rate.

Methods for lossless image compression are:

* Run-length encoding – used as the default method in PCX and as one of the possible methods in BMP, TGA and TIFF
* Entropy coding
* Adaptive dictionary algorithms such as LZW – used in GIF and TIFF
* Deflate – used in PNG, MNG and TIFF
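Run-length encoding, the simplest of these lossless methods, just replaces runs of identical values with a (count, value) pair. A minimal sketch of the idea in Python (not the actual PCX or BMP byte format, just the principle):

```python
def rle_encode(pixels):
    """Replace runs of identical pixel values with (count, value) pairs."""
    encoded = []
    for value in pixels:
        if encoded and encoded[-1][1] == value:
            encoded[-1] = (encoded[-1][0] + 1, value)
        else:
            encoded.append((1, value))
    return encoded

def rle_decode(encoded):
    """Expand (count, value) pairs back into the original pixel list."""
    pixels = []
    for count, value in encoded:
        pixels.extend([value] * count)
    return pixels

# A scanline with long runs of the same colour compresses well:
row = [255] * 10 + [0] * 5 + [255] * 10
packed = rle_encode(row)
print(packed)                     # [(10, 255), (5, 0), (10, 255)]
assert rle_decode(packed) == row  # lossless: round-trips exactly
```

This is why RLE suits icons and technical drawings (long flat runs of one colour) and does badly on photographs, where neighbouring pixels rarely match exactly.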

Methods for lossy compression:

* Reducing the color space to the most common colors in the image. The selected colors are specified in the color palette in the header of the compressed image. Each pixel just references the index of a color in the color palette. This method can be combined with dithering to avoid posterization.
* Chroma subsampling. This takes advantage of the fact that the eye perceives brightness more sharply than color, by dropping half or more of the chrominance information in the image.
* Transform coding. This is the most commonly used method. A Fourier-related transform such as the DCT or the wavelet transform is applied, followed by quantization and entropy coding.
* Fractal compression.
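The transform-coding step can be sketched without the full JPEG machinery: apply a DCT to a block of samples, then quantize the coefficients, which is where the loss actually happens. A toy 1-D version in Python using the textbook DCT-II formula (real codecs such as JPEG work on 8x8 2-D blocks with per-coefficient quantization tables):

```python
import math

def dct(block):
    """1-D DCT-II: concentrates the block's energy into few coefficients."""
    n = len(block)
    return [sum(x * math.cos(math.pi * k * (2 * i + 1) / (2 * n))
                for i, x in enumerate(block))
            for k in range(n)]

def idct(coeffs):
    """Inverse (DCT-III), scaled so it exactly undoes dct() above."""
    n = len(coeffs)
    return [(coeffs[0] / 2 +
             sum(c * math.cos(math.pi * k * (2 * i + 1) / (2 * n))
                 for k, c in enumerate(coeffs) if k > 0)) * 2 / n
            for i in range(n)]

def quantize(coeffs, step):
    """Round each coefficient to a multiple of `step` -- the lossy part."""
    return [round(c / step) * step for c in coeffs]

# A smooth 8-sample "scanline": most energy lands in the low coefficients,
# so coarse quantization throws away little that the eye would notice.
block = [52, 55, 61, 66, 70, 61, 64, 73]
q = quantize(dct(block), step=20)
restored = [round(x) for x in idct(q)]
print(restored)  # close to the original block, but not identical -- lossy
```

Without the quantize step the transform is fully reversible; the `step` value plays the role of the JPEG quality slider.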

The best image quality at a given bit-rate (or compression rate) is the main goal of image compression. However, there are other important properties of image compression schemes.

Google Video of an MIT lecture about the mathematics of compression, to understand how it works.



Although I don't understand the mathematics too well, I understand that most image compression splits the image into smaller blocks and then looks for matching pixels, so that pixels similar in colour to their neighbours become the same colour, keeping the byte count low, using a fixed set of reference values.

The whole process is very mathematical and code-oriented. There are multiple compression methods for images, audio and video (motion). Examples being GIF, JPG, PNG, TIFF, MPEG-1, MPEG-2, MPEG-4, etc.

Here is another Google lecture video which focuses more on the lossy image compression technique. It explains the mathematics of compression again, but it is more understandable as it isn't as maths-heavy as the other lecture.



The downside of compression, if the goal is to recover the raw image from the compressed version, is that 'compression artifacts' appear. They increase with the aggressiveness of the compression: lossy compression methods discard data, and this causes glitches and artifacts.

Taken from http://en.wikipedia.org/wiki/Compression_artifact

Compression artifacts occur in many common media such as DVDs, common computer file formats such as JPEG, MP3, or MPEG files, and Sony's ATRAC compression algorithm. Uncompressed media (such as on Laserdiscs, Audio CDs, and WAV files) or losslessly compressed media (FLAC, PNG, etc.) do not suffer from compression artifacts unless they were encoded from a compressed source.

Where motion prediction is used, as in MPEG-2 or MPEG-4, compression artifacts tend to move with the optic flow of the image, leading to a peculiar effect, part way between a painting effect and "grime" that moves with objects in the scene.

Errors in the bit-stream can lead to errors similar to large quantization errors, or can disrupt the parsing of the data stream entirely for a short time, leading to "break-up" of the picture. Where gross errors have occurred in the bit-stream, it is not unknown for decoders to continue to apply 'painting' updates to the damaged picture, creating "ghost image" effects.

To stop the build-up of compression artifacts, most compression systems occasionally send an entire compressed frame without prediction or differencing, usually at the start of a shot and at regular intervals thereafter. In MPEG picture coding, these are known as "I-frames", with the 'I' standing for "intraframe compression."


Video compression is slightly different, as the algorithms look for changes from one frame to the next and only encode the pixels that have changed, reducing the amount of data stored for each frame.
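That frame-to-frame idea can be sketched as simple frame differencing: store the first frame in full, then for each later frame store only the pixels that changed. A rough illustration in Python, with frames as flat lists of pixel values (real codecs work with motion-compensated blocks, not single pixels):

```python
def diff_frame(prev, curr):
    """Store only (index, new_value) pairs for pixels that changed."""
    return [(i, c) for i, (p, c) in enumerate(zip(prev, curr)) if p != c]

def apply_diff(prev, diff):
    """Rebuild the current frame from the previous frame plus the diff."""
    frame = list(prev)
    for i, value in diff:
        frame[i] = value
    return frame

# Two consecutive "frames" where one bright pixel moves one place right:
frame1 = [10, 10, 10, 10, 200, 10, 10, 10]
frame2 = [10, 10, 10, 10, 10, 200, 10, 10]
delta = diff_frame(frame1, frame2)
print(delta)  # [(4, 10), (5, 200)] -- 2 entries instead of 8 pixels
assert apply_diff(frame1, delta) == frame2
```

A static shot produces almost-empty diffs, which is why talking-head footage compresses far better than fast camera pans.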

This blog documents video coding fundamentals; it includes an introduction and looks at codecs.
http://videocodecs.blogspot.com/2007/05/image-coding-fundamentals_08.html

Since video can be viewed as a sequence of pictures, video coding can be seen as an extension of image compression. Compression is the process of compacting data, reducing the number of bits. By exploiting features of the eye and removing redundant data we can achieve compression. Compression involves a complementary pair of systems, a compressor (enCOder) and a decompressor (DECoder), hence the name CODEC, the system that performs encoding and decoding.

http://en.wikipedia.org/wiki/Video_compression

Most video compression is lossy, i.e. it operates on the premise that much of the data present before compression is not necessary for achieving good perceptual quality. For example, DVDs use a video coding standard called MPEG-2 that can compress ~2 hours of video data by 15 to 30 times while still producing a picture quality that is generally considered high quality for standard-definition video. Video compression, like data compression, is a tradeoff between disk space, video quality and the cost of hardware required to decompress the video in a reasonable time. However, if the video is overcompressed in a lossy manner, visible (and sometimes distracting) artifacts can appear.

Video is basically a three-dimensional array of color pixels. Two dimensions serve as spatial (horizontal and vertical) directions of the moving pictures, and one dimension represents the time domain. A data frame is a set of all pixels that correspond to a single point in time. Basically, a frame is the same as a still picture. (These are sometimes made up of fields. See interlace.)

Video data contains spatial and temporal redundancy. Similarities can thus be encoded by merely registering differences within a frame (spatial) and/or between frames (temporal). Spatial encoding is performed by taking advantage of the fact that the human eye is unable to distinguish small differences in color as easily as it can changes in brightness and so very similar areas of color can be "averaged out" in a similar way to jpeg images (JPEG image compression FAQ, part 1/2). With temporal compression only the changes from one frame to the next are encoded as often a large number of the pixels will be the same on a series of frames.
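The spatial "averaging out" of colour described above is essentially what chroma subsampling does: keep brightness at full resolution but store only one colour sample per group of pixels. A toy sketch in Python, assuming the image is already split into separate luma and chroma channels (here 4:1 subsampling along one scanline; real schemes like 4:2:0 subsample in both directions):

```python
def subsample_chroma(chroma, factor=4):
    """Keep one chroma sample per `factor` pixels (the group's average)."""
    return [sum(chroma[i:i + factor]) / len(chroma[i:i + factor])
            for i in range(0, len(chroma), factor)]

def upsample_chroma(samples, factor=4, length=None):
    """Stretch the reduced samples back out; fine detail is gone for good."""
    full = [s for s in samples for _ in range(factor)]
    return full[:length] if length is not None else full

# One scanline of a chroma channel: small variations within each group.
chroma_row = [100, 102, 98, 100, 30, 28, 32, 30]
reduced = subsample_chroma(chroma_row)          # 8 values -> 2 values
restored = upsample_chroma(reduced, length=8)
print(reduced)   # [100.0, 30.0]
print(restored)  # [100.0, 100.0, 100.0, 100.0, 30.0, 30.0, 30.0, 30.0]
```

Because the eye resolves brightness more sharply than colour, the luma channel would be left untouched while the chroma channels shrink to a quarter of their size.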

The Trip (adding narrative)

Knowing I was going to use video, photography and audio, I needed to add narrative to give some direction to the experiments I wanted to do. Whilst recording along my route home I realised it was a perfect location to add some narrative to my work, with lots of movement and activity.

I do not have a DV cam or anything that can produce high-resolution, full-framerate video, so I just recorded sections on my W850i, which shoots roughly 7fps 640x480 video. Looking at the results, I realised they were really bad: people's faces were not visible, signs were unreadable and generally the quality was awful. The reason for this was the compression of the video, as the format is 3GP, and also the quality of the lens and processing.

I discovered a link between loss of digital data and the loss of data in human memory.
I would like to explore this idea of compression more, looking at ways I can show information loss in an image.



Here is the route I will take, drawn on Google Maps. Each placemark has a description of the surroundings and why it is a nice place to film.

When filming my first trip in segments, something nice happened which made me realise that I really wanted to capture the city as it happens, and that storyboarding cannot happen in such detail, as events are unplanned.

You can see, through the awful quality, that a policeman runs past me, followed by a police car. The audio adds more drama to the scene than the video alone.



I think it's quite nice how the scene starts out quite slow-paced: you hear the footsteps of women walking alongside me, the cars, and the general hum of the city. Then the siren rings out over the sound and you see this fluorescent shape sort of fly past, which adds movement, and the scene becomes quite dramatic; then we are alerted to the noise itself as the car passes.