Thursday 20 June 2013

Data Traffic Increases

Ericsson's Mobility Report June 2013



Ericsson released the latest version of its quarterly Mobility Report earlier this month, showing that mobile data traffic is growing fast, and video is the reason for the surge.

Ericsson estimates that data traffic will grow 12-fold by 2018. The chief reasons for the increase are the amount of online video available and improved network speeds, thanks to HSPA and LTE development.
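Out of curiosity, here's the compound annual growth rate that a 12-fold increase implies. This is my own back-of-the-envelope arithmetic, assuming the growth spans 2012 to 2018:

```python
# Implied compound annual growth rate (CAGR) of a 12x increase over 6 years
growth_factor = 12
years = 6  # 2012 -> 2018

cagr = growth_factor ** (1 / years) - 1
print(f"implied growth: {cagr:.0%} per year")  # ~51% per year
```

Roughly 50 percent growth per year, every year, which puts "growing fast" into perspective.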

Video traffic alone is growing by 60 percent each year, Ericsson notes. As network speeds increase, mobile users are rushing to take advantage of them. Video is already the largest category of data traffic; some networks average 2.6GB of video data per subscription per month, which agrees with the latest Cisco VNI.

Video might be the biggest data hog, but social networking is the biggest time hog. Mobile customers spend more time on social networks than they do streaming videos. On some networks, subscribers spend up to 85 minutes per day on social networks. 


Ericsson's report also looks at the growth of smartphones, finding that half of mobile phone sales in the first quarter of 2013 were for smartphones. That compares to 40 percent of phone sales for 2012. Global mobile phone subscriptions grew by 8 percent in the first quarter of 2013.



Ericsson's quarterly reports and archives can be found here - Ericsson Mobility Report




Facebook Attempting to Tear Down Vine




So Facebook purchased Instagram with a plan to develop this feature of recording 15-second shorts that can be posted and shared with everyone..... No they didn't.... Did they?

In my opinion, FB has suddenly woken up and seen the money going elsewhere, namely to Vine. Cue a late response from FB.

Vine, which Twitter acquired in October 2012 (not so long ago!), has 13 million users. More importantly, the money is on Vine: reportedly 50,000 brands are using it to reach their audiences.




Copy Cat!

Last week Facebook rolled out support for hashtags, another feature pioneered on Twitter. Like video, hashtags have been a crucial element of Twitter's advertising business. They are a way to focus attention around big events like the Super Bowl and the Oscars. Hashtags organize the real-time conversation and create an anchor for people to connect to trending topics and campaigns. And that's what marketers want: to be part of the big discussions.


Compare this to Facebook's approach to advertising, which has been all about targeting based on your personal data (the "shut up, slave! we own you now" approach adopted by FB) and social connections. The company made a big pitch to advertisers that it would create a new type of marketing where the words and photos shared by its users would become native ad units. The one most tied to Facebook's DNA was "Sponsored Stories".

As it turns out, people hated this. Facebook's ad model converts you into an unwitting pitchman for a product you may or may not love.

Stop SPAMMING me .....

Sometimes this means you spam your friends and family over and over with the same post, which has been purchased and promoted to the top of their news feed. Other people become unwilling mascots for large quantities of sexual lubricant they never purchased or intended to promote.

The Good News

Facebook seems to have recognized the problem. Just two weeks ago, it eliminated more than half its advertising products. Sponsored stories got the axe. The idea behind them is still being integrated into all of Facebook's marketing efforts, but it's clear the company is turning towards a more conventional approach.

Video could be a game-changer for Facebook. Advertisers are willing to pay much higher prices to show consumers a short clip than they would for a display ad, no matter how tricked out it is with social context. When you combine video with real-time conversation and massive audiences, you get a form of broadcast advertising. The biggest slice of the traditional advertising pie that companies like Facebook and Twitter can take is the $80 billion annual TV spend.



I told you already!

Twitter has already begun to form deep partnerships with big players in television like ESPN, A&E, and Bloomberg. See my recent post, TV and Twitter.

Hashtags are becoming a major part of promoting live events. At a recent event in New York, Twitter showed off the integration of sports highlights and Vine.

All aboard ....

The recent announcements from Facebook show it recognizes that leveraging personal and social data often makes for uncomfortable advertising. With the addition of hashtags and video, Facebook is gearing up to take on Twitter in the big battle for real-time, broadcast style advertising dollars instead. Brands are already jumping on board.

Tuesday 18 June 2013

Bigger is better... some say

Sharp launches a new 90" Aquos LED model. Now the largest TV on sale in Europe.



Sharp has announced the 90-inch LC-L90LE757 LED TV, which is the biggest commercially available LED TV set on sale in Europe. 

The 3D set is over 1.2 metres tall and uses a cutting-edge Xgen 3D panel.

The television is of course Full HD, with a 1080p panel capable of running at 200Hz, which should mean you get incredibly detailed and clean-looking action. Sharp is also keen to push the Smart TV functionality with its 90-inch set, so you get full access to the Aquos Net+ portal, including apps like Skype and web browsing.

You can control the television set using Sharp's own Android and iOS application, which lets you use your smartphone both to browse the internet on the TV and as a standard remote control.

The whole television is finished in aluminium; at 2 metres long and weighing 64kg, it's still wall-mountable.

Try carrying this unit on your own!!! 64kg??? You'll need to strengthen that wall too! Actually, you may need to make a hole in the wall just to get the unit into the room in the first place.

Given its size, the set will dominate any room. As such Sharp has included a wallpaper mode that will use the set as a digital photo frame when in standby.

The 90-inch LED set is available now, priced at £12,000. That's fairly decent value when you consider Samsung's 84-inch set retails for around $24,000 in the US.



Friday 14 June 2013

Online Video More Popular Than Social Media by 2017

Do you believe that online video sites will be more popular than the social media sites of today? 


Well, according to Cisco, in four years that will be the case, as online video is currently growing at a faster rate than social media was at the same stage in its development.

Cisco's new Visual Networking Index (VNI) provides the stats behind the forecast. In the report, Cisco states:


  • By 2017, there will be 3.6 billion global Internet users, up from 2.3 billion global Internet users in 2012
  • By 2017, there will be 19 billion networked devices globally, up from 12 billion networked devices in 2012
  • By 2017, average global broadband speed will grow 3.5-fold, from 11.3 Mbps (2012) to 39 Mbps (2017)
  • By 2017, global IP traffic will reach an annual run rate of 1.4 zettabytes, up from 523 exabytes in 2012


What is a zettabyte and an exabyte?
Zetta is the seventh power of 1000, so a zettabyte (ZB) is 10 to the power of 21 bytes.
1 ZB = 1,000,000,000,000,000,000,000 bytes = 1000^7 bytes = 10^21 bytes = 1000 exabytes.
Exa is the sixth power of 1000, so an exabyte (EB) is 10 to the power of 18 bytes.
1 EB = 1,000,000,000,000,000,000 bytes = 1000^6 bytes = 10^18 bytes = 1000 petabytes.
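To make those units concrete, here is a small Python sketch (my own, purely illustrative) converting between the decimal SI units Cisco uses:

```python
# Decimal (SI) storage units: each step up is a power of 1000
UNIT_POWERS = {"KB": 1, "MB": 2, "GB": 3, "TB": 4, "PB": 5, "EB": 6, "ZB": 7}

def to_bytes(value, unit):
    """Convert a value in the given unit to bytes."""
    return value * 1000 ** UNIT_POWERS[unit]

# Cisco's 2017 forecast: an annual run rate of 1.4 ZB...
annual_bytes = to_bytes(1.4, "ZB")

# ...which works out to roughly 117 exabytes per month
print(annual_bytes / to_bytes(1, "EB") / 12)  # ~116.7
```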

In the newest edition of its data-heavy report on how we all spend our time and bandwidth, Cisco points to social networking as the world's most popular type of consumer service, with 1.2 billion users worldwide tweeting, Facebooking and more in 2012. That's 66 percent of residential internet users, if you need to know. Cisco estimates that this number will grow to 1.73 billion users by 2017, which will then represent around 70 percent of the also-growing internet population.

Online video services, on the other hand, had just around 1 billion users worldwide in 2012. The company estimates that this number will almost double by 2017, reaching close to 2 billion users worldwide. That means that in four years, 81 percent of the world's internet users will also use online video services. In 2012, that number was still at around 58 percent. Online video will account for 69 percent of consumer internet traffic by 2017 (up from 57 percent in 2012).

And that 1.4 zettabyte annual run rate? Scary thought! That's more IP traffic than the internet has seen in the last 18 years put together.

All of those video streams will also have a major impact on bandwidth consumption. Cisco estimates that:

  • Mobile video will grow 16-fold from 2012 to 2017, accounting for 66 percent of all mobile data traffic by that year.
  • Internet-to-TV streaming will grow from 1.3 exabytes per month in 2012 to 6.5 exabytes per month in 2017.
  • The number of web-enabled TVs in consumers' homes will grow from close to 180 million in 2012 to 827 million in 2017.
  • Game consoles will become slightly less important as a way to bring internet video to the TV screen, while dedicated streaming boxes will see the biggest growth.






Thursday 13 June 2013

Did the BBC lie?

This morning I awoke and read my weekly Register update.

One story in particular, written by Andrew Orlowski, caught my attention, as I covered this in an earlier post - 'BBC Fails to Complete DMI Project.'

Quick recap: new BBC DG Tony Hall scrapped the project after four years and £100m of wasted public money. The project was intended to modernize the BBC's storage facilities, digitizing all footage and other material.


Did the BBC lie to Parliament? 

Shockingly, it appears they did. According to reports, the BBC misrepresented the progress and state of the project to the NAO (National Audit Office) in 2011.

The extracts below are taken from Andrew's piece, which can be found at http://www.theregister.co.uk/2013/06/12/bbc_dmi_digital_media_initiative/

These statements really hit home for me and highlighted that this entire project, the 'strategic' DMI (short for Digital Media Initiative), was a complete mess.

Who lied? In short, the BBC Trust. The internal Beeb body with statutory responsibility to represent TV licence fee-payers' interests failed to provide oversight and encouraged the expansion of the Utopian project, which became a kind of evangelical mission; it is a failing that raises doubts about the current arrangement of "internal arm's-length self-regulation" at the BBC.

Astonishingly, in 2011 the BBC Trust urged that scrutiny of the DMI should look beyond "narrow financial thresholds", and that conventional cost-benefit analysis be thrown away or expanded to include new (and intangible) Kumbaya benefits, such as enabling people to do "mash-ups" and third-party access to the BBC archive. The watchdog had gone feral.

This is potentially far more damaging than some recent Auntie scandals. Former BBC Director-General Mark Thompson, now CEO of the New York Times company, and former BBC Future Media chief Erik Huggers, now a corporate veep at Intel, are likely to be summoned to appear before Parliament's Public Accounts Committee to discuss the axed project.

What the NAO did.

In early 2011 the National Audit Office examined the BBC’s handling of the DMI, which started out as an effort to replace outdated tape machines with a system that provided always-on, real-time access to both archive and work-in-progress footage, viewable on any device. Bells and whistles were continually added to the IT project, such as the ability to provide that real-time access to third parties and make the BBC’s archive accessible for kids making mash-ups.

“I believe it will end up saving the BBC and licence payers very significant amounts of money. But we are also going to share this technology with key independent suppliers for the BBC and with other public bodies,” former DG Mark Thompson told the accounts committee in 2011.

“So the idea is to try and get as much value out of this investment as possible. So that's not just the BBC, but independent, commercial companies in this country, and also other public bodies, get the same kind of state-of-the-art ways of manipulating content.”

This evangelism was applauded by the BBC Trust. Senior BBC executives regarded the project as so “strategic” it was one of seven “untouchable” initiatives, on a par with the broadcaster's move to a £1bn production facility up north in Salford. Yet with milestones being missed, staff were obliged to buy new tape machines anyway.

“A number of releases have been successfully delivered and initial feedback from users has been very positive,” the trust gushed in its response to the audit office’s report on the DMI in 2011.

In 2011 the BBC Trust declared:

Since bringing the project in-house, the Trust has been satisfied with the progress of the project. We note the NAO’s [National Audit Office's] recognition that the BBC has now started to deliver the DMI system and that users have been positive about the elements delivered. In the context of the DMI being a complex and cutting edge IT project, the Trust considers this is something of which the BBC should be proud.

There then follows an even more intriguing bit:

We consider that the referral thresholds as currently drafted are clear and have worked well to date, ensuring an appropriate balance such that the Trust is involved in strategic rather than the more routine operational decisions. However, we note the NAO’s comments that these are narrow financial thresholds. We agree that we should review these referral criteria in light of the NAO’s comments and consider expanding these to include significant changes to the cost-benefit of a project. The Trust will consider how best to implement this recommendation.

In other words, leave us alone: we’re on a divine mission, one that you accountants can’t measure in pounds and pennies.

But the chairman of Parliament's Public Accounts Committee, Margaret Hodge, said these glowing impressions were based on fiction. At this week's hearing in Salford, she told the BBC's chief financial officer Zarin Patel and Beeb trustee Anthony Fry:

Both the NAO report and the evidence from the BBC suggested that DMI was delivering in practice. What the letter from [whistleblower] Bill Garrett goes on to say is that the reality is at odds with the representation given in the NAO report … So Mr Thompson told us that things were already being used because of this great agile project; then Mr Garrett told us that was not true.
Therefore the evidence given to us was not correct at that time and had you [the BBC] given us the correct evidence we might have come to a very different view to the one we came to when we looked at this.

A final thought: the UK is arguably the most tech-savvy nation on the planet. For anyone puzzled as to why the Trust apparently abandoned licence fee-payers and went native, the answer should be found in the ambiguities of the body's 2006 Royal Charter (PDF).

All comments on this topic are gratefully received.

James






Wednesday 12 June 2013

Networking 101 The OSI Model

From time to time I like to publish an educational blog. In today's edition we are going to review the OSI model.

The OSI model is made up of 7 layers. OSI stands for Open Systems Interconnection.

This model is something that all network engineers should know off by heart.

I am CCNA-level accredited (note: it has lapsed and I need to re-qualify).

In today's blog, let's take a look at the OSI model and its 7 layers.










The OSI model is used to describe the function and purpose of the various elements in network communications.

In theory, data is received by the protocol stack into layer 7, the application layer, from software. The received data is labeled as a service data unit (SDU). Each layer adds its own layer-specific header to the SDU, thus creating a protocol data unit (PDU). The PDU is then passed down to the next layer below, where it becomes the SDU of that layer. This process of traversing down the layer stack is known as encapsulation.

Once layer 1, the physical layer, receives the PDU from layer 2, the data link layer, the data is transmitted over the network medium (i.e., twisted pair cable, fiber optic cable, or wireless).

When a network interface receives a data signal from the network medium, it processes the PDU in reverse. This reverse unpacking process is known as de-encapsulation. Each layer reads its corresponding header of the PDU, processes and removes the header from the PDU, creating an SDU, then passes the SDU to the layer above. This is repeated until layer 7, the application layer, receives its PDU and passes the actual data to software.
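A toy sketch in Python can make encapsulation and de-encapsulation concrete. This is purely illustrative: real headers are binary structures defined by each protocol, not text labels.

```python
# Layers 7 down to 2 each wrap the data with their own "header"
LAYERS = ["application", "presentation", "session",
          "transport", "network", "data link"]

def encapsulate(data: bytes) -> bytes:
    pdu = data
    for layer in LAYERS:                        # top of the stack downwards
        pdu = f"[{layer} hdr]".encode() + pdu   # SDU + header -> PDU
    return pdu                                  # handed to layer 1 for transmission

def de_encapsulate(pdu: bytes) -> bytes:
    for layer in reversed(LAYERS):              # bottom of the stack upwards
        header = f"[{layer} hdr]".encode()
        assert pdu.startswith(header), f"corrupt {layer} header"
        pdu = pdu[len(header):]                 # strip header: PDU -> SDU
    return pdu

wire = encapsulate(b"GET / HTTP/1.1")
print(wire)                                     # data wrapped in six headers
print(de_encapsulate(wire))                     # b'GET / HTTP/1.1' back out
```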

While the generic term for a header and payload data for a given layer is known as a PDU, some layers and/or protocols have unique names for this structure. These include:
• 4 - Transport - TCP - segment
• 4 - Transport - UDP - datagram
• 3 - Network - packet or datagram
• 2 - Data link - frame

There are two additional oddities to the process of encapsulation. The first is that the data link layer often adds a header and footer to the SDU to create its PDU. For example, the most common data link layer technology is  Ethernet. Ethernet adds a header containing the destination MAC address, source MAC address, and EtherType designation (i.e., identification of the type of payload), while the footer includes a checksum value to perform integrity  checks. The second is that most layer 1 physical layer technologies do not add any headers (or footers) to the PDU from layer 2. Instead, start and stop delimiter bits might be used on asynchronous communication (i.e., not time-synched), but these are not considered to be part of the PDU, just part of the transmission technology.
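To see that header-and-footer structure, here is a simplified Python sketch that packs a minimal Ethernet II frame. Note this is illustrative only: a real NIC computes the frame check sequence in hardware with specific bit ordering, and pads payloads shorter than 46 bytes.

```python
import struct
import zlib

def ethernet_frame(dst_mac: bytes, src_mac: bytes,
                   ethertype: int, payload: bytes) -> bytes:
    """Header (dst MAC, src MAC, EtherType) + payload + CRC32 footer."""
    header = struct.pack("!6s6sH", dst_mac, src_mac, ethertype)
    body = header + payload
    fcs = struct.pack("!I", zlib.crc32(body))  # integrity-check footer
    return body + fcs

frame = ethernet_frame(
    dst_mac=bytes.fromhex("ffffffffffff"),  # broadcast destination
    src_mac=bytes.fromhex("001122334455"),
    ethertype=0x0800,                       # 0x0800 marks an IPv4 payload
    payload=b"...layer 3 packet goes here...",
)
print(len(frame), "bytes:", frame.hex())
```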

The header added by a layer is configured to include information relevant to the same layer on the receiving
system. This layer-to-layer communication via header (and footer) is known as peer layer communications. It is essential that headers (and payload) arrive uncorrupted and are unpacked (i.e., de-encapsulated) in the correct  order. Fortunately, communication errors are rare, intentional corruption is detectable, and most systems use standardized protocol stacks, such as TCP/IP, thus peer-layer communication is generally flawless.

Layer 7, the application layer, is the interface between the protocol stack and application software. The software might be client utilities or server services. It is the ability of software to communicate with the standardized interface of application layer protocols that makes network communications possible. In fact, the use of common application layer protocols allows for fully interoperable computer communications. The application layer is assigned the responsibility to check whether a remote communication partner is available, confirm communications with that partner are possible, and evaluate whether or not there are sufficient resources to maintain a communication.

It is important to remember that software, like a Web browser or an e-mail server, is not part of the application layer; rather, it communicates with protocols in the application layer. Most applications have a specific protocol designed around their data types, functions, and features. These include many of the most commonly used application protocols such as:

• HTTP - hypertext transfer protocol
• FTP - file transfer protocol
• SMTP - simple mail transfer protocol
• POP3 - post office protocol
• IMAP - internet message access protocol
• DNS - domain name system
• Telnet

The presentation layer, or layer 6, establishes context between disparate application layer protocols. Effectively, the presentation layer adjusts syntax, semantics, data types, data formats, etc. This layer ensures that data sent by an application is compatible with the lower layers of network communication, and that data received from the network is acceptable to the receiving software.

The session layer, or layer 5, manages the connections between computers. Connection management includes establishing, maintaining, and terminating the links between networked systems. This layer provides for full-duplex, half-duplex, and simplex communications (i.e., two-way simultaneous, two-way one-way-at-a-time, and one direction only). This layer also provides checkpoints or verification of delivered data, link recovery or re-establishment, and a graceful disconnect process.

The transport layer, or layer 4, manages the integrity of a connection and controls the connection (at least as far as the connection or session is not being managed by the session layer). This layer potentially manages multiple connections simultaneously (often using a logical addressing scheme of ports on top of the network layer logical addresses). This layer defines the boundaries or rules of a connection, such as the size of data in each PDU, segmenting, sequencing, error checking, and how to determine if a PDU has been lost (or was not delivered). Many of these rules are defined uniquely for each connection during a session establishment handshake process. Two protocols commonly recognized as operating in this layer are TCP and User Datagram Protocol (UDP).
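Port-based addressing is easy to see in practice. The short Python sketch below sends a single UDP datagram between two sockets on the same machine; the (IP address, port) pair is the transport-layer address:

```python
import socket

# A receiving socket; port 0 asks the OS to pick any free port
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))
port = receiver.getsockname()[1]

# UDP is connectionless: no handshake, just send the datagram
sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"hello, layer 4", ("127.0.0.1", port))

data, addr = receiver.recvfrom(1024)
print(f"received {data!r} from {addr}")

sender.close()
receiver.close()
```

TCP, by contrast, would require a three-way handshake before any data moved, which is exactly the session-establishment negotiation described above.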

The network layer, or layer 3, provides for logical addressing and the management of communications between devices via a service known as routing. While the network layer provides routing services and attempts to deliver messages successfully, it does not guarantee delivery. This layer also includes error detection features. Internet Protocol (IP) is the most recognized protocol that operates at this layer. Currently, IPv4 is the most widely used version; however, IPv6 is quickly gaining in popularity. IPv6 got its official global Internet kick-off on June 6, 2012.

The OSI model also assigns the task of fragmentation to this layer; however, fragmentation is an often-abused feature of network layer protocols. For this reason, in-transit fragmentation by routers is not supported in IPv6 (only the sending host may fragment), and it is generally blocked or filtered by firewalls on IPv4 connections.

The data link layer, or layer 2, formats the PDUs received from the network layer into the proper container
for network transmission. On most networks, this proper container is the Ethernet frame. This layer also takes advantage of the hardware-assigned address of the physical interface card. This address is known as the media access control (MAC) address, or hardware address, or physical address. The most common standard technologies in use at the data link layer are Ethernet (IEEE 802.3) and Wireless (IEEE 802.11). The Address Resolution Protocol (ARP) is used at this layer (or between Layer 2 and Layer 3) to convert the destination IP address (logical) into a destination MAC address (physical).

The physical layer, or layer 1, is the interface between the logical software of the network protocol and the hardware network interface card. It is at this layer that the conversion from the binary data of the layer 2 PDU occurs into the transmission technology encoding of the message bits, such as voltage variations, light pulses, or radio wave modulations. The devices at this layer manage transmission and reception of the bits, as well as physical level synchronization, error detection, noise management, and contention media management.

Finally, whilst the OSI model is the most widely used framework for comparing and contrasting networking concepts, there is a model derived directly from the most widely used protocol. The TCP/IP model was crafted directly from the TCP/IP protocol stack, and thus is a truer representation of the functions and operations of network protocols.

The TCP/IP model consists of only 4 layers and can be roughly mapped back to the OSI model for backwards referencing. The 4 layers of the TCP/IP model can be mapped to the OSI as follows:
• 4 - Process - OSI layers 5, 6, 7
• 3 - Host-to-host - OSI layer 4
• 2 - Internetwork - OSI layer 3
• 1 - Link - OSI layers 1, 2

In spite of this being a more realistic and real-world model, it has not been adopted as a reference standard, or at least not to the extent of the OSI model. In most cases, layer references are still pointing to the OSI model.

If all of this has confused you, or quite frankly bored you, then check out 'Eli the Computer Guy' to demystify the OSI model. :)

Monday 10 June 2013

Has Adobe Got Back In the Game After the Death of Flash?

So have Adobe rocked back onto the scene with this, the supposed 'next generation of the online video experience'?

Well, let's take a look back at Adobe, what happened with Flash and why it was pulled. Essentially, the mobile world was the death of Flash. HTML5 was widely supported, in fact almost universally supported, in mobile browsers, and Adobe realized that Flash would never get there.

Apps made browser-based apps less necessary. "Essentially, users’ preferences to consume rich content on mobile devices via applications means that there is not as much need or demand for the Flash Player on mobile devices as there is on the desktop." - Adobe's chief of developer relations Mike Chambers.

Finally, the mobile platforms were too fragmented on a global scale. To make Flash work on them, Adobe had to work with multiple hardware makers (Motorola, Samsung), platform companies (Google, RIM), and component manufacturers (like Nvidia). That took too much time. "This is something that we realized is simply not scalable or sustainable." - Adobe's chief of developer relations Mike Chambers.

Read more: http://www.businessinsider.com/adobe-engineer-heres-why-we-killed-flash-for-mobile-2011-11#ixzz2VrDiYXDG


So Flash is dead. Long live ......... Adobe Primetime!

Hang on, we did not mention it above, but the number one killer of Flash was Apple and its stance of not supporting Flash on any Apple device. Remember those frustrating days when you couldn't watch a clip on a website because it needed the Flash player! Did Apple kill Flash, or was it Steve Jobs himself? Even if HTML5 would have killed Flash in the long run, Jobs certainly accelerated its demise.

So if Apple blocked Flash, what is going to happen to Primetime? Great question; basically, Tim Cook is not Steve Jobs. So we are now faced with Primetime coming to iOS and Android in the coming months.

What is Adobe Primetime?

Adobe Primetime was built from the ground up, combining many different technologies into one and leveraging existing partner solutions, so that broadcasters can deliver a robust, rich live streaming or HD video-on-demand experience to global audiences. The video industry is undergoing a shift towards standardization and interoperability, and it is that interoperability that will catapult the volume of video available to consumers.


On top of this, Adobe’s Project Primetime is an ambitious attempt to bring broadcast TV into the multi-platform Internet age, and the company has announced several new additions to it: a MediaWeaver ad insertion tool and a new Primetime Media Player for delivering content.

“The introduction of Adobe MediaWeaver and Primetime Media Player mark a significant step in making broadcast TV content and ads work seamlessly online as more content is consumed across devices. This is a massive opportunity for the industry, and we are working closely with leading TV content owners and distributors to better deliver and monetize broadcast content across all major platforms,” said Jeremy Helfand, vice president of monetization, Adobe.

A beta of the complete Project Primetime solution is available now as an SDK for Windows, OS X, Android and iOS. Adobe first announced the project in February. The suite of tools plays a key role in the TV Everywhere authentication solution that companies like HBO and Showtime have been using.

Here is Jeremy to explain all about Primetime




At NAB 2013, Adobe collected the 'Best of NAB 2013' award. Here is Adobe VP of Video Solutions Jeremy Helfand (again!)






H.265..... the next generation

H.265, or rather High Efficiency Video Coding (HEVC), is picking up the pace. It has been in development since 2004.

What does HEVC mean for you and me? Well, this codec paves the way for higher resolutions and the ability to stream that material fairly reasonably on mobile.

Samsung's newly launched Galaxy S4 promises to support HEVC. Expect more and more devices supporting it to go on sale over the coming months.




What is HEVC exactly??? 

Well, if you have been ripping your Blu-ray or DVD collection in years gone by, you will probably have used the H.264 codec, also known as AVC (Advanced Video Coding), typically in an MP4 container. HEVC is simply the next step: H.265 will be the next codec to use.

The ITU approved the standard in April 2013. The ITU oversees the 'H' series of standards, partnering with the ISO/IEC Moving Picture Experts Group (MPEG). Both the ITU and MPEG groups have given their approval.

Recommendation ITU-T H.265 represents an evolution of the existing video coding Recommendations (ITU-T H.261, ITU-T H.262, ITU-T H.263 and ITU-T H.264) and was developed in response to the growing need for higher compression of moving pictures for various applications such as Internet streaming, communication, videoconferencing, digital storage media and television broadcasting. It is also designed to enable the use of the coded video representation in a flexible manner for a wide variety of network environments. The use of this Recommendation | International Standard allows motion video to be manipulated as a form of computer data and to be stored on various storage media, transmitted and received over existing and future networks and distributed on existing and future broadcasting channels.

You can find the entire ITU PDF on H.265 at http://www.itu.int/rec/T-REC-H.265-201304-I/en

So now manufacturers will start to make products that support the HEVC standard.


The bit rate savings and increase in picture quality are outstanding. Just take a look at the history from MPEG-2 in 1994.
Back in 2010, a committee known as the JCT-VC (Joint Collaborative Team on Video Coding), which represented both MPEG and ITU-T members, set about achieving the following brief:

H.265 has to deliver a picture of the same perceived visual quality as H.264 but using only half the transmitted volume of data, and therefore half the bandwidth.
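In plain numbers, that brief looks like this. The H.264 bitrates below are illustrative figures of my own, not values from the standard:

```python
# Same perceived quality, half the bits: the JCT-VC brief in numbers
h264_bitrates_mbps = {"720p": 5.0, "1080p": 8.0, "Ultra HD": 32.0}

for resolution, h264 in h264_bitrates_mbps.items():
    print(f"{resolution}: H.264 {h264} Mbps -> H.265 ~{h264 / 2} Mbps")
```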

Shut up already here comes the Science.......


Like H.264 before it (and all video standards from H.261 onwards), HEVC is based on the same notion of spotting motion-induced differences between frames, and finding near-identical areas within a single frame. These similarities are subtracted from subsequent frames, and whatever is left in each partial frame is mathematically transformed to reduce the amount of data needed to store it.


Source: Elemental Technologies

When an H.264 frame is encoded, it’s divided into a grid of squares, known as "macroblocks" in the jargon. H.264 macroblocks were no bigger than 16 x 16 pixels, but that’s arguably too small a size for HD imagery and certainly for Ultra HD pictures, so H.265 allows block sizes to be set at up to 64 x 64 pixels, the better to detect finer differences between two given blocks.
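Quick arithmetic shows why the bigger blocks matter. This Python sketch (my own) counts how many blocks it takes to tile a frame at each size:

```python
def blocks_per_frame(width, height, block):
    """Count blocks needed to tile a frame (partial edge blocks included)."""
    return -(-width // block) * -(-height // block)  # ceiling division

for name, (w, h) in {"1080p": (1920, 1080), "Ultra HD": (3840, 2160)}.items():
    print(name,
          "| 16x16 macroblocks:", blocks_per_frame(w, h, 16),
          "| 64x64 blocks:", blocks_per_frame(w, h, 64))
# 1080p: 8160 vs 510; Ultra HD: 32400 vs 2040
```

Fewer, larger blocks mean less per-block signalling overhead in smooth areas, while the encoder can still subdivide where the detail demands it.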

H.265 goes further.... H.264 encoders check intra picture similarities in eight possible directions from the source block. H.265 extends this to 33 possible vectors - the better to reveal more subtle pixel block movement. There are also other tricks that H.265 gains to predict more accurately where to duplicate a given block to generate the final frame.

To read more about H.265, I suggest the report published by Tony Smith at The Register. It makes for a good read over a cup of tea: The Register







Thursday 6 June 2013

ASUS Ultra HD



Would you believe me if I said the above picture is actually a still taken from footage of a 39-inch ASUS prototype Ultra HD screen? No? Well, you should, because it is.

4K is coming, folks, and there is nothing you can do to stop it. 3D has been and gone; the fad is over. HD needs to be improved upon, so we all welcome 4K, although it may be shorter-lived than HD, as people are already talking about 8K.

But for now, we are seeing manufacturers start to produce products throughout the broadcast chain with 4K in mind. This latest consumer unit, albeit a prototype from ASUS, is a sign of things to come.

The picture quality is outstanding. Thanks to Manuel Ángel Méndez, we can view his HD YouTube clip showing off the stunning ASUS Ultra HD.



Spotted at the ASUS Computex booth, the huge 39-inch screen packs a resolution of 3,840 x 2,160 pixels. However, it's still at the prototype stage and will not be ready until early next year.

Plus the BBC and Sony will be recording and broadcasting Wimbledon this year in 4K, albeit just a test.

In other 4K news Samsung has announced a new range of 4K TVs that will be launched in South Korea next month.

A 55-inch model will be available for about £3715, US $5659 and the 65-inch version for about £5166, US $7869.

What is 4K exactly?

OK, for everyone who is still slightly confused as to what 4K is, here is a brief overview.

4K will sometimes be referred to as 'Ultra HD'. 4K is 4 times the resolution of HD.

HD, or rather Full HD 1080p, is 1080 pixels in height with progressive scan. Assuming a 16:9 aspect ratio, that gives 1920 pixels in width, or 1920x1080.

Now 4K has a resolution of 3840 pixels in width (2x1920) and a height of 2160 pixels (2x1080).
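The pixel maths is easy to check for yourself:

```python
# Double the width and double the height = four times the pixels
full_hd = 1920 * 1080   # 2,073,600 pixels
ultra_hd = 3840 * 2160  # 8,294,400 pixels

print(ultra_hd / full_hd)  # 4.0
```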

Wednesday 5 June 2013

Windows Blue

Windows Blue, as it has been codenamed, is Microsoft's first major upgrade to the Windows 8 desktop operating system.

The launch of 8.1 will mark a change in strategy for Microsoft. Instead of releasing new editions of Windows after lengthy development cycles, going forward Redmond is expected to refresh its OS on a yearly basis.

Rather than the usual small incremental changes that you and I sometimes see with OS upgrades, Microsoft clearly had their serious hat on when developing the Windows 8.1 update. In addition to the expected and some surprising amendments, the Redmond-based company has also taken the opportunity to update a number of their native apps as well as introduce some new apps with the 8.1 update that will definitely be well received by users. Or will they?


Here is Mr Jensen Harris from the Windows User Experience team to provide a preview and explain more about Windows Blue ... or 8.1.



Here are some changes that we can expect.

You like them bigger? Or would you like them smaller?

End users will now have the option to make tiles four times larger or four times smaller.

Control the look and feel.

Not a major change really, but you will be able to change the look of the Start Screen simply by invoking a toolbar that appears over its right side, from which you can choose background designs and color schemes. Selections you make are applied instantly to the Start Screen behind this toolbar. (In Windows 8, customizing the Start Screen takes you to a separate configuration page under PC Settings, where you cannot see changes to your actual Start Screen in real time.)

Support for ReFS

ReFS is a file system that Microsoft introduced in 2012. It is built upon, and compatible with, NTFS, the format now commonly used for Windows systems. Windows Blue is the first appearance of this new file system in a client-side version of Windows, where you can choose to format a hard drive with ReFS.

5 new default Apps

Windows Blue comes with five new apps pinned to the Start Screen: Alarms, Calculate, Files, Movie Moments and Sound Recorder. Alarms can be used as an alarm clock, countdown timer or stopwatch. Calculate features scientific and standard calculator interfaces. Files is a simple file manager for accessing files stored on your system's local drive. Sound Recorder is a very bare-bones audio recording app. Movie Moments allows you to load video footage and then trim both the start and end points of your clip.

Display up to 4 apps on screen at once side-by-side

In Windows 8, you can display two apps alongside one another, each getting its own window column, in a mode Microsoft calls SnapView. In Windows Blue, you are given more leeway when you move the borderline dividing the two, to resize the area one app takes up on the screen relative to the other. On larger, higher-resolution displays, Windows Blue also raises the limit, letting you snap up to four apps side by side at once.

Assigned Access

With Assigned Access, or kiosk mode, you can set Windows Blue to automatically run a single Windows 8 app after your device starts up and a particular user account has logged in.

Automatic Web searches and results within the Search Charm toolbar

Under Apps in PC Settings, there is an option that when activated will automatically search the Web when you type keywords into the Search Charm, and filter adult content on the Web as it does this.

More things to sync

The sync settings have been relocated under the Users category in PC Settings, and there is a new option to sync your installed apps and Start Screen customization across multiple Windows 8.1 devices that share your user account. You will also be able to sync your tab and tracking-protection configuration in Internet Explorer 11, as well as a long list of other things: app secondary tiles, Bluetooth device associations, Explorer quick links, file history, input personalization, picture password, and tethering.

Snap a photo from the lock screen

Under PC Settings for the lock screen, there is a switch you can turn on to allow you to take a photo with your computer’s webcam or tablet camera by dragging down on the lock screen.




Tuesday 4 June 2013

Netflix Streamlined


Netflix, working with Harmonic, has developed a set of predefined transcoding presets for Harmonic's ProMedia Carbon file-based transcoder. The presets prep video and audio into media formats optimized for Netflix.

Netflix ended the first quarter of 2013 with 27.17 million U.S. streaming subscribers. This new capability creates predefined transcoding presets for Netflix in standard- and high-definition MPEG-2 I-frame video formats at various frame rates, as well as stereo and 5.1 surround sound audio.

Netflix Director of content partner operations Christopher Fetner stated, "With its ability to store and recall parameters that assure transcoded content meets our stringent quality standards, Harmonic's ProMedia Carbon plays a significant role in the digital supply chains of our content and services partners." 

He also stated "Collaborating with Harmonic on the new transcoding presets for ProMedia Carbon, we're able to ensure content operators have greater ease when delivering to Netflix."

Yoav Derazon, Harmonic’s director of product management for cloud services and transcoding stated,
"Service providers today receive content from a wide variety of sources in many different formats. The challenge is to align the format and quality of that content, making it ready for distribution through Netflix, as well as other over-the-top (OTT) video providers".

The deal with Netflix comes soon after Harmonic identified multiscreen video, alongside High-Efficiency Video Coding (HEVC) and the budding Converged Cable Access Platform (CCAP), as its core business growth drivers.

Source: Harmonic news



Monday 3 June 2013

Cut out the middle man...

BitTorrent Sync Alpha. Ever heard of it?

No!


You're missing out. BitTorrent Sync uses P2P technology. This protocol is very effective for transferring large files across multiple devices, and is very similar to the powerful protocol used by applications like µTorrent and BitTorrent. The data is transferred in pieces from each of the syncing devices, and BitTorrent Sync chooses the optimal algorithm to make sure you get maximum download and upload speed during the process.

The devices you set up to sync are connected directly using UDP, NAT traversal and UPnP port mapping.


Peer 2 Peer
Sounds good... It gets better. What about security?

BitTorrent Sync was designed with privacy and security in mind. All traffic between devices is encrypted with an AES cipher using a 256-bit key derived from the secret: a random string (20 bytes or more) that is unique to every folder.

There are no 3rd party servers involved when syncing your files. All the files are stored only on your trusted devices, controlled and managed solely by you.

Is it better than Dropbox, SkyDrive or Google sync???

Ah, great question. Well, in short, yes.

If you think about it, all of the above are only as secure as the password on the account. Each Dropbox or SkyDrive account has a username and password combo. As soon as someone gets hold of your password (easy if you are an idiot who uses the same password on every site or application!!!), you are stuffed. Someone now has access to all your photos, films, docs, everything.

BitTorrent Sync not only uses an AES cipher with a 256-bit key, it also has what is known as a 'secret', which we mentioned above. But what is the secret?

The secret is a randomly generated 21-byte key. It is Base32-encoded in order to be readable by humans. BitTorrent Sync uses /dev/random (Mac, Linux) and the Crypto API (Windows) in order to produce a completely random string. This authentication approach is significantly stronger than the login/password combination used by other services. That's why using a secret generated by BitTorrent Sync is very safe and secure.
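As a sketch, here is how you could generate a secret of the same shape in Python: 21 bytes from the OS's CSPRNG, Base32-encoded for humans. The final AES-key step is my own illustration only; BitTorrent has not published the exact derivation it uses.

```python
import base64
import hashlib
import os

# 21 random bytes from the OS CSPRNG (cf. /dev/random, Windows Crypto API)
secret_bytes = os.urandom(21)

# Base32 encoding makes the secret readable and shareable by humans
secret = base64.b32encode(secret_bytes).decode().rstrip("=")
print("secret:", secret)

# 168 bits of randomness vs ~50 bits for a typical 8-character password
print("entropy:", len(secret_bytes) * 8, "bits")

# Hypothetical key derivation: hash the secret down to a 256-bit AES key
aes_key = hashlib.sha256(secret.encode()).digest()
print("AES key:", len(aes_key) * 8, "bits")
```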

So, can other BitTorrent users see my shared files?

No. BitTorrent Sync is based on BitTorrent protocol, but all the traffic is encrypted using a private key derived from the shared secret. Your files can be viewed and received only by the people with whom you share your private secret.

How soon does synchronization start?
When a file is added to the shared folder, the changes start syncing immediately (due to system peculiarities, sync on Mac OS X 10.6 may be delayed up to 10 minutes). If you change a file inside a shared folder, sync will start after the file is saved and/or closed. 

What happens if a file is deleted on one of the devices?
Files deleted from a sync folder on your computer are handled depending on your OS preferences (moved to Trash/Recycle Bin/similar folders or deleted completely). On the other syncing devices these files will be moved to the ‘.SyncTrash’ in their sync folders (‘.SyncTrash’ is hidden by default).

What if several people make changes to the same file?
When a file is changed on one of the devices, it will be recreated as a new copy and synced to the other devices. BitTorrent Sync saves only the latest version of the file.

What if files with same names are added from different computers?
We give human action first priority and always consider it right. That's why, if several files with the same name are added on different devices, BitTorrent Sync will synchronize the file that was added to BitTorrent Sync most recently, even if it is not the newest version of the file itself. Previously added files will be deleted, but you can find them in .SyncTrash (if enabled in folder preferences).

What happens when I remove a folder from BitTorrent Sync?
If a folder is removed from BitTorrent Sync, all the synced files stay where they are; incomplete files with the .!sync extension will be deleted.

To find out more visit http://labs.bittorrent.com/experiments/sync.html

The guys at Tekzilla review BitTorrent Sync brilliantly!


All comments gratefully received.
