Telecom trends for 2015

In a few years there will be close to 4 billion smartphones on earth. Ericsson’s annual mobility report forecasts increasing mobile subscriptions and connections through 2020 (9.5 billion smartphone subscriptions by 2020 and an eight-fold traffic increase). The report also expects that by 2020, 90% of the world’s population over six years old will have a phone. In short, it describes a connected world where everyone will have a connection one way or another.

What about the phone systems in use? Today the majority of the world operates on GSM and HSPA (3G). Some countries are starting to have good 4G (LTE) coverage, but on average only about 20 per cent of the world is covered by LTE. 4G/LTE small cells will grow at twice the rate of 3G and surpass both 2G and 3G in 2016.

Ericsson expects that 85% of mobile subscriptions in the Asia Pacific, the Middle East, and Africa will be 3G or 4G by 2020. 75%-80% of North America and Western Europe are expected to be using LTE by 2020. China is by far the biggest smartphone market by current users in the world, and it is rapidly moving into high-speed 4G technology.

Sales of mobile broadband routers and mobile broadband “USB sticks” are expected to continue to drop. In 2013 some 87 million of those devices were sold, and in 2014 sales dropped another 24 per cent. China’s Huawei is the market leader (45%), so it has the most to lose here.

Small cell backhaul market is expected to grow. ABI Research believes 2015 will now witness meaningful small cell deployments. Millimeter wave technology—thanks to its large bandwidth and NLOS capability—is the fastest growing technology. 4G/LTE small cell solutions will again drive most of the microwave, millimeter wave, and sub 6GHz backhaul growth in metropolitan, urban, and suburban areas. Sub 6GHz technology will capture the largest share of small cell backhaul “last mile” links.

Technology for full-duplex operation on a single radio frequency has been designed. The new practical circuit, known as a circulator, lets a radio send and receive data simultaneously over the same frequency and could supercharge wireless data transfer. The design avoids magnets and uses only conventional circuit components. A radio wave circulator used in wireless communications could double the bandwidth by enabling full-duplex operation, i.e. devices can send and receive signals in the same frequency band simultaneously. Let’s wait and see whether this technology turns out to be practical.

Broadband connections are finally more popular than traditional wired telephone lines: in the EU, by the end of 2014, fixed broadband subscriptions will outnumber traditional circuit-switched fixed lines for the first time.

After six years in the dark, Europe’s telecoms providers see a light at the end of the tunnel. According to a new report commissioned by industry body ETNO, the sector should return to growth in 2016. The projected growth for 2016, however, is small – just 1 per cent.

With headwinds and tailwinds, how high will the cabling market fly? Cabling for enterprise local area networks (LANs) experienced growth of between 1 and 2 percent in 2013, while cabling for data centers grew 3.5 percent, according to BSRIA, for a total global growth of 2 percent. The structured cabling market is facing a turbulent time. Structured cabling in data centers continues to move toward the use of fiber. The number of smaller data centers that will use copper will decline.

Businesses will increasingly shift from buying IT products to purchasing infrastructure-as-a-service and software-as-a-service. Both trends will increase the need for processing and storage capacity in data centers, and we will also need fast connections to those data centers. This will cause significant growth in WiFi traffic, which will mean more structured cabling used to wire access points. Convergence will also mean more cabling for Internet Protocol (IP) cameras, building management systems, access control and other applications, and could mean a decrease in the installation of separate special-purpose cabling for those applications.

The future of your data center network is a moving target, but one thing is certain: it will be faster. Four developments in this field are: 40GBase-T, Category 8, 32G and 128G Fibre Channel, and 400GbE.

Ethernet will increasingly move away from the 10/100/1000 speed series as proposals for new speeds push in. The move beyond gigabit Ethernet is gathering pace, with a cluster of vendors gathering around the IEEE standards effort to help bring 2.5 Gbps and 5 Gbps speeds to the ubiquitous Cat 5e cable. With the IEEE standardisation process under way, the MGBase-T alliance represents the industry’s effort to accelerate the adoption of 2.5 Gbps and 5 Gbps speeds for connections to fast WLAN access points. Intense attention is being paid to the development of 25 Gigabit Ethernet (25GbE) and next-generation Ethernet access networks. Development of 40GBase-T is also under way.

Cat 5e vs. Cat 6 vs. Cat 6A – which should you choose? Stop installing Cat 5e cable. “I recommend that you install Cat 6 at a minimum today”. The cable will last much longer and support higher speeds that Cat 5e just cannot support. Category 8 cabling is coming to data centers to support 40GBase-T.

A Power over Ethernet plugfest is planned for 2015 to test Power over Ethernet products. The plugfest will focus on the IEEE 802.3af and 802.3at standards relevant to IP cameras, wireless access points, automation, and other applications. It will test participants’ devices against the respective IEEE 802.3 PoE specifications, which distinguish IEEE 802.3-based devices from non-standards-based PoE solutions.
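The difference between the two standards being tested comes down to power budget. A minimal sketch, using the power figures from the 802.3af and 802.3at standards themselves (the function and table names here are illustrative, not from any real library):

```python
# (PSE output watts, watts available at the powered device) per PoE standard
POE_BUDGETS = {
    "802.3af": (15.4, 12.95),  # Type 1, the original "PoE"
    "802.3at": (30.0, 25.50),  # Type 2, "PoE+"
}

def minimum_standard(device_watts):
    """Return the lowest PoE standard whose PD budget covers the device draw."""
    for name, (_pse_w, pd_w) in sorted(POE_BUDGETS.items(),
                                       key=lambda kv: kv[1][1]):
        if device_watts <= pd_w:
            return name
    return None  # needs more power than 802.3at can deliver

print(minimum_standard(6.5))   # a typical IP camera -> 802.3af
print(minimum_standard(20.0))  # a high-power access point -> 802.3at
```

The gap between the PSE and PD figures is the power lost in the cable, which is exactly the kind of behaviour a plugfest verifies across vendors.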

Gartner expects that wired Ethernet will start to lose its position in the office in 2015 or within a few years after that, because Internet use is shifting mainly to smartphones and tablets. The change is significant, because it will break Ethernet’s long reign in the office. Consumer devices have already moved to wireless, and now it is the office’s turn. Many factors speak in favor of the mobile office. Research predicts that by 2018, 40 per cent of enterprises and other organizations will make WLAN the default for their devices. Current workstations, desk phones, projectors and the like will therefore move to wireless. Expect the wireless LAN equipment market to accelerate in 2015 as spending by service providers and education comes back, 802.11ac reaches critical mass, and Wave 2 products enter the market.

Scalable and secure device management for telecom, network, SDN/NFV and IoT devices will become a standard feature. Whether you are building a high-end router or deploying an IoT sensor network, a device management framework with support for new standards such as NETCONF/YANG and web technologies such as Representational State Transfer (REST) is fast becoming a standard requirement. Next-generation device management frameworks can provide substantial advantages over legacy SNMP and proprietary frameworks.
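To make the NETCONF side of this concrete, here is a minimal sketch that only builds the standard `<get-config>` RPC payload using the NETCONF base namespace from RFC 6241. A real client would send this over an SSH session (for example with a library such as ncclient); the function name and defaults here are illustrative:

```python
import xml.etree.ElementTree as ET

NETCONF_NS = "urn:ietf:params:xml:ns:netconf:base:1.0"

def build_get_config(message_id="101", datastore="running"):
    """Build the XML for a NETCONF <get-config> RPC against a datastore."""
    rpc = ET.Element("rpc", {"xmlns": NETCONF_NS, "message-id": message_id})
    get_config = ET.SubElement(rpc, "get-config")
    source = ET.SubElement(get_config, "source")
    ET.SubElement(source, datastore)  # e.g. <running/>
    return ET.tostring(rpc, encoding="unicode")

print(build_get_config())
```

The appeal over SNMP is visible even in this fragment: the request is a structured document validated against a YANG model, rather than a walk over numeric OIDs.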

 

U.S. regulators resumed consideration of mergers proposed by Comcast Corp. and AT&T Inc., suggesting a decision as early as March: Comcast’s $45.2 billion proposed purchase of Time Warner Cable Inc and AT&T’s proposed $48.5 billion acquisition of DirecTV.

There will be changes in the management of global DNS. The U.S. is in the midst of handing over its oversight of ICANN to an international consortium in 2015. The National Telecommunications and Information Administration, which oversees ICANN, assured people that the handover would not disrupt the Internet as the public has come to know it. Discussion is ongoing about what can replace the US government’s current role as IANA contract holder. IANA is the technical body that runs things like the global domain-name system and allocates blocks of IP addresses. Whoever controls it controls the behind-the-scenes of the internet; today that’s ICANN, under contract with the US government, but that agreement runs out in September 2015.

 

1,044 Comments

  1. Tomi Engdahl says:

    Brian Fung / Washington Post:
    FCC says it will review a draft of net neutrality rules and vote on them by the end of February

    Get ready: The FCC says it will vote on net neutrality in February
    http://www.washingtonpost.com/blogs/the-switch/wp/2015/01/02/get-ready-the-fcc-says-itll-vote-on-net-neutrality-in-february/

    Federal regulators looking to place restrictions on Internet providers will introduce and vote on new proposed net neutrality rules in February, Federal Communications Commission officials said Friday.

    It’s still unclear what rules Wheeler has in mind for Internet providers.

    Reply
  2. Tomi Engdahl says:

    PAM4 takes the spotlight at DesignCon 2015
    http://www.edn.com/electronics-blogs/eye-on-standards/4438092/PAM4-takes-the-spotlight-at-DesignCon-2015?_mc=NL_EDN_EDT_EDN_today_20150106&cid=NL_EDN_EDT_EDN_today_20150106&elq=6db3b65852a342ad98aa923f6efbb3d2&elqCampaignId=21020

    With 25+ Gbit/s systems being deployed and 50+ Gbit/s on the white board, the prospects are grim for good old, digital-looking NRZ (non-return to zero) electrical signaling.

    People have preached the PAM4 (4-level pulse amplitude modulation) gospel at #DesignCon for years, some with evangelical passion, but the idea has stayed mostly on the development bench. As long as we can get away with NRZ—even if we have to use complex equalization methods and insert FEC (forward error correction) into the gearbox—we’ll keep using it. But now, as we face the realities of 56 Gbit/s signals on standard, legacy PCB (printed circuit board), PAM4 looks inevitable.
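The idea behind PAM4 can be shown in a few lines. This illustrative sketch (not from the article) maps bit pairs to the four amplitude levels using the Gray mapping commonly chosen so that adjacent levels differ by only one bit:

```python
# Gray-coded bit-pair -> PAM4 level mapping (adjacent levels differ by 1 bit)
GRAY_PAM4 = {(0, 0): -3, (0, 1): -1, (1, 1): +1, (1, 0): +3}

def pam4_encode(bits):
    """Encode an even-length bit sequence as PAM4 symbols, 2 bits per symbol."""
    assert len(bits) % 2 == 0, "PAM4 carries 2 bits per symbol"
    return [GRAY_PAM4[(bits[i], bits[i + 1])] for i in range(0, len(bits), 2)]

# Each symbol period now carries two bits, so for a given bit rate PAM4
# halves the symbol rate (and thus the required channel bandwidth) vs NRZ.
print(pam4_encode([0, 0, 0, 1, 1, 1, 1, 0]))  # -> [-3, -1, 1, 3]
```

That halved symbol rate is exactly why PAM4 becomes attractive at 56 Gbit/s on legacy PCB, at the cost of a reduced eye opening between the four levels.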

    Reply
  3. Tomi Engdahl says:

    WAM, bam, thank you QAM
    MagnaCom uncloaks high capacity modulation at CES
    http://www.theregister.co.uk/2015/01/07/wam_bam_thank_you_qam/

    CES 2015 Startup MagnaCom is using CES to pitch a technology it reckons offers wireless comms an attractive combination of better spectral efficiency and higher capacity.

    Those claims are based on what the company calls WAM, which it’s pitching as a possible replacement for the ubiquitous QAM-based modulation.

    QAM is a standard modulation scheme in which amplitude and phase are combined to represent as many as 256 states. However, as the number of bits increases, the signal-to-noise ratio deteriorates, capping the capacity of QAM schemes.

    MagnaCom reckons it’s created a modulation scheme that’s both backwards-compatible with QAM, but offers significantly higher spectral efficiency.

    Its specific claim at CES is that it’s demonstrating a QAM-16384-equivalent 14-bit modulation scheme called WAM.

    “The reason QAM is two-dimensional is because there’s an underlying requirement that you use a linear amplifier,”

    “Eliminating the need for complete orthogonality … can only be done with a system that does not mandate linearity.”

    In getting its technology to market, Cohen said, MagnaCom is looking at working with standards bodies like the IEEE and 3GPP, and is part of the 5G standardisation effort.

    “What we are showing [at CES] is a 10dB system gain – that can be translated into significantly longer distances”

    Reinventing the Evolution of Digital Communications
    http://www.magna-com.com/technology/
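As a quick sanity check on the constellation sizes quoted above: a QAM constellation with M states carries log2(M) bits per symbol, which is where the “14-bit” figure for a QAM-16384 equivalent comes from.

```python
from math import log2

def bits_per_symbol(m_states):
    """Bits carried per symbol by an M-state constellation."""
    return int(log2(m_states))

print(bits_per_symbol(256))    # 256-QAM            -> 8 bits/symbol
print(bits_per_symbol(16384))  # QAM-16384 (WAM eq.) -> 14 bits/symbol
```

Each extra bit per symbol doubles the number of states and shrinks the distance between them, which is why the article notes that signal-to-noise requirements cap practical QAM orders.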

    Reply
  4. Tomi Engdahl says:

    If 4G isn’t working, why stick to the same approach for 5G?
    Talkin’ ’bout an evolution…
    http://www.theregister.co.uk/2014/12/17/if_4g_isnt_working_why_stick_to_the_same_approach_for_5g/

    The start of 2015 is sure to bring an even greater intensity of interest in what “5G” might be. After all, some operators are saying they will start commercial deployments in 2020, and a five-year time-scale is short enough to induce panic attacks.

    LTE has set a strong precedent here. The specifications that became LTE were first proposed in 2004 and official studies began in 2005, and TeliaSonera switched on the world’s first commercial network right at the end of 2009. But the five-year cycle ahead will be very different from that of 2004-9.

    It was already known, before NTT Docomo and others made their proposals, roughly what LTE would be. It would be an air interface upgrade with more complex RAN and packet core elements than earlier standards, and with a new modulation scheme – but nevertheless, a network evolution of a familiar kind. It would be rolled out in a similar way to 3G and would address familiar targets such as improved speed and latency.

    In the run-up to 5G, all the old certainties have dissolved. Will 5G need a new air interface at all, or should it have several? Is it more important to slash power consumption than push further increases in data rate? Will networks be deployed in a remotely recognisable way after 2020 and will there be any spectrum left for such deployments?

    None of those questions can wait until 2020 to be answered, or at least addressed. Already, 4G is showing its limitations for the business models of the modern carrier.

    Reply
  5. Tomi Engdahl says:

    Verizon wants to sell ‘antiquated’ copper assets, stick to wireless for voice
    Death knell for wired xDSL
    http://www.theregister.co.uk/2015/01/07/verizon_wants_to_sell_antiquated_copper_assets/

    Verizon’s copper networks are now so old it says wireless is a better option for voice and low-speed data services.

    Chairman and CEO Lowell McAdam told a Citi briefing that along with looking for opportunities to offload some of its copper assets, part of the company’s strategy to retire its copper is wireless.

    “We’re moving a lot of customers off copper onto wireless, especially for voice services and lower speed DSL”, McAdam said, adding that it delivers “frankly better” services than “antiquated copper”.

    “There are certain assets on the wireline side that we think would be better off in somebody else’s hands”, he said, a move that would let Verizon concentrate its geographic focus.

    Reply
  6. Tomi Engdahl says:

    Jon Brodkin / Ars Technica:
    Broadcom announces gigabit-enabled cable modem chip, Comcast says it plans to use it this year

    Comcast says it will sell gigabit cable service this year
    Broadcom announces gigabit DOCSIS 3.1 cable chip, and Comcast plans to use it.
    http://arstechnica.com/information-technology/2015/01/comcast-says-it-will-sell-gigabit-cable-service-this-year/

    Broadcom’s cable modem system-on-a-chip relies on DOCSIS 3.1, a faster version of the Data Over Cable Service Interface Specification. “DOCSIS 3.1 is a critical technology for Comcast to provide even faster, more reliable data speeds and features such as IP video to our subscribers’ homes by harnessing more spectrum in the downstream,” Comcast Executive VP Tony Werner was quoted as saying in the Broadcom press release. “By more effectively using our cable plant to grow our total throughput, we expect to offer our customers more than 1 Gigabit speeds in their homes in 2015 and beyond.”

    Gigabit service is generally available only from fiber providers. Comcast’s fastest residential service today is 505Mbps downstream and 100Mbps upstream, but even that relies on fiber instead of cable, just as Comcast’s business offerings do. At a June 2013 industry conference, Comcast CEO Brian Roberts demonstrated a 3Gbps DOCSIS 3.1 connection.

    Broadcom said its “BCM3390 cable modem SoC delivers video content with a nearly 50 percent increased efficiency on existing spectrum allocations and allows for the delivery and use of a new range of content and services. The single device supports high-speed data rates exceeding 1Gbps. The BCM93390 modem reference design with integrated Wi-Fi provides up to 2 Gigabit speeds in the home, providing a path for cable operators to transition to all-IP video.”

    DOCSIS 3.1 “enables higher-order modulations in existing hybrid fiber-coaxial (HFC) networks without changes to the existing cable plant,”

    Reply
  7. Tomi Engdahl says:

    The Wi-Fi Alliance wants to get you off Wi-Fi
    …And onto the Wi-Fi Aware peer-to-peer pingapalooza
    http://www.theregister.co.uk/2015/01/07/the_wifi_alliance_wants_to_get_you_off_wifi/

    CES 2015 The Wi-Fi Alliance is looking to push a new platform that will allow devices to share data even when no Wi-Fi network is available.

    Dubbed Wi-Fi Aware, the platform would have devices share small snippets of data directly to enable applications like multiplayer games, without device upgrades or the need for an access point. Because the system uses existing Wi-Fi hardware to make the exchange, no new kit is needed.

    To implement the technology, software developers will need to update their applications to utilize the new system. Once activated, the Wi-Fi Aware platform will allow devices to send and receive small packets of data with user information and location.

    That data could then be used by the app to trigger user alerts or events such as accessing a site or initiating a connection with another service.

    “The device to device is really a driver for allowing nomadic applications,” Figueroa said.

    “This opens up a lot of temporary, one-off applications.”

    The Alliance also wants to spread the technology into developing regions. Figueroa said that because the system works without the need for either a mobile broadband or Wi-Fi network, it could be useful for connecting people in remote areas and distributing alerts and notifications.

    The Wi-Fi Alliance is hoping to set Wi-Fi Aware capabilities live by mid-year 2015. In the meantime it is hoping to work with developers to get their applications ready for the switch-on.

    Reply
  8. Tomi Engdahl says:

    Advancing rapid visual fiber-optic testing technology
    http://www.cablinginstall.com/articles/print/volume-22/issue-12/features/technology/advancing-rapid-visual-fiber-optic-testing-technology.html

    The most practical field test instrument for troubleshooting optical fiber links in the field is the visual fault locator (VFL), also called the visual fault finder (VFF). This article will discuss existing best field practices and technologies in the visual fiber testing category, along with other methods. It also will challenge traditional field practices and advance current technology a step with a new testing method.

    A VFL operates in the visible light range. It is used to identify individual optical fibers within a cable by sending a red light down the optical fiber. When used as a troubleshooting tool, the optical fiber strand will glow at the point of a break or separation of optical fibers.

    Units that pulse on/off are easier to use when looking for a break. These tools can also be used to detect a damaged optical fiber ferrule.

    A visual light source also is called optical fiber light-emitting diode (LED) or optical flashlight. It is used to test and troubleshoot continuity of optical fiber strands.

    The user should never look at the optical fiber strands until after personally confirming that it is disconnected from potential laser light sources. Laser light sources can cause eye damage.

    The technology called Rapid Visual Fiber Optic Cable Tester, also known as the visual fiber tester (VFT), aims to improve and simplify visual fiber-optic field testing for technicians with little experience.

    Conventional techniques test only one pair or strand at a time, while this new technology offers instant and rapid field testing of multiple pairs or strands, up to any number, simultaneously

    The VFT is a fast, clean and simple compact handheld fiber-optic test solution that enables the simultaneous pre-inspection of all strands of multi-strand fiber-optic cable terminations for defects.

    When using the VFT, results are observed at the fiber links’ far end. Specifically, a bright red dot indicates a good termination, with minimal dB loss. A link with a bright red dot is likely to pass a certification test if the connector tip is cleaned. A link with a dim red dot indicates a poor termination and high dB loss. It will fail a certification test unless the splice or connector is replaced. A link with a dark or black dot indicates a cut fiber. There is no optical continuity, and optical time-domain reflectometer (OTDR) troubleshooting is necessary to locate the break.

    VFT technology is an enhancement to the fiber certification process. It exists in concert with other, established methods of fiber inspection and testing.

    Optical time-domain reflectometer (OTDR)-An OTDR is used to characterize optical power reflected along optical fibers with a graphical signature on a display screen.

    Optical loss test set-The optical loss test set (OLTS) can include a power meter, cable analyzer or certifying tester. The principal technique is to use an optical transmitter (light source) at one end of the cable, and an optical receiver (power meter) at the cable’s other end. This technique is also known as end-to-end attenuation testing.

    Strand identifier-A clamp-on unit inserts a macrobend into the optical-fiber cable and thereby is able to detect the light escaping from the optical fiber. This device is used to detect the presence of light, as well as transmit and receive direction on singlemode and multimode optical-fiber cable.

    Fiber-optic microscope-A handheld or desktop microscope with different magnifying areas, such as 250x, 300x, or 400x, is a well-known optical instrument used to visually check the surface of fiber-optic core and cladding on terminated and polished connector ferrules.
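The dB figures these instruments report all come from the same relationship: attenuation is the ratio of launched to received optical power on a logarithmic scale. A minimal illustration (function name is ours, not from any test-set vendor):

```python
from math import log10

def attenuation_db(power_in_mw, power_out_mw):
    """End-to-end link loss in dB from launched and received optical power."""
    return 10 * log10(power_in_mw / power_out_mw)

# Losing half the light along the link is about a 3 dB loss.
print(round(attenuation_db(1.0, 0.5), 2))  # -> 3.01
```

This is why a "dim red dot" in the VFT method described above corresponds to high dB loss: visibly less power is arriving at the far end.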

    Reply
  9. Tomi Engdahl says:

    CES 2015: Linksys 1200AC, an inexpensive, open-source 802.11ac Wi-Fi router
    http://www.zdnet.com/article/ces-2015-linksys-1200ac-an-inexpensive-open-source-802-11ac-wi-fi-router/

    Summary: Want 802.11ac Wi-Fi speeds at an affordable price and with open-source software under the hood? If that’s you, then you’re going to want to check out Belkin’s Linksys 1200AC

    I love my Linksys WRT1900AC, but with a $279 price-tag, some small-office/home-office (SOHO) and small businesses are reluctant to open their wallets for it. Now, at CES, Belkin, Linksys’s parent company, is introducing a new lower-priced 802.11ac Wi-Fi router: The WRT1200AC Dual Band Gigabit Wi-Fi Router for $179.99

    For a hundred bucks less you also get less of a Wi-Fi router. Instead of a 3×3 MIMO antenna rig, the new WRT1200AC comes with 2×2 MIMO. Otherwise the pair are closer in specifications than you might expect.

    What some users will really, really like about the WRT1200AC isn’t the hardware but the firmware working with it. In partnership with Marvell, Linksys is happy to announce that the open-source Wi-Fi driver for the WRT1900AC chipset has been released to OpenWrt, the makers of one of the most popular embedded Linux firmwares.

    Still, Linksys admits, “This is an initial release, with plans to send the driver to the upstream Linux kernel after refinement. Full open source firmware is planned to be available for the WRT1200AC router at time of release.”

    Reply
  10. Tomi Engdahl says:

    WiGig Gives a Leg Up on 5G
    http://www.eetimes.com/author.asp?section_id=36&doc_id=1325198&

    Engineers can get a leg up on tomorrow’s 5G technologies by working with 60 GHz WiGig products today, says the chief marketing officer of Blu Wireless Technology.

    Wireless chip and system makers with the aim of developing solutions for the evolving 5G cellular standard should embrace WiGig to obtain a head start on the technologies that will be vital to 5G’s success.

    In contrast to its predecessors, 5G will be less a matter of creating a new cellular protocol than bringing into alignment a variety of cellular and wireless local-area networking (LAN) standards to allow them to work synergistically. The vision for 5G is to use heterogeneous networking (het-net), combining cellular and wireless LAN protocols. The combination will maximize effective bandwidth by switching between these systems based on channel availability at the local level. The het-net approach is so important to the evolution of 5G that support for it is already being built into upgrades for the existing 4G infrastructure.

    One of the options being explored for urban communications is to make use of the millimeter-wave (mm-wave) RF spectrum between 10 GHz and 100 GHz. Not only is this block of spectrum underused, it offers opportunities for channels with much higher bandwidths than are available in the sub-5 GHz region employed for 2G, 3G, and 4G. Bandwidth of up to 9 GHz has already been allocated for the 60 GHz range for use by the WiGig wireless LAN standard, making WiGig an excellent candidate for investigation of the potential of mm-wave transmission.

    A key aspect of the mm-wave region is that transmissions have shorter range than the frequencies currently used for cellular because of air and water absorption. The signals are also more sensitive to antenna alignment.
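Besides air and water absorption, plain free-space path loss also grows with frequency. A quick sketch (not from the article) using the standard Friis-derived formula shows the gap between 2.4 GHz Wi-Fi and 60 GHz WiGig at the same distance:

```python
from math import log10

def fspl_db(distance_km, freq_ghz):
    """Free-space path loss in dB: 20*log10(d_km) + 20*log10(f_GHz) + 92.45."""
    return 20 * log10(distance_km) + 20 * log10(freq_ghz) + 92.45

# Moving from 2.4 GHz to 60 GHz costs ~28 dB of extra path loss at any
# fixed distance, before absorption is even counted.
extra_loss = fspl_db(0.1, 60.0) - fspl_db(0.1, 2.4)
print(round(extra_loss, 1))  # -> 28.0
```

That ~28 dB penalty is what beam-forming with multiple antennas claws back, and it is also why short-range, densely packed base stations suit the mm-wave bands.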

    In the high-density urban environment, absorption not only becomes far less of an issue, the combination of absorption and the directivity of the signals become key advantages. Using technologies such as electronic beam-forming with multiple antennas — already a part of the WiGig standard — the signal can be localized, reducing the interference between multiple users and base stations. This, in turn, allows base stations to be situated closer together to make better use of the high-bandwidth backhaul network.

    Reply
  11. Tomi Engdahl says:

    In-Flight Service Gogo Uses Fake SSL Certificates To Throttle Streaming
    http://it.slashdot.org/story/15/01/07/2030258/in-flight-service-gogo-uses-fake-ssl-certificates-to-throttle-streaming

    In-flight internet service Gogo has defended its use of fake Google SSL certificates as a means of throttling video streaming, adding that it was not invading its customers’ privacy in doing so. The rebuttal comes after Google security researcher Adrienne Porter Felt posted a screenshot of the phoney certificate to Twitter.

    Gogo Serving Fake SSL Certificates to Block Streaming Sites
    http://www.pcmag.com/article2/0,2817,2474664,00.asp

    Mile-high Web provider Gogo appears to be running man-in-the-middle attacks on its own customers.

    Based on a report by Google engineer Adrienne Porter Felt, Gogo Inflight Internet is serving SSL certificates from Gogo instead of site providers—a big no-no in online security.

    The move could mean that passwords and other sensitive information entered while logged into the Gogo service could have been compromised.

    A member of the Google Chrome security team, Porter Felt last week tweeted a screenshot of her computer during a flight.

    Reply
  12. Tomi Engdahl says:

    4K off, Google Fiber: Comcast, Broadcom tout 2Gbps cable
    Assuming streaming sites can keep up
    http://www.theregister.co.uk/2015/01/07/4k_video_this_year_over_your_existing_connection/

    By this time next year, we should be able to stream 4K video over home cable internet connections. That’s according to Comcast, which has promised a gigabit-broadband service using a modem designed by Broadcom.

    The BCM93390 modem includes “the world’s first DOCSIS 3.1 cable modem system-on-a-chip,” referring to the new standard formally approved in October 2013 and which passed interoperability tests last month.

    Comcast will offer the modem to its customers sometime in 2015, we’re told, and “provide even faster, more reliable data speeds and features such as IP video to our subscribers’ homes by harnessing more spectrum in the downstream,” said VP Tony Werner in a canned statement.

    Broadcom has released some specs for its new modem however, including: two OFDM 196 MHz downstream channels; 32 single-carrier DOCSIS 3.0 QAM downstream channels; two 96MHz OFDM-A upstream channels; 5G Wi-Fi for both 2.4GHz and 5GHz; and eight single-carrier DOCSIS 3.0 QAM upstream channels.
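Rough arithmetic shows why those channel counts approach a gigabit. This sketch assumes roughly 38 Mbps of usable payload per bonded 6 MHz 256-QAM DOCSIS 3.0 downstream channel (a commonly cited ballpark, not a figure from the article; exact rates depend on overhead):

```python
# Assumed payload per single-carrier 256-QAM DOCSIS 3.0 downstream channel
PAYLOAD_PER_SC_QAM_MBPS = 38  # assumption; ~42.9 Mbps raw minus overhead

def bonded_downstream_mbps(n_channels):
    """Approximate aggregate payload of n bonded DOCSIS 3.0 channels."""
    return n_channels * PAYLOAD_PER_SC_QAM_MBPS

# The BCM93390's 32 bonded DOCSIS 3.0 channels alone land near the
# gigabit mark, before its two 196 MHz OFDM DOCSIS 3.1 channels add more.
print(bonded_downstream_mbps(32))  # -> 1216 (Mbps)
```

The DOCSIS 3.1 OFDM channels then push well past 1 Gbps by packing higher-order modulation into wider blocks of spectrum.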

    Reply
  13. Tomi Engdahl says:

    Bill Would Ban Paid Prioritization By ISPs
    http://news.slashdot.org/story/15/01/07/1825228/bill-would-ban-paid-prioritization-by-isps

    In the opening days of the new U.S. Congress, a bill has been introduced in both the House and Senate enforcing Net neutrality, making it illegal for ISPs to accept payment to prioritize some traffic packets over others.

    Democrats’ bill would ban paid prioritization by ISPs
    http://www.itworld.com/article/2866555/democrats-bill-would-ban-paid-prioritization-by-isps.html

    Democrats in the U.S. Congress have wasted no time in resurrecting a debate over net neutrality rules, with lawmakers introducing a bill that would ban paid traffic priority agreements between broadband providers and Web content producers.

    The reintroduced Online Competition and Consumer Choice Act, which failed to pass after Democrats introduced it last year, is designed to prevent broadband providers from creating Internet fast lanes and slow lanes, based on the ability of Web content providers and services to pay for faster speeds, sponsors said.

    “The Internet must be a platform for free expression and innovation, and a place where the best ideas and services can reach consumers based on merit rather than based on a financial relationship with a broadband provider,” Leahy said in a statement. “The Online Competition and Consumer Choice Act would protect consumers and sets out important policy positions that the FCC should adopt.”

    Reply
  14. Tomi Engdahl says:

    4G base stations blamed in vain for TV interference in Finland

    TV picture problems have annoyed thousands of Finns and have at times overloaded the joint frequency-service telephone lines of the telecom operators and Digita. The suspected culprit has often been the fast 4G mobile base stations that are being built at a rapid pace in different parts of Finland.

    “It seems to me that whatever the problem in the TV antenna system may be, the 4G masts get the blame,”

    The 4G networks’ coverage area is home to some 2.5 million households.

    “Over the past year, only about 0.27 per cent of these have experienced interference caused by 4G network expansion work. Moreover, the interference affects only households with a radio antenna amplifier”, Niiranen says.

    0.27 per cent means 6,750 households.

    “The problem certainly comes up more often when the 4G network is being built in a sparsely populated area. Antenna amplifiers are common there, because the houses lie at the periphery of the TV station’s coverage area,”

    If the cause really were the 4G network, a filter costing about 60 euros should be mounted at the antenna. The problem arises when a strong 4G signal enters an antenna amplifier that is set to receive a weak TV signal. If the 4G network interferes with the TV broadcast, it shows up either as picture blocking or as a complete loss of the transmission. The fault may affect all channels or only some of them. The fast 4G network is being built in the 790-862 megahertz (MHz) frequency range, while television broadcasts on several frequencies up to a maximum of 790 MHz. The problems usually begin abruptly when a base station is brought into service.
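The frequency logic here can be sketched in a few lines. The band edges are the ones given in the article; the function name is illustrative:

```python
# LTE 800 band sits directly above the TV broadcast range (figures from
# the article): TV tops out at 790 MHz, LTE occupies 790-862 MHz.
LTE800_MHZ = (790.0, 862.0)
TV_MAX_MHZ = 790.0

def amplifier_at_risk(passband_upper_mhz):
    """True if an antenna amplifier's passband reaches into the LTE 800 band,
    so a nearby 4G base station can overload it."""
    return passband_upper_mhz > LTE800_MHZ[0]

print(amplifier_at_risk(862.0))  # old wideband amplifier      -> True
print(amplifier_at_risk(790.0))  # filtered at the TV band edge -> False
```

The roughly 60-euro filter mentioned above does exactly this: it cuts the amplifier's passband at the TV band edge so the strong adjacent LTE signal never reaches the amplifier input.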

    Source: http://www.hs.fi/kotimaa/a1420610207503

    Reply
  15. Tomi Engdahl says:

    EU net neutrality: Don’t worry, we’re now safely in the hands of … Latvia
    Talk of ‘compromise’ makes telcos happy. Euro Parliament, not so much
    http://www.theregister.co.uk/2015/01/08/latvia_to_push_eu_towards_net_neutrality_compromise/

    EU member states will push for a “compromise” on net neutrality over the next six months, after Latvia, which took over the six-month rolling presidency of the European Council of Ministers last week, published its list of priorities for the first half of this year.

    Notably, it says it will seek “an overall compromise” on the so-called telecommunications package. Such language has raised warning flags from net neutrality advocates.

    The law, pushed by ex-digi tsar Steelie Neelie Kroes, is a sprawling piece of legislation covering telecoms companies regulation, coordination of the use of radio spectrum, roaming charges, and yep, you guessed it, net neutrality.

    The draft of the law passed by the Parliament in April significantly strengthened net neutrality rules, but at its last meeting of national ministers in November, the Council appeared to be moving the opposite direction.

    Reply
  16. Tomi Engdahl says:

    UK
    Police radios will be KILLED soon – yet no one dares say ‘Huawei’
    Why 4G is no solution for emergency services
    http://www.theregister.co.uk/2015/01/08/airwave_tetra_switch_off_gov_services_onmishambles/

    In less than 18 months’ time the police radio network will be switched off. There is no obvious replacement and the looming omnishambles is turning into a bonanza for Arqiva, the only company brave enough to offer a solution.

    The British police and the other emergency services use a system called Airwave. This uses a technology called Tetra (Terrestrial Trunked Radio) which is halfway between a mobile phone system and a walkie talkie. It’s an ancient technology and very poor at mobile data, which runs at 7.2kbps. There is a standard to boost that to 700kbps but it has never been implemented. Instead the plan is to replace it with 4G.

    The new £1.2bn Emergency Services Network contract will replace the previous £2.9bn digital radio communications system supplied by one company, Airwave.

    Airwave revolutionised policing in many rural areas but more recently has been criticised for being too costly as it was set at a fixed price, with escalation, more than a decade ago.

    “It was never cheap,” said Neyroud, “but given what you were asking it to do it was always going to cost,” pointing out that it replaced a system of UHF and VHF that was incredibly patchy and unreliable.

    Push (and wait) to talk

    What’s often not appreciated by the mobile community is the issue of latency. Anyone who has used a walkie-talkie knows that the instant you press the button the person at the other end can hear you.

    Mobile phone push-to-talk systems are rarely like this. You press the button, it switches to the right app, fires up, makes an IP connection and then starts the communication. This is not instant. Indeed, using such a system where you can see and hear the other person is unnerving, with a significant delay that is more than an echo. Even a traditional 2G or 3G voice call has a little latency, which you can hear if both people are in the same room.

    Push-to-talk latency isn’t a problem in the “it might replace SMS” scenario the mobile industry once envisaged for it, but it is in an emergency.

    With 4G you might hope that this is fixed. VoLTE has fantastically low latency. We made a call at Vodafone’s labs recently and it’s what surprised us most. But there are a host of other problems. Push-to-talk isn’t in the specification yet, it needs LTE Release 13 and that’s a little way off. Phase 2 of the specification for mission critical push-to-talk over LTE has a completion date of June 2015.

    That is just the beginning of the problem. If it uses 4G it needs 100 per cent 4G coverage, and we don’t even have 100 per cent coverage if you combine everything on 2G, 3G and 4G.

    There are systems in place to give emergency services priority but network congestion is still going to affect the ability of the backhaul infrastructure to cope.

    The Home Office issues licenses for the emergency services to set a bit on the SIM to enable MTPAS (Mobile Telecommunication Privileged Access Scheme), previously called ACCOLC (Access Overload Control), and still informally called that. There is a limited pool of MTPAS SIMs and the police force which wants one has to get its mobile operator to fill in the paperwork for the Home Office to request it. The IMSI of the enabled SIM is registered with the network.

    MTPAS allows different levels of priority; it’s an eight-bit flag so in theory there could be 256 levels of importance, but in practice only one bit is used.
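    The priority flag described above can be pictured in a few lines of code. This is only a hedged illustration of “an eight-bit flag of which one bit is used”; the bit position is a hypothetical choice, not the actual SIM encoding:

```python
MTPAS_BIT = 0x01  # hypothetical position of the single bit used in practice

def has_priority(access_class_flags: int) -> bool:
    """Check whether the (illustrative) MTPAS bit is set in an 8-bit field."""
    return bool(access_class_flags & MTPAS_BIT)

print(has_priority(0b00000001))  # True  - privileged SIM
print(has_priority(0b00000000))  # False - ordinary SIM
```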

    The mobile industry believes that with push-to-talk and MTPAS it can deliver a service fit for the emergency services.

    One of the important things Tetra does – and cellular does not – is device-to-device and mesh communications. For the most part Tetra is like a phone system in that it uses base stations and towers to give spectrum re-use, but in many emergency situations the towers won’t survive the tsunami, earthquake, bomb or whatever. So Tetra devices can talk directly to each other. It is also possible to use a repeater: a car or truck can be brought to the scene which will relay signals. Tetra uses 2W at around 400MHz so range is rarely an issue, but not having a direct mode is, and LTE cannot do it.

    There is an LTE specification called ProSe which specifies repeaters and gateways, but a hugely important aspect of emergency services work is the use of groups. The LTE-Broadcast specifications could be hijacked to do groups, but LTE-Broadcast won’t work through a ProSe gateway.

    Besides, LTE-Broadcast is designed for covering huge groups. It’s been demonstrated providing video feeds for whole crowds at football matches.

    It might be easier if the emergency services all stayed on the frequency allocated to them, but that’s not an option with LTE. Tetra occupies 25kHz per channel; DMR uses 12.5kHz; while pretty much the minimum you can use for LTE is 5MHz. You just wouldn’t get enough channels out of the available spectrum.
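    The channel arithmetic behind that claim is easy to check. The per-channel widths come from the article; the 10 MHz allocation below is a hypothetical example, not an actual emergency-services assignment:

```python
def channels_in_allocation(allocation_hz, channel_hz):
    # Integer number of channels that fit in a contiguous allocation
    return int(allocation_hz // channel_hz)

ALLOCATION = 10e6  # hypothetical 10 MHz allocation
print(channels_in_allocation(ALLOCATION, 25e3))    # Tetra: 400 channels
print(channels_in_allocation(ALLOCATION, 12.5e3))  # DMR:   800 channels
print(channels_in_allocation(ALLOCATION, 5e6))     # LTE:   2 carriers
```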

    Neyroud believes the answer lies in the 700MHz “digital dividend” spectrum which is becoming available.

    The device needs to be custom made to do both Tetra and LTE and be robust. “Give a cop an iPhone and he’ll break it in 10 minutes,” Neyroud told us.

    There is one company working on just this. A company with great manufacturing, radio infrastructure and device expertise and which could deliver just what the emergency services want.

    Unfortunately that company is Huawei, which, despite constant protestations that it’s not a backdoor to the Chinese government, is still regarded with suspicion by anyone making strategic telco decisions.

    The Tetra system will be retained under an extension clause and while there are officially a number of bidders to take this on, only the infrastructure company Arqiva is really in the running.

    That extension to Tetra will have to be quite a long one.

    Reply
  17. Tomi Engdahl says:

    What’s Next in Wireless: My 2015 Predictions
    http://newsroom.t-mobile.com/issues-insights-blog/2015-predictions.htm

    1. The wireless revolution has not only been sparked, it’s become a movement – and it’s not slowing down, it’s speeding up….to warp speed.

    2. We’ll go toe-to-toe with Verizon’s network almost everywhere … and win.

    3. The competition will continue to bumble along.

    4. Wearables and phablets will be the big device stories of 2015 (and maybe some connected cars!)

    5. I’ll be in conversation with nearly 2M Twitter followers (at the very least 1.5M…) by the end of the year.

    6. DC is going to be very busy on regulatory issues for the Wireless Industry. Like it or not!

    7. MetroPCS will pull even further ahead of the prepaid pack.

    8. We’ll bring Un-carrier to whole new groups of people.

    Reply
  18. Tomi Engdahl says:

    Ericsson Fires LTE over WiFi Salvo
    http://www.eetimes.com/author.asp?section_id=36&doc_id=1325216&

    Ericsson’s next-generation Radio Dot picocell base stations will be able to flexibly send data over licensed LTE or unlicensed 5GHz WiFi bands.

    Swedish telecoms equipment vendor Ericsson is using this week’s International Consumer Electronics Show (CES) in Las Vegas to showcase its latest product and technology that it says will offer data-hungry smartphone users concurrent access to both licensed and unlicensed spectrum.

    Set to be part of its small cells line-up, the company is readying a device in its 6402 series of Radio Dot picocells to which it has added a technology dubbed Licensed Assisted Access (LAA). This is a sub-set of LTE-Advanced technology aimed at allowing carriers to aggregate and “fairly share” the public 5GHz band with unlicensed WiFi users. The proposition is that 5GHz services will handle the mobile data heavy lifting, particularly in indoor locations where it would be deployed on the picocells alongside 3G and LTE. For now, the signalling will still be the job of the conventional cellular network, which, when required, could flip the LTE payload over to the public band, thus taking advantage of the higher capacity available there.

    According to its calculations, using just 4% of the 5GHz band, LAA can provide up to a 150Mbit/s data rate increase to smartphone users.
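    Those two figures imply a spectral-efficiency assumption that can be back-calculated. This is only my arithmetic on the article's numbers, not Ericsson's own calculation:

```python
UNII_BAND_MHZ = 550.0   # under-utilized 5GHz spectrum cited in the article
SHARE = 0.04            # "just 4% of the 5GHz band"
BOOST_MBPS = 150.0      # claimed data-rate increase

used_mhz = UNII_BAND_MHZ * SHARE    # 22.0 MHz of the band actually used
efficiency = BOOST_MBPS / used_mhz  # implied ~6.8 bit/s/Hz
print(f"{used_mhz:.1f} MHz -> {efficiency:.1f} bit/s/Hz")
```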

    This is a clear sign that Ericsson sees such a combination of licensed and unlicensed frequencies as yet another potential path to the much vaunted 5G networks of beyond 2020, in whose development and standardization the Swedish group is playing a major role. The company plans to have its first small cell product ready by the fourth quarter of this year, with indications that it will retail at about $2,000, and, in a coordinated announcement at the CES, said it has teamed with T-Mobile in the US to trial the technology over the coming months.

    “Currently, there is approximately 550MHz of under-utilized spectrum in the 5GHz Unlicensed National Information Infrastructure (UNII) band, which is available for any use within the FCC’s rules for the UNII band,” wrote Neville Ray.

    Reply
  19. Tomi Engdahl says:

    The Cluetrain Manifesto:
    Two “Cluetrain Manifesto” authors present 121 “New Clues” to take on threats to the Internet of 2015

    Hear, O Internet.
    It has been sixteen years since our previous communication.
    http://cluetrain.com/newclues/

    We come to you from the years of the Web’s beginning. We have grown old together on the Internet. Time is short.

    We, the People of the Internet, need to remember the glory of its revelation so that we reclaim it now in the name of what it truly is.

    Reply
  20. Tomi Engdahl says:

    Bypassing Broken SIP ALG Implementations
    http://hackaday.com/2015/01/11/bypassing-broken-sip-alg-implementations/

    The SIP protocol is commonly used for IP telephone communications. Unfortunately it’s notorious for having issues with NAT traversal. Even some major vendors can’t seem to get it right. [Stephen] had this problem with his Cisco WRVS4400N router. After a bit of troubleshooting, he was able to come up with a workaround that others may find useful.

    The router had built in SIP ALG functionality, but it just didn’t work. [Stephen] was trying to route SIP traffic from a phone to an Asterisk PBX system behind the router.

    Rather than re-configure all of the phones in the organization, [Stephen] made one change on the Asterisk system. He setup an iptables rule to forward all incoming traffic on UDP port 5060 to the new SIP port. Now all of the phones are working with minimal changes across the organization. It’s a lot of hassle to go through just because the router couldn’t handle SIP correctly, but it gets the job done.
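    The article doesn't give [Stephen]'s exact rule, so the port numbers and the REDIRECT target below are a hedged reconstruction of "forward all incoming traffic on UDP port 5060 to the new SIP port"; port 5061 is a hypothetical choice for the relocated SIP port:

```python
def sip_redirect_rule(external_port=5060, internal_port=5061):
    """Build an iptables NAT rule that redirects incoming SIP traffic (UDP)
    from the standard port to the port Asterisk was moved to.
    The port values here are illustrative, not the article's actual ones."""
    return (f"iptables -t nat -A PREROUTING -p udp "
            f"--dport {external_port} -j REDIRECT --to-ports {internal_port}")

print(sip_redirect_rule())
```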

    Reply
  21. Tomi Engdahl says:

    Quentin Hardy / New York Times:
    Following Microsoft’s new Skype Translator, Google to announce major update to Translate app, add automatic transcription for popular languages — Language Translation Tech Starts to Deliver on Its Promise — The tech industry is doing its best to topple the Tower of Babel.

    Language Translation Tech Starts to Deliver on Its Promise
    http://bits.blogs.nytimes.com/2015/01/11/language-translation-tech-starting-to-deliver-on-its-promise/?_r=0

    The tech industry is doing its best to topple the Tower of Babel.

    Last month, Skype, Microsoft’s video calling service, initiated simultaneous translation between English and Spanish speakers. Not to be outdone, Google will soon announce updates to its translation app for phones. Google Translate now offers written translation of 90 languages and the ability to hear spoken translations of a few popular languages. In the update, the app will automatically recognize if someone is speaking a popular language and automatically turn it into written text.

    Certainly, the technology of turning one tongue to another can still be downright terrible – or “downright herbal,” as I purportedly said on a test of Skype. The service also required a headset and worked best if a speaker paused to hear what the other person had said. The experience was a little as if two telemarketers were using walkie-talkies.

    But those complaints are churlish compared with what also seemed like a fundamental miracle: Within minutes, I was used to the process and talking freely with a Colombian man about his wife, children and life in Medellín (or “Made A,” as Skype first heard it, but it later got it correctly). The single biggest thing that separates us — our language — had started to disappear.

    Reply
  22. Tomi Engdahl says:

    CES 2015: Ultra-High-Definition 4K TV over Copper
    http://www.eetimes.com/document.asp?doc_id=1325246&

    Sckipio G.fast enables telco delivery of 4K TV

    Want to watch UHDTV? Try your telco’s copper. The first ultra-high-definition (UHD) content delivered to a 4K TV over the existing twisted-pair copper infrastructure of telecommunication companies (telcos) was demonstrated at the International Consumer Electronics Show (CES 2015, Jan. 6-9, Las Vegas). Sckipio Technologies (Ramat Gan, Israel), which makes G.fast (pronounced “gee dot fast”) chipsets, claims DSL cannot deliver UHD 4K TV, but that its G.fast ultra-broadband networks can over standard telco twisted-pair copper lines.

    Baum claims that up to 16 concurrent subscribers can be supported with 4K TV per telco distribution point.

    G.fast was approved by the International Telecommunication Union (ITU) just last month and delivers up to 1 Gbit per second (Gbps) over standard copper telephone lines. The typical infrastructure uses fiber lines to deliver high-definition signals (200Mbps per four subscribers) to a distribution point close to 16 subscriber homes. From there G.fast delivers the UHD 4K TV signal over standard copper lines for the last 400 meters (1,312 feet).
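    A quick sanity check on those numbers (my arithmetic on the article's fiber feed and subscriber counts; the 25 Mbps compressed 4K stream rate is an assumption):

```python
FIBER_MBPS_PER_4_SUBS = 200   # fiber feed per four subscribers (article)
SUBS_PER_DP = 16              # subscribers per distribution point (article)
STREAM_4K_MBPS = 25           # assumed rate for one compressed 4K stream

aggregate = FIBER_MBPS_PER_4_SUBS * (SUBS_PER_DP // 4)  # 800 Mbps into the DP
per_sub = aggregate / SUBS_PER_DP                       # 50 Mbps each
print(aggregate, per_sub, per_sub >= STREAM_4K_MBPS)    # 800 50.0 True
```

    On these assumptions each of the 16 homes gets 50 Mbps, comfortably above one 4K stream, which is consistent with the 16-concurrent-subscriber claim.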

    The set-up also included the first OpenFlow demonstration over a commercial G.fast distribution point unit. The open framework should speed telco adoption over proprietary alternatives, according to Baum, who noted that OpenFlow is a more modern way to control data flow by serving as the enabling component in a software defined network. SDNs allow quick configuration by telcos for independent point-to-point connections.

    Reply
  23. Tomi Engdahl says:

    A New Direction for AVB: Time Sensitive Networking (TSN) for Industrial Control
    http://controlgeek.net/blog/2015/1/9/a-new-direction-for-avb

    For many years, I’ve been following and have written a lot about IEEE Audio Video Bridging (AVB), an open standard way of transmitting audio and video over Ethernet using special network switches. It’s a fascinating standard, but in the live show audio market, it seems to me that AVB has been eclipsed by Audinate’s proprietary Dante technology.

    As a result of writing these pieces, last Fall I met Greg Schlecter, Technology Marketing Strategist of Intel, who told me about new developments in a fascinating new direction for AVB, for another industry that needs precise, timely delivery of data: industrial control. This work has been under way for some time; as part of the effort, in 2012, the IEEE Audio Video Bridging standards task group was renamed to the “Time Sensitive Networking” (TSN) task group to reflect the new, larger focus of the group.

    AVB was originally designed to transport audio over standard Ethernet networks; to do so, it has to be able to work to a very high degree of time precision. Pro audio, for example, often uses a sample rate of 48,000 samples per second, and the precise synchronization of those samples for playback is critical for digital audio to work properly. In addition, on live shows we often have to send signals far and wide around a facility, but typically not over the internet (unless it’s streamed, which is typically a different specialty). Modern Ethernet is well suited for both of those needs.
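    The timing requirement falls straight out of the sample rate (my arithmetic, just to make the precision concrete):

```python
SAMPLE_RATE = 48_000  # pro-audio samples per second

# One sample period: the granularity at which playback must stay aligned
period_us = 1e6 / SAMPLE_RATE   # ~20.8 microseconds per sample
print(f"{period_us:.1f} us")    # 20.8 us
```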

    Industrial control has similar needs. For example, a factory may have large machines spread out over a facility, which also need tight synchronization.

    Big companies like Intel and GE are involved in the TSN effort, so it’s possible that this will gain traction in the controls market soon, and it’s possible that AVB will still end up backstage as a backbone for scenic automation systems.

    Reply
  24. Tomi Engdahl says:

    Implement a VXLAN-based network into an SoC
    http://www.edn.com/design/wireless-networking/4438277/Implement-a-VXLAN-based-network-into-an-SoC-?_mc=NL_EDN_EDT_EDN_today_20150112&cid=NL_EDN_EDT_EDN_today_20150112&elq=88120662e220454aaed2022b3c2a87b3&elqCampaignId=21114

    With the growth in cloud computing applications, the industry is transitioning from traditional campus enterprise data centers to hyperscale cloud and mega data centers. According to IDC Research, server shipments to cloud service providers are expected to represent 43% of all servers by 2017[1].

    Furthermore, the number of internet connected devices generating data traffic is expected to exceed 20 billion in 2018[2] driven by the increase in IoT applications. These industry trends put significant demands on all areas of large cloud data centers including compute servers, storage and network. To achieve higher performance and reduce the cost and time of deploying these new applications, cloud and mega data center operators are redesigning their data center networking architectures to utilize virtual environments. By using Virtual Extensible Local Network (VXLAN)-based overlays operating over 10G Ethernet IP, data center operators can simplify the management of the networks and eliminate network bottlenecks in hyperscale cloud computing data centers.

    Virtualized networks have evolved from classic tiered north-south access/aggregation architectures to flat leaf-spine topologies, driven by the increasing amount of east-west server-to-server communication. However, data traffic in flat leaf-spine network topologies is encountering bottlenecks due to limits in the network infrastructure. Since server-to-server data traffic can consume up to 80% of data communication within cloud data centers, the network must scale to meet the increasing demands. For example, the traditional Layer 2 (L2) virtual local area network (VLAN) tagging defined in the IEEE 802.1Q standard has a limit of 4094 IDs, which can easily be exceeded.

    Network overlay technologies such as VXLAN and network virtualization using generic routing encapsulation (NVGRE) solve the scalability limits of VLANs by “stretching” the L2 network. VXLAN is an L2 overlay scheme defined by the Internet Engineering Task Force (IETF) to provide a framework for overlaying virtualized L2 networks over Layer 3 (L3) networks[3].

    As the popularity of the VXLAN protocol expands, designers are implementing VXLAN-based Ethernet ports into SoCs running over 10G or higher capacity network technologies. A wide variety of SoCs with VXLAN-enabled Ethernet ports are appearing in software defined networking (SDN) switch ICs, SDN-enabled communications processors, intelligent NICs (iNIC) and micro server host processors using 64-bit ARM® v8 CPUs.

    Designers building next-generation SoCs for cloud computing and mega data centers typically optimize the SoC for low latency, low power, high performance and reliability-availability-serviceability (RAS), as well as integrate advanced protocols in leading process technologies such as 16/14 nanometer (nm) FinFET.

    The use of network overlays running over 10G Ethernet is eliminating network bottlenecks in hyperscale cloud computing data centers. The data center network has evolved from north-south tiered architecture to flat leaf-spine topology to support the range of micro servers and scale-out servers running distributed workloads. The emergence of virtual overlay protocols such as VXLAN and NVGRE has enabled the network to scale beyond 4094 links and is driving semiconductor providers to add VXLAN protocol support into their 10G Ethernet ports.

    Reply
  25. Tomi Engdahl says:

    Report: Broadband service providers battle customer expectations, regulatory uncertainty
    http://www.cablinginstall.com/articles/2015/01/acg-broadband-battle.html?cmpid=EnlCIMJanuary122015

    In its report on the video infrastructure market’s third quarter of 2014, ACG Research contends that hesitancy caused by regulatory uncertainty cancelled out the increase in gigabit broadband access network deployments, leaving the market flat.

    Overall, broadband access network initiatives, including fiber to the home (FTTH), continue to benefit service providers, states the analyst. At the U.S. Tier 1 level, AT&T U-verse has created a $15 billion annualized revenue stream that is growing at nearly 24%, Verizon’s FiOS revenues increased 13% year-over-year in 3Q14, and Comcast can boast 21.6 million high-speed Internet customers, increasing by 315,000, the market research firm notes.

    However, service providers find themselves pulled in different directions. “Service providers have a finite amount of capex,” says Greg Whelan, principal analyst and consultant at ACG Research. “Demand for higher speeds and ubiquitous coverage conflict with regulatory uncertainty.”

    Service providers have begun to focus on improving customer experience. Wi-Fi capabilities are important elements here, with providers adding seamless roaming and voice over Wi-Fi.

    Reply
  26. Tomi Engdahl says:

    Google hopes Kevlar cabling can avert sharks’ fiber-optic Internet feast
    http://www.cablinginstall.com/articles/2015/01/google-kevlar-cable-blog.html

    As recently reported by Australia’s The Daily Dot, “If you live in Southeast Asia and can’t stream YouTube videos or access Facebook, sharks may be to blame… Whatever the explanation, there are undoubtedly issues with sharks attacking undersea Internet cables.”

    To prevent sharks from chomping through fragile and expensive fiber-optic wires, Google, which has pledged to collaborate on a similar $300 million undersea cable to Japan, has started wrapping its cables in Kevlar.

    Reply
  27. Tomi Engdahl says:

    IEEE to adopt HDBase-T, standardizing UHD transmission over Category 6 cabling
    http://www.cablinginstall.com/articles/2015/01/ieee-1911-hdbaset-category-6-cabling.html

    The Institute of Electrical and Electronics Engineers (IEEE), along with the HDBase-T Alliance, jointly announced that the IEEE Standards Association Standards Board approved the HDBase-T Specifications 1.1.0 and 2.0 as part of the IEEE’s standards portfolio. The HDBase-T standard will become the IEEE 1911 standard once the adoption process is complete.

    “HDBase-T is a successful technology for long-distance ultra-high-definition distribution of digital media today, with hundreds of HDBase-T products currently commercialized,”

    “HDBase-T enables all-in-one transmission of ultra-high-definition video through a single 100-m/328-ft Category 6 cable,” the announcement explained, “delivering uncompressed 4K video, audio, USB, Ethernet, control signals, and up to 100 watts of power. HDBase-T simplifies cabling, enhances ease-of-use, and accelerates deployment of ultra-high-definition connectivity solutions. The cost-effective LAN infrastructure and power transmission support also help reduce and simplify installation and electrical costs.”
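    To put "uncompressed 4K video" over a single cable in perspective, here is a back-of-envelope raw-bitrate calculation; the frame rate and bit depth are my assumptions, not figures from the announcement:

```python
def raw_video_bps(width, height, bits_per_pixel, fps):
    # Uncompressed video bandwidth, before blanking/encoding overhead
    return width * height * bits_per_pixel * fps

# Assumed 4K UHD parameters: 24-bit color, 30 frames/s
gbps = raw_video_bps(3840, 2160, 24, 30) / 1e9
print(f"{gbps:.1f} Gbit/s")  # ~6.0 Gbit/s of raw pixel data
```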

    HDBaseT Alliance now offering 2 programs approved for InfoComm certification
    http://www.cablinginstall.com/articles/2015/01/hdbaset-infocomm-rus.html

    The HDBaseT Alliance, the cross-industry group tasked with promoting and advancing the HDBaseT standard, announced that it has been named an official InfoComm International Renewal Unit (RU) Provider. According to a press release, this will allow the more than 9,000 professionals holding InfoComm International’s Certified Technology Specialist credential to earn renewal units towards their certification by completing the HDBaseT Alliance’s Installer Expert education programs.

    Reply
  28. Tomi Engdahl says:

    Should spectrum hog TV give up its seat for broadband? You tell us – EU
    It’s granny vs tween in battle of the spectrum
    http://www.theregister.co.uk/2015/01/13/tv_or_broadband_broadband_or_tv_theres_only_one_way_to_find_out_a_public_consultation/

    The European Commission wants help in deciding whether or not to allow terrestrial TV to hang on to valuable spectrum.

    The Commish launched a public consultation on Monday asking industry, academia and users of TV or wireless broadband to speak their brains on how to proceed in allocating the 700 megahertz band.

    Currently, the 700 MHz band (694-790 MHz) is primarily used for free-to-air telly via rooftop antennas, but the frequencies could also provide wireless broadband at higher speeds with better geographical coverage – hence the Commission’s dilemma.

    Lamy proposed two options. First, a phasing-out of terrestrial TV from the 700 MHz band which would see the bandwidth completely dedicated to wireless broadband across Europe by 2020 (give or take two years). To make this more palatable to broadcasters, he advised “regulatory security and stability for terrestrial broadcasters in the remaining UHF spectrum below 700 MHz until 2030”

    The second option would be to allow downlink-only wireless broadband in the 700 MHz band, giving broadcasters priority – an idea that does not sit well with ISPs.

    Reply
  29. Tomi Engdahl says:

    IEEE to study new BASE-T data rates for data centers and enterprise applications
    http://www.cablinginstall.com/articles/2014/12/ieee-25-5-gbaset.html

    As the Ethernet ecosystem begins to accept the idea of speeds outside the current factor of 10, a new generation of ever-more sophisticated enterprise applications is emerging.

    The IEEE 802.3 Ethernet Working Group recently fielded calls for interest (CFIs) for new data rates for the BASE-T family of Ethernet PHYs, forming two new IEEE 802.3 study groups as a result. One, driven by the mounting bandwidth needs of wireless access points, will address a Next Generation Enterprise Access BASE-T PHY; the second, for 25-Gbit/sec data transmission over balanced twisted-pair cabling (25GBASE-T), targets enterprise data centers.

    Reply
  30. Tomi Engdahl says:

    SDN/NFV-enabled Carrier Ethernet chips will lower CPE port counts for CE 2.0
    http://www.cablinginstall.com/articles/2014/12/sdn-nfv-low-port-count.html

    Communications semiconductor supplier Vitesse (NASDAQ: VTSS) has unveiled an addition to its Serval Carrier Ethernet IC line designed to enable application of software-defined networking (SDN) and network functions virtualization (NFV).

    When combined with Vitesse’s CEServices software, Serval-2 Lite supports SDN-ready touchless provisioning and remote control of MEF CE 2.0 services, the company adds. The device also uses the company’s VeriTime IEEE 1588 timing technology to support 4G TD-LTE and LTE-Advanced wireless backhaul as well as Hierarchical QoS (H-QoS) features for support of MEF CE 2.0-compliant delivery of SLA-based Carrier Ethernet services.

    The company says it will make Serval-2 Lite samples available in January 2015.

    Reply
  31. Tomi Engdahl says:

    Analyst: Increased data center deployments feed active optical cable market
    http://www.cablinginstall.com/articles/2014/12/lightcounting-datacenter-aoc-market.html

    While the market for mainstream active optical cables (AOCs) such as 4×10-Gbps QSFP+ AOCs has historically been restricted mainly to high-performance computing (HPC) applications, the use of AOCs in data center networks has begun to increase, says LightCounting. The market research firm expects 10GbE SFP+ AOCs will represent 25% of the projected $98 million AOC market in 2014, thanks to growth in data center deployments. In fact, additional data center interest in all higher Ethernet speeds “from 25G, 40G, 100G, and even 400G” combined with inter-chassis AOC connections in core routers will boost the AOC market to $266 million by 2020, LightCounting predicts in its new report on the AOC market.

    HPC clusters have been well suited to the use of AOCs, and InfiniBand-based clusters are mostly built with AOCs today. HPC suppliers expect a return to strong growth rates in 2015.

    However, data center Ethernet connections via AOCs have been the “next big thing” for quite some time, and LightCounting says “next” is finally “now,” thanks to increased use of 40GbE as well as some cloud and Big Data applications adopting InfiniBand.

    Meanwhile, multiple hyperscale data center operators are making plans for 25GbE at the server and 100GbE in their switching fabrics. While early 25G server connections will be mostly copper, AOCs will offer advantages beyond the reach of the next rack, LightCounting predicts.

    Reply
  32. Tomi Engdahl says:

    Google Domains opens to all in the U.S., gets Blogger and Dynamic DNS integration
    http://venturebeat.com/2015/01/13/google-domains-opens-to-all-in-the-u-s-gets-blogger-and-dynamic-dns-integration/

    Google today announced that Google Domains, the company’s domain registration service, is now available to all in the U.S.

    Google Domains first went into testing in June 2014, with the goal of helping businesses not just get online, but to build a proper online presence. To pull this off, Google partnered with website building providers Shopify, Squarespace, Weebly, and Wix.

    Google didn’t share when it hopes to have Google Domains available everywhere. A spokesperson, however, confirmed with VentureBeat that, for now at least, it’s technically still in beta.

    Reply
  33. Tomi Engdahl says:

    Huawei Revenue Increases 20% on Sales of Higher-End Smartphones
    http://www.bloomberg.com/news/2015-01-13/huawei-revenue-increases-20-on-sales-of-higher-end-smartphones.html

    Huawei Technologies Co.’s revenue gained about 20 percent last year, aided by rising sales of higher-end smartphones.

    Huawei, China’s largest maker of phone-network equipment, is widening its portfolio of mobile devices, business-computing products and cloud services. The Shenzhen-based company is working toward a goal announced in April of achieving $70 billion in sales by 2018.

    Huawei has grown into the world’s second-largest maker of equipment for phone networks, behind Ericsson AB (ERICB), without access to the U.S. telecommunications market, where it has battled claims that its gear could allow Chinese intelligence services access for spying. The company has denied the allegations.

    Reply
  34. Tomi Engdahl says:

    Obama Unveils Plan To Bring About Faster Internet In the US
    http://tech.slashdot.org/story/15/01/14/0355249/obama-unveils-plan-to-bring-about-faster-internet-in-the-us

    President Obama is rolling out a new plan to boost the speed of internet connections throughout the U.S. For one, he’ll be asking the FCC for assistance in neutralizing state laws that prevent cities from building municipal broadband services.

    Obama wants to help make your Internet faster and cheaper. This is his plan.
    http://www.washingtonpost.com/blogs/the-switch/wp/2015/01/13/obama-to-help-cities-build-their-own-public-internet-service-to-rival-large-isps/

    Frustrated over the number of Internet providers that are available to you? If so, you’re like many who are limited to just a handful of broadband companies. But now President Obama wants to change that, arguing that choice and competition are lacking in the U.S. broadband market. On Wednesday, Obama will unveil a series of measures aimed at making high-speed Web connections cheaper and more widely available to millions of Americans. The announcement will focus chiefly on efforts by cities to build their own alternatives to major Internet providers such as Comcast, Verizon or AT&T — a public option for Internet access, you could say.

    “When more companies compete for your broadband business, it means lower prices,” Jeff Zients, director of Obama’s National Economic Council, told reporters Tuesday. “Broadband is no longer a luxury. It’s a necessity.”

    The announcement highlights a growing chorus of small and mid-sized cities that say they’ve been left behind by some of the country’s biggest Internet providers. In many of these places, incumbent companies have delayed network upgrades or offer what customers say is unsatisfactory service because it isn’t cost-effective to build new infrastructure.

    “It’s hard to remember a time when we didn’t all use the Internet every day. If we can take our heads back to the mid-to-late ’90s, that was the time when people were starting to say, ‘We want a high-speed Internet connection,’”

    “If the people, acting through their elected local governments, want to pursue competitive community broadband, they shouldn’t be stopped by state laws promoted by cable and telephone companies that don’t want that competition,”

    It will also set the stage for a major legal battle over the FCC’s authority over state laws.

    If Section 706 sounds familiar, that’s because it’s also the legal tool some say should be used to promote net neutrality, or the principle that broadband companies shouldn’t speed up or slow down some Web sites over others.

    COMMUNITY-BASED BROADBAND SOLUTIONS
    THE BENEFITS OF COMPETITION AND CHOICE FOR COMMUNITY DEVELOPMENT AND HIGHSPEED INTERNET ACCESS
    http://www.whitehouse.gov/sites/default/files/docs/community-based_broadband_report_by_executive_office_of_the_president.pdf

    This small loophole could give the FCC much greater control of the Internet
    http://www.washingtonpost.com/blogs/the-switch/wp/2014/01/28/this-small-loophole-could-give-the-fcc-much-greater-control-of-the-internet/

  35. Tomi Engdahl says:

    Four Bottlenecks in Virtualized Networks
    http://www.eetimes.com/author.asp?section_id=36&doc_id=1325290&

    These four items can slow down the creation of virtual network functions. Thus, testing is required.

    In November 2012, the ETSI ISG for NFV (Network Function Virtualization) was created by a handful of the world’s leading service providers. There are now over 220 participating companies. When surveyed, nearly all carriers worldwide expect to begin virtualizing network functions (VNFs) within the next few years. Telefónica has declared that it expects to virtualize 30% of its network functions by 2016. While straightforward in theory, the move from network appliances to running network functions in VMs (virtual machines) on x86-based servers is far from trivial.

    One of the requirements of running a function virtually is having “predictable performance.” This is kind of like going car shopping and requiring any vehicle tested to go 0-to-60 in less than 5 s, get at least 35 miles to the gallon, have sports car handling, and stop on a dime — while not restricting how many people are in the car.

    Over the past two years, four key areas have arisen that create bottlenecks in performance for a virtualized function. By identifying them, tuning them, and tweaking them it’s possible to get predictable performance in highly controlled environments.

    1. Servers and NIC(s)
    Not all COTS (commercial off-the-shelf) servers are created equal. The most fundamental aspects of a server are its CPU (brand, generation, number of cores) and system memory. An Intel E5 v2 has more than four times the packet processing performance of the older Xeon 5600. The NICs (network interface cards) are also critical in terms of performance.

    2. Hypervisor and the vSwitch
    The hypervisor is used to virtualize the resources of the underlying server. If predictable performance is required, the resources must be strictly allocated but not highly oversubscribed. Some hypervisors do a better job than others in that they don’t expose shared memory or cache, which could affect other VMs on the system.
    Also, some overlay technologies require the virtual switch to perform encapsulation for tunneling protocols like GRE or VXLAN (virtual extensible LAN), which can create significant overhead.

    3. Host OS and guest OS
    The guest operating system used by the VM/VNF may or may not be compatible or optimized with the host OS/virtualization layer resulting in a communication bottleneck. It is important to know how this communication channel works and if it needs to be optimized for various workloads supported by the VNF.

    4. VM/VNF
    The Virtual Network Function itself could be the bottleneck if it is not written properly to handle multiple cores and the memory allocated to it. In some cases the VNF provider will have specific limitations on what can be allocated to it.

    In conclusion, don’t expect to drop a VNF on a COTS server and get wire rate performance. If the system is highly optimized and the resources are explicitly allocated, it is possible to get high performance and predictability.
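    As a rough illustration of the encapsulation cost mentioned in point 2, the sketch below (plain Python, using the standard IPv4 header sizes; not tied to any particular vSwitch) computes how much wire bandwidth is left for the original frame once VXLAN or GRE headers are added — small-packet workloads suffer the most:

```python
# Header sizes in bytes for common overlay encapsulations (IPv4 outer header).
OVERHEADS = {
    "vxlan": 14 + 20 + 8 + 8,  # outer Ethernet + outer IPv4 + UDP + VXLAN
    "gre":   20 + 4,           # outer IPv4 + basic GRE header
}

def goodput_ratio(frame_bytes: int, encap: str) -> float:
    """Fraction of wire bandwidth left for the original frame."""
    extra = OVERHEADS[encap]
    return frame_bytes / (frame_bytes + extra)

if __name__ == "__main__":
    for encap in ("vxlan", "gre"):
        for frame in (64, 1514):
            print(f"{encap}: {frame}-byte frame -> "
                  f"{goodput_ratio(frame, encap):.1%} goodput")
```

    For minimum-size 64-byte frames, VXLAN's 50 bytes of headers leave only about 56% of the wire rate as goodput, which is one reason encapsulation in the vSwitch shows up as a measurable bottleneck.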

  36. Tomi Engdahl says:

    5G needs new connectivity methods, say Spanish boffins
    ‘No signal’ not wanted in mm-wave world
    http://www.theregister.co.uk/2015/01/14/5g_needs_new_connectivity_methods_say_spanish_boffins/

    While millimeter-wave radio frequencies are hyped as the future of high-speed wireless networks, they’re severely range-limited. A group of Spanish boffins has proposed using user context information like location to help mobile devices get the best speed.

    As they explain in their paper on arXiv, it’s not enough just to run an efficient search for base stations: whatever algorithm is implemented, users also need fast switching between base stations.

    “even with the massive use of sophisticated directive transmissions the availability of mm-wave access in future mobile networks cannot be continuous”.

    Discontinuities will arise because of the limited coverage areas of mm-wave signals – even the human body is enough to obstruct propagation. Not only does this interrupt voice or data traffic, it’s also a problem for signalling.

    As the paper notes, the kinds of base station search used in mobile telephony – all the way up to LTE – simply won’t work fast enough to cope in a mm-wave world. In those, a device begins joining a network with a random search, and maintains its connection with an omnidirectional synch.

    Neither of these works well under mm-wave constraints.

  37. Tomi Engdahl says:

    How d’you solve a problem like IANA? Internet captains wrestle over US power handover
    Ultimately it all comes down to trust – or the lack of it
    http://www.theregister.co.uk/2015/01/14/internet_community_continues_to_wrestle_with_iana_handover_plan/

    The plan to transition the critical Internet Assigned Numbers Authority (IANA) contract away from the US government is going to miss its November deadline amid fighting over a key detail.

    The contract has been split into three separate functions. The Internet Engineering Task Force (IETF) has already finalized its proposal for how internet protocols should be managed, and the five Regional Internet Registries (RIRs) are almost done with their joint proposal on IP address management. But the third function – the complex issue of domain names – has left the internet community deeply divided on the issue of who controls the contract once it is moved away from the NTIA.

    A series of meetings at the weekend that were supposed to finalize the proposal were instead focused on identifying where there is agreement and highlighting the areas where there is not.

    In particular, one camp wants the domain naming contract to be given to its current operator, ICANN

    The other camp wants to maintain the status quo by keeping broadly the same contract but giving it to a shell company overseen by a multi-stakeholder group. That shell company would then award it to ICANN. This second group believes that such an approach is vital if the principle of “separability” is going to be achieved.

  38. Tomi Engdahl says:

    Confusion, fear and growing pains: ICANN bigwig spells out gTLD headaches
    Akram Atallah tells NamesCon everything is fine. And terrible.
    http://www.theregister.co.uk/2015/01/13/confusion_fear_and_growing_pains_icann_coo_spells_it_out/

    It’s not easy being global DNS overseer ICANN right now.

    The addition of hundreds of new generic top-level domains – gTLDs from .book to .ninja – has been an operational headache; the transition of the key IANA contract has put it under an unfavorable spotlight; and a recent hack of its staff admin systems has raised questions over its technical competence.

    But everything is fine, and terrible, according to Akram Atallah, ICANN’s former chief operating officer and now president of the non-profit’s global domains division.

    “New [gTLD] registries are still trying to find their footing,” Atallah told the audience. “We are seeing a lot of new entrants.” A takeoff in domain sales, expected following the launch of the new gTLDs, has been slower than expected, forcing ICANN to cut its revenue forecast earlier this year. It takes a cut from domain registries, so weak sales hit the organization’s financials.

    for example, there’s little point in snapping up theregister.news if few others buy into the gTLD.

    Despite five reviews in seven years, people are still not happy with how ICANN makes its decisions.

    As for the IANA transition itself – something that some US politicians and most recently the Washington Post have called to be delayed – that is a misunderstanding. “There is a lot of confusion over the IANA transition, a lot of fear, and some sensational statements that are more about bringing attention than having material value,” he said.

  39. Tomi Engdahl says:

    Old copper telephone lines will soon bring high-speed broadband access to homes as new G.Fast modems start shipping later this year. Good news for installers: a G.Fast router does not need its own power cord.

    Microsemi, which develops G.Fast circuits together with the Israeli company Sckipio, demonstrated at CES in Las Vegas the reverse power feed function included in the G.Fast standard.

    G.Fast routers are intended to sit at the end of the fiber connection, often near the households they serve. It is often difficult to arrange an easy electrical connection at these locations.

    In the Vegas demo, DC power was fed into a Microsemi PD81001 circuit in a Sckipio chipset-based customer terminal. That power was carried over the copper line, through a Microsemi module, to the Sckipio-based distribution-point router (which connects to the fiber). The Microsemi module converted it to the 12-volt supply the router requires.

    G.Fast is a recently ratified ITU technology that brings gigabit connections within about a hundred meters of the distribution point. At 200 meters the standard promises speeds of 200 megabits per second, and at 250 meters 150 megabits per second.

    A big advantage of G.Fast modems is that subscribers can install them themselves, which facilitates and accelerates deployment.

    G.Fast speeds allow, for example, 4K video transmission over telephone lines.

    Source: http://www.etn.fi/index.php?option=com_content&view=article&id=2276:gigabitin-g-fast-reititin-ei-tarvitse-virtajohtoa&catid=13&Itemid=101

  40. Tomi Engdahl says:

    Massive MIMO becoming a central part of 5G

    5G technology is now a hot topic for researchers and network operators around the world. It is still unclear what all 5G will include, but current research already gives an indication of what is to come. One important part of 5G may be so-called “massive MIMO”, which has just been tested at Lund University in Sweden.

    A massive MIMO base station can be equipped with up to hundreds of antennas. They are meant to be small, inexpensive and essentially interchangeable components.

    Using hundreds of antennas requires that the signals from the base station be steered toward the points in the cell where the terminals are at any given moment. This requires advanced algorithms in the base stations.

    Tufvesson estimates that massive MIMO will find its way into commercial base stations in 5-10 years. In practice this means the technology will be mature when the first 5G networks are deployed in the early 2020s.
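    The beam steering described above can be sketched in a few lines: with conjugate (“maximum ratio”) precoding, the power delivered to a terminal grows roughly linearly with the number of base-station antennas M. This is a toy narrowband model for illustration only, not any vendor’s actual algorithm:

```python
import math
import random

def channel(m, rng):
    """Random Rayleigh-like channel: one complex gain per antenna,
    unit average power per antenna."""
    return [complex(rng.gauss(0, 1), rng.gauss(0, 1)) / math.sqrt(2)
            for _ in range(m)]

def mrt_gain(h):
    """Received power with unit-power conjugate precoding w = h* / ||h||.
    The antenna signals add coherently at the terminal's location."""
    norm = math.sqrt(sum(abs(x) ** 2 for x in h))
    w = [x.conjugate() / norm for x in h]
    rx = sum(hi * wi for hi, wi in zip(h, w))  # coherent sum = ||h||
    return abs(rx) ** 2

def average_gain(m, trials=500, seed=1):
    """Average received power over many random channel realizations."""
    rng = random.Random(seed)
    return sum(mrt_gain(channel(m, rng)) for _ in range(trials)) / trials
```

    Running this shows the “array gain”: average_gain(100) is roughly ten times average_gain(10), i.e. more antennas focus proportionally more power on the terminal.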

    Source: http://www.etn.fi/index.php?option=com_content&view=article&id=2268:massiivisesta-mimosta-keskeinen-osa-5g-ta&catid=13&Itemid=101

  41. Tomi Engdahl says:

    If cities want to run their own broadband, let ‘em do it, Prez Obama tells FCC
    … in a nicely written plea to the comms watchdog
    http://www.theregister.co.uk/2015/01/15/obama_fcc_city_fiber/

    President Obama wants US internet watchdog the FCC to overturn bans on city-run broadband in America.

    The White House has sent FCC Chairman Tom Wheeler, and his fellow commissioners, a letter requesting city governments be allowed to build and administer their own fiber networks.

    “Today, President Obama is announcing a new effort to support local choice in broadband, formally opposing measures that limit the range of options available to communities to spur expanded local broadband infrastructure, including ownership of networks,” administration officials said.

  42. Tomi Engdahl says:

    ‘Open’ SIMs, brain chips and Google’s Nest: What to expect in wireless in 2015
    We look at events that will shape the industry this year
    http://www.theregister.co.uk/2015/01/06/open_sims_brain_chips_and_googles_nest_what_to_expect_in_wireless_in_2015/

  43. Tomi Engdahl says:

    Marriott Abandons Quest to Block Personal Wi-Fi Hot Spots
    http://www.inc.com/kimberly-weisul/marriott-abandons-quest-to-block-your-personal-hotspot.html

    After an unsuccessful legal and PR battle, the hotel chain says it will not seek to be allowed to block personal hot spots in its conference and convention areas.

    “Marriott International listens to its customers, and we will not block guests from using their personal Wi-Fi devices at any of our managed hotels,” a spokesman said in an email.

  44. Tomi Engdahl says:

    Crap broadband holds back HALF of rural small biz types
    1 in 2 country folk outfits regularly swear at their routers – SME org
    http://www.theregister.co.uk/2015/01/15/crap_broadband_is_holding_back_rural_small_biz/

    Patchy connectivity in rural areas is hampering small businesses with half complaining of poor broadband speeds, according to an extensive survey from the Federation of Small Businesses.

    The FSB has previously criticised the government’s promise to achieve bandwidth speeds of 24Mbps to 95 per cent of premises by 2017, and 2Mbps to the remaining five per cent, as “not sufficiently ambitious”.

    Other countries have set significantly higher targets, it said. For example, Finland has committed to offering universal access of 100Mbps to its citizens by 2015, while in South Korea 90 per cent of the population will have access to 1000Mbps by 2017.

    According to the FSB as many as 45,000 SMEs are still using a dial-up connection for business purposes “quite possibly because they have no other option,” it has said.

    “This research paints a worrying picture of a divided business broadband landscape in the UK, and unless addressed highlights a clear obstacle to growth in the coming years. We risk seeing the emergence of a two-speed online economy resulting from poor rural broadband infrastructure.

  45. Tomi Engdahl says:

    Network lifecycle management and the Open OS
    http://www.edn.com/design/wireless-networking/4438310/Network-lifecycle-management-and-the-Open-OS

    The bare-metal switch ecosystem and standards are maturing, driven by the Open Compute Project.

    For decades, lifecycle management for network equipment was a laborious, error-prone process because command-line interfaces (CLIs) were the only way to configure equipment. Open operating systems and the growing Linux community have now streamlined this process for servers, and the same is beginning to happen for network switches.

    Network lifecycle management involves three phases: on-boarding or provisioning, production, and decommissioning. The state of network equipment is continually in flux as applications are deployed or removed, so network administrators must find ways to configure and manage equipment efficiently and cost-effectively.

    In the server world, the emergence of Linux-based operating systems has revolutionized server on-boarding and provisioning. Rather than using a CLI to configure servers one at a time, system administrators can use automation tools like Chef and Puppet to store and apply configurations with the click of a mouse. For example, suppose an administrator wants to commission four Hadoop servers. Rather than using a CLI to provision each of them separately, the administrator can instruct a technician to click on the Hadoop library in Chef and provision the four servers automatically. This saves time and eliminates the potential for configuration errors due to missed keystrokes or calling up an old driver.

    This kind of automated provisioning has been a godsend to network administrators and is fast becoming the standard method of lifecycle management for servers. But what about switches?

    Network administrators would like to use the same methodology for switches in their networks, but the historical nature of switches has held them back.

    Traditionally, network switches have been proprietary devices with proprietary operating systems. Technicians must use a CLI or the manufacturer’s own tools to provision a switch.

    Using a CLI for lots of repetitive tasks can lead to errors and lost productivity from repeating the same mundane tasks over and over again.

    Today, three manufacturers (Big Switch, Cumulus, and Pica8) are offering Linux-based OSs for bare-metal switches that allow these switches to be provisioned with standard Linux tools.

    Application-programming interfaces (APIs) like JSON or RESTful interfaces that interact with the operating system CLI are becoming more common. APIs help draw a second parallel between server and network lifecycle thinking. Open APIs give developers a common framework to integrate with home-grown and off-the-shelf management, operations, provisioning and accounting tools. Chef and Puppet are becoming common tools on the server side that also extend functionality for networking. Linux-based network OSs are open and offer the ability to run applications like Puppet in user space; simply typing “apt-get install puppet” installs it natively on the switch itself.

    The three phases of network lifecycle management: on-boarding or provisioning, production, and decommissioning all benefit from this combination of CLI, Linux, and open APIs. Tools around Linux help build the base of the stack, getting Linux onto the bare metal through even more fundamental tools like zero touch provisioning. A custom script using a JSON API might poll the switch OS for accounting data while in production. And lastly, Puppet could be used to push a new configuration to the switch, in effect decommissioning the previous application in this case.
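    As a small illustration of the “custom script using a JSON API” idea above, the sketch below polls a switch’s REST endpoint for interface counters and flags ports with high receive error rates. The URL path and field names here are hypothetical; each switch OS exposes its own schema:

```python
import json
from urllib.request import urlopen

def fetch_counters(switch_url):
    """Poll a (hypothetical) accounting endpoint on the switch OS."""
    with urlopen(switch_url + "/api/v1/interfaces/counters") as resp:
        return json.load(resp)

def ports_with_errors(counters, threshold=0.001):
    """Return names of ports whose receive error rate exceeds threshold."""
    bad = []
    for port, c in counters.items():
        total = c["rx_packets"] + c["tx_packets"]
        if total and c["rx_errors"] / total > threshold:
            bad.append(port)
    return sorted(bad)
```

    A cron job running such a script during the production phase gives the same kind of automated visibility that Chef and Puppet brought to servers.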

  46. Tomi Engdahl says:

    Net Fix: FCC chief on solving the Open Internet puzzle (Q&A)
    http://www.cnet.com/news/net-fix-fcc-chief-on-solving-the-open-internet-puzzle-q-a/

    With a vote looming on new rules for Internet access, Tom Wheeler talks candidly with CNET News in an exclusive interview about legal challenges, President Obama’s role and being the butt of late-night TV jokes.

  47. Tomi Engdahl says:

    Irene Klotz / Reuters:
    OneWeb secures funding from Virgin Group and Qualcomm to build and fly 648 Internet satellites

    Virgin, Qualcomm to invest in Internet-via-satellite venture
    http://www.reuters.com/article/2015/01/15/us-space-satellites-virgin-idUSKBN0KO2TA20150115

    Richard Branson’s Virgin Group and Qualcomm Inc will invest in a venture to build and fly a constellation of 648 satellites that can provide high-speed, global Internet access, company officials said on Thursday.

    WorldVu Satellites Limited, now operating as OneWeb, is currently reviewing proposals from potential manufacturers

    The constellation, which will cost between $1.5 billion and $2 billion, is intended to provide high-speed Internet and telephone services worldwide.

    OneWeb’s spacecraft will weigh less than 300 pounds (136 kg) and be positioned in orbits roughly 750 miles (1,207 km) above the Earth. The company already has been allotted use of a part of the radio spectrum for Internet services.

    “The OneWeb system will extend the networks of mobile operators globally, enabling them to provide coverage to rural and remote areas,” the company said in a statement.

    OneWeb intends to partner with local operators to provide Internet access.

    “Imagine the possibilities for the 3 billion people in hard to reach areas who are currently not connected,” Branson said in a statement.

  48. Tomi Engdahl says:

    Ashlee Vance / Businessweek:
    Elon Musk planning low-orbit Internet satellite system to compete with OneWeb, based out of SpaceX’s new Seattle office — Revealed: Elon Musk’s Plan to Build a Space Internet — Because he doesn’t have enough going on, Elon Musk—he of Tesla Motors, SpaceX, SolarCity, and the Hyperloop—is launching another project.

    Revealed: Elon Musk’s Plan to Build a Space Internet
    http://www.businessweek.com/articles/2015-01-17/elon-musk-and-spacex-plan-a-space-internet

    “Our focus is on creating a global communications system that would be larger than anything that has been talked about to date.”

  49. Tomi Engdahl says:

    Arctic Fibre Project to Link Japan and U.K.
    A 24-terabit-per-second undersea cable will connect Japan and the U.K. and also bring broadband to remote Arctic communities
    http://spectrum.ieee.org/telecom/internet/arctic-fibre-project-to-link-japan-and-uk

    Meter by meter, a slim vein of fiber-optic cable will soon start snaking its way across the bottom of three oceans and bring the world a few milliseconds closer together. The line will start near Tokyo and cut diagonally across the Pacific, hugging the northern shore of North America and slicing down across the Atlantic to stop just shy of London. Once the cable is live, light will transmit data from one end to the other in just 154 milliseconds—24 ms less than today’s speediest digital connection between Japan and the United Kingdom. That may not seem like much, but the investors and companies eager to send information—stock trades, wire transfers—are so intent on earning a fraction-of-a-second advantage over competitors that the US $850 million price tag for the approximately 15,600-kilometer cable may well be worth it.
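    A quick sanity check on the quoted figure: light in silica fiber travels at roughly c/n with a refractive index of about 1.47, so ~15,600 km of cable gives about 76 ms one way and ~153 ms round trip, which makes the 154 ms figure consistent with a round-trip time. A sketch (textbook physics, not Arctic Fibre’s own numbers):

```python
C_KM_PER_S = 299_792.458   # speed of light in vacuum, km/s
FIBER_INDEX = 1.468        # typical group index of single-mode silica fiber

def propagation_delay_ms(route_km, round_trip=False):
    """Pure propagation delay over a fiber route, ignoring equipment delays."""
    one_way_ms = route_km / (C_KM_PER_S / FIBER_INDEX) * 1000.0
    return 2 * one_way_ms if round_trip else one_way_ms

# ~15,600 km: about 76 ms one way, about 153 ms round trip.
```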

    Arctic Fibre, the Toronto-based company building the cable, is the first to try to connect the globe’s economic centers by laying fiber optics through the long-sought Northwest Passage—the pinhole of open water that warmer temperatures have brought to the Arctic. British Telecom, China Unicom, Facebook, Google, Microsoft, and TeliaSonera are watching closely, but so are tens of thousands of Canadians and Alaskans who stand to gain a huge boost in Internet access.

    Marine surveys will plot the cable’s route this summer, and the line will be custom built to the surveyors’ specifications. The installation is scheduled to start a year from now, and the cable could be in service by the end of 2016.

    All of these benefits stem from a 4-centimeter cable. Barges will lay it along most of the route.

    But to prevent a 1,800-km detour by sea, there is a 51-km section that must cross the Boothia Peninsula, a roadless scrap of tundra in northern Canada.
    The crew must then snowmobile along the cable’s intended route, cutting a trench about 30 cm deep through permafrost to bury the line.

  50. Tomi Engdahl says:

    Google Teams With Asian Telecoms For “Faster” Undersea Cable
    http://spectrum.ieee.org/tech-talk/telecom/internet/consortium-announces-new-submarine-cable

    Google is looking to improve connections in the global Internet by adding some more bandwidth. A consortium made up of the tech giant and five of Asia’s largest telecommunications firms has announced a plan to construct a new fiber optic cable that will run along the floor of the Pacific Ocean from Japan to the United States, carrying up to 60 terabits of data every second.

    The new cable, known as Faster

    “In 2018, the gigabyte equivalent of all movies ever made will cross Asia Pacific’s IP networks every 7 minutes.”

    When Faster enters service, it will join nearly 300 similar cables which are now responsible for carrying an estimated 95 percent of all Internet traffic worldwide. The six-fiber-pair cable, which has an estimated price tag of $300 million, is expected to be operational by the year 2016. It will run from two locations in Japan—the cities of Chikura, in Chiba Prefecture, and Shima, in Mie Prefecture—to the west coast of the United States, where it is expected to run to major cities there like Los Angeles, San Francisco, Portland, and Seattle.

    the new cable is not Google’s first foray into the field. It also invested in the construction of the Unity cable, which began linking the United States and Japan in 2010 along a route similar to the one Faster will serve. In building Faster, Google teams with the firms KDDI, Global Transit, China Mobile International, China Telecom Global, and SingTel.

