Telecom trends for 2015

In a few years there will be close to 4 billion smartphones on earth. Ericsson’s annual mobility report forecasts increasing mobile subscriptions and connections through 2020 (9.5 billion smartphone subscriptions by 2020 and an eight-fold traffic increase). The report also expects that by 2020, 90% of the world’s population over six years old will have a phone. In short, it describes a connected world in which everyone has a connection one way or another.

What about the phone systems in use? Today the majority of the world operates on GSM and HSPA (3G). Some countries are starting to have good 4G (LTE) coverage, but on average only 20% of the world is covered by LTE. 4G/LTE small cells will grow at twice the rate of 3G and surpass both 2G and 3G in 2016.

Ericsson expects that 85% of mobile subscriptions in the Asia Pacific, the Middle East, and Africa will be 3G or 4G by 2020, and that 75-80% of subscriptions in North America and Western Europe will be LTE by then. China is by far the biggest smartphone market in the world by current users, and it is rapidly moving to high-speed 4G technology.

Sales of mobile broadband routers and mobile broadband USB sticks are expected to continue to drop. In 2013, 87 million of those devices were sold, and in 2014 sales dropped another 24 per cent. China’s Huawei is the market leader (45%), so it has the most to lose here.

The small cell backhaul market is expected to grow. ABI Research believes 2015 will finally witness meaningful small cell deployments. Millimeter wave technology—thanks to its large bandwidth and NLOS capability—is the fastest growing technology. 4G/LTE small cell solutions will again drive most of the microwave, millimeter wave, and sub-6GHz backhaul growth in metropolitan, urban, and suburban areas. Sub-6GHz technology will capture the largest share of small cell backhaul “last mile” links.

Technology for full-duplex operation on a single radio frequency has been designed. The new practical circuit, known as a circulator, lets a radio send and receive data simultaneously over the same frequency, which could supercharge wireless data transfer. The design avoids magnets and uses only conventional circuit components. Used in wireless communications, such a circulator can double the available bandwidth by enabling full-duplex operation, i.e. devices can send and receive signals in the same frequency band simultaneously. Let’s wait and see whether this technology turns out to be practical.

Broadband connections are finally more popular than traditional wired telephone lines: in the EU, fixed broadband subscriptions will outnumber traditional circuit-switched fixed lines for the first time by the end of 2014.

After six years in the dark, Europe’s telecoms providers see a light at the end of the tunnel. According to a new report commissioned by industry body ETNO, the sector should return to growth in 2016. The projected growth for 2016, however, is small – just 1 per cent.

With headwinds and tailwinds, how high will the cabling market fly? Cabling for enterprise local area networks (LANs) experienced growth of between 1 and 2 percent in 2013, while cabling for data centers grew 3.5 percent, according to BSRIA, for a total global growth of 2 percent. The structured cabling market is facing a turbulent time. Structured cabling in data centers continues to move toward the use of fiber. The number of smaller data centers that will use copper will decline.

Businesses will increasingly shift from buying IT products to purchasing infrastructure-as-a-service and software-as-a-service. Both trends will increase the need for processing and storage capacity in data centers, and we will also need fast connections to those data centers. This will cause significant growth in WiFi traffic, which will mean more structured cabling used to wire access points. Convergence also will result in more cabling needed for Internet Protocol (IP) cameras, building management systems, access controls and other applications. This could mean a decline in the installation of separate special-purpose cabling for those applications.

The future of your data center network is a moving target, but one thing is certain: it will be faster. The four key developments in this field are 40GBase-T, Category 8, 32G and 128G Fibre Channel, and 400GbE.

Ethernet will increasingly move away from the 10/100/1000 speed series as proposals for new speeds push in. The move beyond gigabit Ethernet is gathering pace, with a cluster of vendors gathering around the IEEE standards effort to bring 2.5 Gbps and 5 Gbps speeds to the ubiquitous Cat 5e cable. With the IEEE standardisation process under way, the MGBase-T alliance represents the industry’s effort to accelerate the adoption of 2.5 Gbps and 5 Gbps speeds for connections to fast WLAN access points. Intense attention is also being paid to the development of 25 Gigabit Ethernet (25GbE) and next-generation Ethernet access networks, and development of 40GBase-T is under way as well.

Cat 5e vs. Cat 6 vs. Cat 6A – which should you choose? Stop installing Cat 5e cable. “I recommend that you install Cat 6 at a minimum today.” The cable will last much longer and supports higher speeds that Cat 5e simply cannot. Category 8 cabling is coming to data centers to support 40GBase-T.

A Power over Ethernet plugfest is planned for 2015 to test Power over Ethernet products. The plugfest will focus on the IEEE 802.3af and 802.3at standards relevant to IP cameras, wireless access points, automation, and other applications. It will test participants’ devices against the respective IEEE 802.3 PoE specifications, which distinguishes IEEE 802.3-based devices from other non-standards-based PoE solutions.

Gartner expects that wired Ethernet will start to lose its position in the office in 2015, or within a few years after that, because of the transition to using the Internet mainly on smartphones and tablets. The change is significant, because it will break Ethernet’s long reign in the office. Consumer devices have already moved to wireless, and now it is the office’s turn. Many factors speak in favour of the mobile office. Research predicts that by 2018, 40 per cent of enterprises and organizations of various sizes will make WLAN the default for their devices, so current workstations, desktop phones, projectors and the like would move to wireless. Expect the wireless LAN equipment market to accelerate in 2015 as spending by service providers and education comes back, 802.11ac reaches critical mass, and Wave 2 products enter the market.

Scalable and secure device management for telecom, network, SDN/NFV and IoT devices will become a standard feature. Whether you are building a high-end router or deploying an IoT sensor network, a device management framework with support for new standards such as NETCONF/YANG and web technologies such as Representational State Transfer (REST) is fast becoming a standard requirement. Next-generation device management frameworks can provide substantial advantages over legacy SNMP and proprietary frameworks.
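
As a rough illustration of what NETCONF/YANG-based management looks like from the client side, here is a minimal sketch using the Python ncclient library; the address, credentials and subtree filter are placeholder assumptions, not tied to any particular device:

    # Minimal NETCONF client sketch using ncclient (pip install ncclient).
    # Host, credentials and the subtree filter are hypothetical placeholders.
    from ncclient import manager

    with manager.connect(
        host="192.0.2.1",       # example address (TEST-NET-1)
        port=830,               # standard NETCONF-over-SSH port
        username="admin",
        password="admin",
        hostkey_verify=False,   # skip host-key checks only in demos
    ) as m:
        # Fetch the running configuration, restricted to a YANG-modelled
        # subtree (here: the device's interface list).
        reply = m.get_config(source="running",
                             filter=("subtree", "<interfaces/>"))
        print(reply.data_xml)

The structured, transactional get-config/edit-config operations over YANG-typed data are where such frameworks gain their edge over walking SNMP MIBs.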


U.S. regulators resumed consideration of mergers proposed by Comcast Corp. and AT&T Inc., suggesting a decision as early as March: Comcast’s $45.2 billion proposed purchase of Time Warner Cable Inc and AT&T’s proposed $48.5 billion acquisition of DirecTV.

There will be changes in the management of global DNS. The U.S. is in the midst of handing over its oversight of ICANN to an international consortium in 2015. The National Telecommunications and Information Administration, which oversees ICANN, assured people that the handover would not disrupt the Internet as the public has come to know it. Discussion is going on about what can replace the US government’s current role as IANA contract holder. IANA is the technical body that runs things like the global domain-name system and allocates blocks of IP addresses. Whoever controls it controls the behind-the-scenes of the internet; today, that’s ICANN, under contract with the US government, but that agreement runs out in September 2015.


1,044 Comments

  1. Tomi Engdahl says:

    The Internet of Things will get its own 4G – battery life is measured in years

    The organization behind the LTE standard is planning a version of the technology modified specifically for the needs of the Internet of Things (IoT).

    3GPP plans to develop an NB-IoT (Narrowband IoT) standard based on existing, internationally deployed mobile network technologies. The technology, however, will be tuned for battery-powered devices whose transferred data volumes are typically small.

    NB-IoT will be one of the technologies for building slow but long-range networks. This is useful, for example, for connecting sensors scattered over a wide area, or industrial machines with maintenance intervals of up to several years.

    The basis for the new standard is a combination of Narrow-Band LTE and Narrowband CIoT. The detailed technical implementation will be decided at the end of the year.

    Operators will be able to run the networks in part of their current LTE frequencies or in spectrum released by GSM networks.

    Source: http://www.tivi.fi/Kaikki_uutiset/esineiden-internet-saa-oman-4g-n-akkukesto-mitataan-vuosissa-3486857

    Reply
  2. Tomi Engdahl says:

    NIST Hits Quantum Teleport Key Out of the Park
    NIST bests world’s record at 63 miles
    http://www.eetimes.com/document.asp?doc_id=1327759&

    National Institute of Standards and Technology (NIST, Boulder, Colo.) has announced a breakthrough allowing secure quantum encryption keys to be teleported over 63 miles without a repeater.

    Uncrackable encryption keys use what’s called quantum key distribution (QKD), which transmits the decryption key to the receiver over special communications lines, using quantum teleportation to tell whether the key has been observed during transmission. Unfortunately, that requires the transmission of up to 128,000 single photons—one at a time—through a single optical fiber, requiring ultra-sensitive sensors at the receiving end that detect single photons. Until now, the longest distance a quantum key could be teleported was about 15 miles.

    Hard encryption is easy on a user’s device itself—just set your solid-state or hard drive to encrypt the contents of your drive with a long random passcode. However, today transmitting encrypted messages over public networks depends on the computational impracticality of deriving a properly generated private decryption key from its public key and a passcode. Depending on public key encryption is “good enough” for most users, because of the difficulty of cracking the passcode. But for banks, stock exchanges and government “top secrets,” it’s possible for foreign governments to crack public-key-encrypted messages with supercomputers that try every passcode (the hard way, but widely available) or with quantum computers that can guess many passcodes simultaneously (the easy way, but not widely available, yet).

    Reply
  3. Tomi Engdahl says:

    BT promises 300Mbps broadband for 10 million homes by 2020
    http://www.engadget.com/2015/09/22/bt-broadband-promises/

    BT’s chief executive Gavin Patterson has emerged today with a laundry list of promises designed to improve broadband speeds, coverage and public confidence in the UK. First up is a commitment to a new, minimum broadband speed of 5-10Mbps, which the company claims will be enough for people to “enjoy popular internet services like high definition video.” The idea to push for a minimum standard was actually introduced by the UK government earlier this year.

    There’s also the matter of the speeds themselves — 5Mbps, most would argue, isn’t enough to support a family or a group of flatmates that regularly use the internet simultaneously.

    To introduce such a proposal, Britain needs stable, extensive broadband coverage. The government’s current target is to offer 2Mbps to everyone in the UK and at least 24Mbps to 95 percent of the population by 2017.

    Patterson says it’s now “potentially available” to increase the UK’s coverage target to 96 percent, although we’ll have to wait and see if that materialises. BT already has a plan to make it happen though — Patterson hinted at a new satellite broadband service that will launch this year and connect remote parts of the UK.

    All of this should create a broad base of usable, if not blazingly fast internet. At the other end of the spectrum, BT is trialling Fibre To The Distribution Point (FTTdp), commonly referred to as “G.fast,” which could jack up the slower speeds experienced by some existing customers. The company is aiming for “a few hundred megabits per second” initially, with plans to raise the speeds to 500Mbps over time. In January, it said this ultrafast broadband would be available to “most of the UK” within a decade. Now, Patterson is improving that target — he says the technology, along with some superior Fibre To The Premises (FTTP) provision, will connect 10 million homes and small businesses by 2020, before supplying “the majority” of UK premises by the end of the decade.

    Reply
  4. Tomi Engdahl says:

    Xiaomi plans ‘mini-mi’ mobile network
    Chinese mobe vendor to Cupertino: ‘You snooze, you lose’
    http://www.theregister.co.uk/2015/09/24/xiaomi_plans_minimi_mobile_network/

    China’s leading phone maker plans to capitalise on its success flogging mobes to the masses by operating its own-brand mobile network.

    Xiaomi has launched Mi Mobile, a wireless MVNO (mobile virtual network operator) that will compete against China’s stodgy state-owned mobile carriers.

    According to Reuters, the company is keeping things simple for now, offering just two low-cost pre-paid plans for voice and data. A 3GB data bundle costs around US$10 while voice calls work out at less than 2 cents a minute. Customers will be able to buy these services through the company’s online store.

    It’s a bold move. To date MVNOs have failed to fire in China, where the market is dominated by state-owned carriers. Xiaomi is banking on its brand’s high visibility to get punters through the door. Chinese regulators are keen to use MVNOs to inject much-needed competition into the nation’s mobile market.

    China’s Xiaomi announces telecom carrier service, new flagship handset
    http://www.reuters.com/article/2015/09/22/us-xiaomi-telecoms-idUSKCN0RM13620150922

    Xiaomi Inc, China’s leading smartphone maker, announced on Tuesday two prepaid wireless plans to mark its debut as a mobile virtual network operator (MVNO) competing against China’s national carriers.

    Xiaomi’s new wireless business, called Mi Mobile, will offer voice and data services and utilize either the China Unicom or China Telecom networks.

    The launch comes less than six months after Google Inc announced it would launch an MVNO service in the United States called “Fi” that piggybacks off Sprint and T-Mobile’s networks.

    Reply
  5. Tomi Engdahl says:

    Privacy, net neutrality, security, encryption … Europe tells Obama, US Congress to back off
    Letter from 50 MEPs stresses EU will decide own laws, thanks
    http://www.theregister.co.uk/2015/09/23/european_politicians_to_congress_back_off/

    A letter sent to the US Congress by over 50 members of the European Parliament has hit back at claims of “digital protectionism” emanating from the United States.

    Sent on Wednesday, the letter [PDF] takes issue with criticisms from President Obama and Congress over how the EU is devising new laws for the digital era.

    Statement on ‘digital protectionism’
    http://www.marietjeschaake.eu/wp-content/uploads/2015/09/2015-09-22-MEPs-Statement-on-Digital-Protectionism.pdf

    As Members of European Parliament we are surprised and concerned about the strong
    statements coming from US sources about regulatory and legislative proposals on the digital agenda for the EU. While many of these are still in very early stages, President Obama spoke of ‘digital protectionism’, and many in the private sector echo similar words.

    Reply
  6. Tomi Engdahl says:

    Google’s Newest Compression Algorithm Will Stealthily Make Your Internet Faster
    It’s called “Brotli” and it can squish down data like a champ.
    http://www.popularmechanics.com/technology/news/a17445/google-brotli-compression-algorithm/

    When we think about how fast our internet is, we tend to think about one facet of it: the speed of the connection. How fat is your pipe? How much data can it slurp down? Make it bigger, make it faster. But there’s another, different way to go about making the internet faster. Make everything on it smaller.

    The newest tool in the data-squishing toolbox is Google’s Brotli algorithm. Officially unveiled and released to the world at large today, it’s the successor to a different compression algorithm called Zopfli, which Google published in 2013. (Both named after Swiss bakery products because of course). And it stands to squish data on the internet by nearly a quarter.

    Zopfli is built on the same widespread algorithm that’s used when you make .ZIP files.

    Google’s new Brotli algorithm also builds on the past, but breaks from it as well.

    Browsers will have to actively adopt it. But the advantage is that Brotli can shrink data by 20-26 percent more than Zopfli could, according to a study released by Google.

    And that means faster internet for you.
    http://www.gstatic.com/b/brotlidocs/brotli-2015-09-22.pdf
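
    As a rough illustration of the gain (a sketch, not from the article), the Python brotli bindings can be compared directly against zlib/DEFLATE, the algorithm behind .ZIP and gzip; the package and sample file are arbitrary choices:

        # Compare Brotli with zlib/DEFLATE on any large text file.
        # Requires the "brotli" package (pip install brotli).
        import zlib
        import brotli

        data = open("/usr/share/dict/words", "rb").read()

        deflated = zlib.compress(data, 9)             # DEFLATE, max effort
        brotlied = brotli.compress(data, quality=11)  # Brotli, max quality

        print("original:", len(data))
        print("deflate :", len(deflated))
        print("brotli  :", len(brotlied))
        assert brotli.decompress(brotlied) == data    # lossless round-trip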

    Reply
  7. Tomi Engdahl says:

    OpenSignal:
    LTE adoption report: Asia has the best performing networks, 20 Mbps connections now commonplace, and the US is falling behind others in speed — The State of LTE — Have we fully entered the 4G age? The answer to that question depends on where on the globe you live.

    The State of LTE (September 2015)
    http://opensignal.com/reports/2015/09/state-of-lte-q3-2015/

    Have we fully entered the 4G age? The answer to that question depends on where on the globe you live. In OpenSignal’s most recent batch of data we found that in some countries LTE has become a near-ubiquitous technology, providing broadband speeds no matter where you go. In other countries, LTE is just beginning its adolescence.

    But in general we’re seeing both speeds and 4G availability creeping up across the globe as operators deploy new networks in new places and upgrade the networks they’ve already built. Getting a 20 Mbps connection is now commonplace in multiple countries as operators expand into new frequency bands and take advantage of new LTE-Advanced techniques. We’re seeing awe-inspiring data rates in seemingly unlikely places like eastern Europe as operators who entered the 4G race late make up for lost time. We’re also seeing some of LTE’s earliest adopters such as the U.S. fall behind their global peers.

    OpenSignal collects its data from smartphone owners like you through its app (available on iOS and Android). That anonymous crowdsourced data goes into building our impartial coverage maps as well as our analytical reports.

    Reply
  8. Tomi Engdahl says:

    Jessi Hempel / Wired:
    Internet.org rebrands its free services mobile website and app as Free Basics to distinguish it from larger initiative, adds 60 new services, HTTPS support

    Facebook Renames Its Controversial Internet.org App
    http://www.wired.com/2015/09/facebook-renames-controversial-internet-org-app/

    Facebook is rebranding its most prominent—and controversial—effort to connect the unconnected. Today the company said it will change the name of its Internet.org app and mobile website, now available to mobile phone users in 18 countries, to Free Basics by Facebook.

    The change is intended to better distinguish the app and website from Internet.org, the larger initiative that spawned it and is incubating many technologies and business models to help get the web to new users faster. The rebranding announcement comes days before Indian Prime Minister Narendra Modi is set to visit Facebook’s campus

    Free Basics, née Internet.org, has faced a global backlash that began in India last April.

    The criticism gained momentum in May when nearly 70 advocacy groups released a letter to Zuckerberg protesting Internet.org, arguing it violated net neutrality principles and stirred security concerns.

    In a blog post and a video, Facebook founder Mark Zuckerberg defended the program, saying it didn’t block or throttle services and therefore didn’t conflict with net neutrality. He said it cost too much to make the entire Internet available to everyone; Facebook’s approach was an economically viable way to bring the Internet to people who wouldn’t otherwise have it. “Net neutrality should not prevent access,” he said in a seven-minute video he made in May. “It’s not an equal Internet if the majority of people can’t participate.”

    Opening Up

    Though Zuckerberg defended the spirit of the program, the company also worked quickly to address concerns about equal access, privacy, and security.

    These moves are classic Zuckerberg. Caught slightly off-guard by the backlash, he has moved quickly to address critics’ concerns.

    Reply
  9. Tomi Engdahl says:

    I don’t have a mobile performance problem, I have a CDN
    http://www.twinprime.com/i-dont-have-a-mobile-performance-problem-i-have-a-cdn/

    “I don’t have a mobile performance problem; I have a CDN” is something I hear from prospective customers frequently. My answer to this is always a two-part question: “So what apps do you think are slow on your phone, or what apps have you heard your friends complain about?” And then I hear them list a bunch of apps. Almost every app gets mentioned – from slow image loading on Facebook and Instagram, to links to articles not loading on Twitter, or how long Amazon or Walmart took to display a product search query – the list never ends! The second question I ask is “And don’t you think all of these apps use a CDN?” The response is always one of surprise!

    Something is different about mobile. The same app will work flawlessly and fast sometimes, and it will be slow at other times.

    Today, almost every app uses a CDN, yet the inconsistent mobile performance problem persists because 70% of the latency in mobile occurs in the wireless last mile. CDNs were built for solving the “first mile” origin server to network edge latency problem, and they do a good job. The mobile performance problem is in the last mile, so only using CDNs is not going to be of much help. That’s why apps using CDNs (like Facebook, and your app) are still slow and inconsistent.

    Mobile is the most diverse computing medium we have encountered. CDNs are ill-equipped to deal with this diversity because they mainly rely on using the same optimizations for all operating conditions.

    Network

    In the wired Internet, the edge of the network is usually tens of miles away from us. When we access the New York Times from our laptops, the content is being pulled from the closest CDN data center, usually a few miles away. In the mobile cellular network, the edge of the network is usually the core of the Internet. The cellular connection from our device is tunneled (over GTP) to centralized locations in AT&T or Verizon’s network. The tunnels are then terminated, and the traffic becomes IP and enters the Internet. All cellular operators have only a handful of these centralized locations (GGSN, P-Gateway)–rarely over a dozen. This means that mobile networks are more centralized, and the notion that we are going to our closest CDN doesn’t hold true.

    Summary

    With the growth of mobile, there has been a fundamental shift in network architecture, content consumption, and the physics of the network. CDNs were not built to solve these challenges.

    CDNs are great at solving some complex engineering challenges like scaling, content distribution, providing geo footprint, etc., but they do not solve mobile performance challenges.

    Reply
  10. Tomi Engdahl says:

    North America Just Ran Out of Old-School Internet Addresses
    http://www.wired.com/2015/09/north-america-just-ran-old-school-internet-addresses/

    Every computer, phone, and gadget that connects to the Internet has what’s called an Internet Protocol address, or IP address—a kind of numerical name tag for every device online. And the Internet is rapidly running out of the most commonly used type of IP address, known as IPv4. Today, the American Registry for Internet Numbers (ARIN), the organization responsible for issuing IP addresses in North America, said that it has run out of freely available IPv4 addresses.

    That won’t affect normal Internet users, but it will put more pressure on Internet service providers, software companies, and large organizations to accelerate their migration to IPv4’s successor, IPv6.

    Yes, this news may sound familiar. WIRED reported back in 2011 that the Internet had run out of IP addresses

    ARIN president John Curran explains that the organization isn’t entirely out of IPv4 addresses. Some are set aside for specific purposes, such as the exchange sites where connections between different Internet service providers’ networks meet. But providers that want new IP addresses will have to settle for IPv6 numbers unless old, unused IPv4 addresses are returned to the organization.

    ARIN IPv4 Free Pool Reaches Zero
    https://www.arin.net/announcements/2015/20150924.html

    Reply
  11. Tomi Engdahl says:

    Google ‘cubists’ fix bug in Linux network congestion control, boost performance
    It’s a wonder the ‘net works at all, really
    http://www.theregister.co.uk/2015/09/29/google_cubists_fix_congestion_control_for_faster_tcp/

    A bit of “quality, non-glamorous engineering” could give a bunch of Linux servers a boost by addressing an unnoticed bug in a congestion control algorithm.

    The patch was provided by Googlers in the Chocolate Factory’s transport networking team, with contributions from Jana Iyengar, Neal Cardwell, and others.

    It fixes an old flaw in a set of routines called TCP CUBIC designed to address the “slow response of TCP in fast long-distance networks,” according to its creators.

    Like any congestion control algorithm, TCP CUBIC makes decisions based on congestion reports: if the network becomes jammed with traffic, hosts are told to slow down.

    As Mozilla developer Patrick McManus explains here, the bug was simple: TCP CUBIC interprets a lack of congestion reports as an opportunity to send data at a faster rate.

    That condition could, however, arise merely because the system hasn’t been getting any congestion reports.

    What’s supposed to happen in congestion control is that the operating system starts sending data slowly, increases its transmission rate until the network says “that’s enough”, and then backs off.

    The bug in TCP CUBIC fools the system into thinking it has a clear run at the network and should transmit at the maximum possible rate, crashing into other traffic, and ruining performance and efficiency.

    “The end result is that applications that oscillate between transmitting lots of data and then laying quiescent for a bit before returning to high rates of sending will transmit way too fast when returning to the sending state,” McManus explained.

    That condition could be quite common, he notes. A server may have sent a short burst of data over HTTP containing a web form for someone to fill out, and go quiet waiting for a response, then assume there’s no congestion, and burst out of the blocks at top-rate when it gets the user’s response.

    “A far more dangerous class of triggers is likely to be the various HTTP based adaptive streaming media formats where a series of chunks of media are transferred over time on the same HTTP channel”, McManus added.

    That’s why a fix for the ancient bug could be important: Linux is used in many media servers, and for the last decade, an important chunk of congestion control hasn’t been working quite right.

    tcp_cubic: better follow cubic curve after idle period
    https://github.com/torvalds/linux/commit/30927520dbae297182990bb21d08762bcc35ce1d
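
    A sketch of the idea (an illustration, not the kernel code): CUBIC grows the congestion window along W(t) = C*(t-K)^3 + Wmax, where t is the time since the current growth epoch began. Counting idle time into t resumes sending far too high up the curve; the fix shifts the epoch start forward by the idle time. The constants below are typical values, not taken from the patch:

        # Illustration of the CUBIC idle-period bug and the fix's effect.
        C = 0.4         # CUBIC scaling constant (typical value)
        W_MAX = 100.0   # window, in packets, at the last loss event
        BETA = 0.3      # multiplicative-decrease fraction (typical value)

        def cubic_window(t):
            """Congestion window t seconds after the epoch start."""
            K = (W_MAX * BETA / C) ** (1.0 / 3.0)  # time to regain W_MAX
            return C * (t - K) ** 3 + W_MAX

        t_active = 2.0   # seconds actually spent sending in this epoch
        idle = 10.0      # seconds spent quiescent (application-limited)

        # Bug: idle time counts toward t, so sending resumes too fast.
        print("buggy cwnd:", cubic_window(t_active + idle))  # ~289 packets

        # Fix: advance the epoch start by the idle time, so growth
        # resumes where it left off on the cubic curve.
        print("fixed cwnd:", cubic_window(t_active))         # ~96 packets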

    Reply
  12. Tomi Engdahl says:

    When OM3 almost didn’t happen
    http://www.cablinginstall.com/articles/print/volume-23/issue-9/departments/editorial/when-om3-almost-didn-t-happen.html?cmpid=EnlCIMSeptember282015&eid=289644432&bid=1187470

    While digging around for information on wideband multimode fiber (WBMMF) for an article in this issue, I was told a story I hadn’t heard before concerning OM3 fiber and how it almost didn’t come to be.

    When the IEEE’s P802.3ae Task Force was in the throes of developing the 10-Gigabit Ethernet specifications more than a decade ago, a certain camp within the group advocated for singlemode fiber as the only optical media type to be recognized. These contributors didn’t want there to be a short-wavelength, multimode-fiber-supported flavor of 10-Gbit Ethernet. And they almost got their wish based on the manner in which standards bodies like the IEEE and the INCITS (which develops Fibre Channel specifications) cooperate with other standards bodies like the IEC or the TIA.

    IEEE 802.3ae, 10-Gigabit Ethernet, was nearly all the way through its approval process while the OM3 fiber specification remained an unfinished project. By just about the width of a human hair, the producers of the OM3 fiber specification completed their project approximately one meeting cycle before the IEEE’s “drop dead” date by which it was required in order to be referenced in 802.3ae.

    Since being introduced to the marketplace, OM3 frequently has been called “laser-optimized multimode” because it is optimized to support the transmission of signals generated by vertical cavity surface emitting lasers (VCSELs) at and very near the 850-nanometer window of operation. The farther you move away from that 850-nm range, the less able OM3 is to support long-distance transmission. The same is true of the higher-bandwidth OM4 multimode.

    Wavelength-division multiplexing is coming to short-wave transmission and will operate roughly between 850 and 950 nm. It’s why WBMMF is under development.

    Reply
  13. Tomi Engdahl says:

    Applications for 28-AWG twisted-pair cabling systems
    http://www.cablinginstall.com/articles/print/volume-23/issue-9/features/installation/applications-for-28-awg-twisted-pair-cabling-systems.html

    Most of the copper patch cords found in telecommunications rooms (TRs) today are constructed with 24-AWG copper conductors. With copper pairs of this size, a typical Category 5e cord is approximately 0.215 inches in outer diameter (OD); a typical Category 6 cord is 0.235 inches in OD; and a typical Category 6A cord is 0.275 inches in OD. By contrast, cords that use 28-AWG copper wires are significantly smaller in diameter. A Category 5e cord with 28-AWG wires is 0.149 inches in OD (48 percent the size of a typical Category 5e cord with 24-AWG wires); a Category 6 cord with 28-AWG wires is 0.15 inches in OD (41 percent the size of a typical 24-AWG Category 6 cord); and a Category 6A cord with 28-AWG wires is 0.185 inches in OD (45 percent the size of a typical 24-AWG 6A cord). In all these cases, the use of 28-AWG rather than 24-AWG wires reduces the cord’s size by more than 50 percent. In this article I will detail the advantages that come with the use of 28-AWG cabling – in telecom rooms as well as in data centers – in addition to other considerations, to help you understand and decide if 28-AWG cabling is a good fit for your situation.
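
    For the record, the “percent of the size” figures above compare cross-sectional areas, which scale with the square of the outer diameter; a quick check of the arithmetic (my own, using the ODs quoted in the article):

        # Verify the size figures: area ratio = (OD ratio) squared.
        cords = {                 # (28-AWG OD, 24-AWG OD) in inches
            "Cat 5e": (0.149, 0.215),
            "Cat 6":  (0.150, 0.235),
            "Cat 6A": (0.185, 0.275),
        }
        for cat, (od28, od24) in cords.items():
            ratio = (od28 / od24) ** 2
            print(f"{cat}: {ratio:.0%} of the 24-AWG cross-section")
        # Cat 5e: 48%, Cat 6: 41%, Cat 6A: 45% -- a reduction of more
        # than 50 percent in each case, as stated above.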

    Reply
  14. Tomi Engdahl says:

    Operators’ call business to lose billions

    Have you made calls with Skype? Or WhatsApp? If so, you are eating into operators’ precious income. Juniper predicts that such OTT (Over the Top) calls will take billions of euros from operators each year.

    By 2020, the OTT telephony market will grow by more than 10 billion dollars. This means their use will grow five-fold over the next five years.

    According to Juniper, the operators’ problem will only grow as 4G networks become more common. When WhatsApp video runs smoothly over the network, why would anyone make a paid call via the operator’s network? The fact is that very few OTT providers have managed to generate any real revenue from calls.

    The operators have not been left unarmed. They are racing to develop their own OTT services. Usually this means VoLTE, fully IP-based voice calls over the network. Alternatively, they offer their subscribers the possibility of making phone calls over Wi-Fi at home.

    Source: http://etn.fi/index.php?option=com_content&view=article&id=3381:operaattorien-puhelubisneksesta-haviaa-miljardeja&catid=13&Itemid=101

    Reply
  15. Tomi Engdahl says:

    Verisign opens up its DNS
    Free for ordinary users, promises not to harvest your requests
    http://www.theregister.co.uk/2015/09/30/verisign_opens_up_its_dns/

    Verisign is throwing its hat into the “free DNS” ring, promising not to retain information about recursive requests to its just-launched service.

    Verisign Public DNS is at 64.6.64.6 / 64.6.65.6, alas nowhere near as easy for people to remember as Google’s 8.8.8.8 / 8.8.4.4.

    In the blog post launching the service, the director of product management for the service Michael Kaczmarek says most people don’t understand that their recursive DNS requests can be, and routinely are, harvested, stored, mined and “sold to the highest bidder”.

    There’s also the practice of redirecting failed DNS queries, which regularly becomes a sore point for in-the-know Internet users.
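
    Pointing a resolver at the new service is straightforward; here is a minimal sketch using the Python dnspython library (library choice and query name are my own, not from the article):

        # Query Verisign Public DNS directly using dnspython
        # (pip install dnspython).
        import dns.resolver

        res = dns.resolver.Resolver(configure=False)  # ignore /etc/resolv.conf
        res.nameservers = ["64.6.64.6", "64.6.65.6"]  # Verisign Public DNS

        answer = res.resolve("example.com", "A")      # query name is arbitrary
        for rr in answer:
            print(rr.address)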

    Reply
  16. Tomi Engdahl says:

    All-optical switching in transparent networks: challenges and new implementations
    http://www.edn.com/design/analog/4440369/All-Optical-switching-in-Transparent-Networks–Challenges-and-new-implementations?_mc=NL_EDN_EDT_EDN_today_20150924&cid=NL_EDN_EDT_EDN_today_20150924&elq=067d1f5a48cd433c8c28fba75133dd04&elqCampaignId=24908&elqaid=28264&elqat=1&elqTrackId=7debe33c0057448b8bb3699e88dc7222

    Modern optical communications emerged with the development of both a powerful coherent optical source that could be modulated (lasers [1]) and a suitable transmission medium (optical fibers [2]). Expressed in terms of analog bandwidth, a 1nm waveband translates to a bandwidth of 178GHz at 1300nm and 133GHz at 1500nm. Thus, optical fibers have a total usable bandwidth of approximately 30THz. Assuming the ubiquitous on-off keying format which has a theoretical bandwidth efficiency of 1bps/Hz, one can expect a 30Tbps digital bandwidth if fiber nonidealities are ignored.
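
    The waveband-to-bandwidth conversion above follows from df = c*dl/l^2; a quick check of the quoted figures (my own arithmetic):

        # A 1 nm slice of spectrum maps to df = c * dl / lam**2 of bandwidth.
        c = 3.0e8                        # speed of light, m/s
        dl = 1e-9                        # 1 nm waveband, in metres

        for lam in (1300e-9, 1500e-9):   # operating wavelengths, in metres
            df = c * dl / lam ** 2
            print(f"{lam * 1e9:.0f} nm: {df / 1e9:.0f} GHz per nm")
        # 1300 nm: 178 GHz, 1500 nm: 133 GHz, matching the article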

    Given the immense potential of optical fibers, it comes as no surprise that they are predominantly replacing copper as the transmission medium of choice, vastly increasing single-link bandwidth in the process.

    The past decade has witnessed a networking paradigm shift from connection-oriented communication to high-bandwidth, IP-centric, packet-switched data traffic. All this traffic is driven by the influx of high-bandwidth applications [3] which have caused an insatiable demand for increased data rates in optical long-haul communications. [4]

    The availability of such high-bandwidth applications relies heavily upon the ability to transport data in a fast and reliable manner without significantly increasing operating and ownership costs.

    Transparency

    One can define network transparency based on the parameters of the physical layer (e.g., bandwidth, signal-to-noise ratio). It can also be the measurement of the signals remaining in the optical domain, as opposed to those interchanging between the optical and electronic domains. Transparency can also mean the type of signals that the system supports, including modulation formats and bit rates. Given all these considerations, a transparent, all-optical network (AON) is commonly defined as one where the signal remains in the optical domain throughout the network. Transparent networks are attractive due to their flexibility and higher data rate. In contrast, a network is considered opaque if it requires its constituent nodes to be aware of the underlying packet format and bit rate.

    The lack of transparency is a pressing concern in current networks, as the need to handle data streams in the electrical domain engenders a large optical-electronic bandwidth mismatch. [5] The bandwidth on a single wavelength is 10Gbps (OC-192/STM-64) today and is likely to exceed 100Gbps (OC-3072/STM-1024) in the near future. Electronics will be hard pressed to keep pace with the optical data rate as it spirals upwards

    The advancement of device implementation technologies makes it possible to design AONs in which optical signals on an arriving wavelength can be switched to an output link of the same wavelength without conversion to the electronic domain. Signals on these AONs can be of different bit rates and formats, as they are never terminated inside the core network. This bit rate, format, and protocol transparency are vitally important in next-generation optical networks.

    Optical switches can be broadly classified as either opaque or transparent, depending on their implementation technologies.

    Opaque switches, also billed as optical cross-connects (OCXs), convert the incoming optical signals into electrical form. The actual switching is then performed electronically using a switching fabric with the resultant signals converted back to optical form at the output.

    Transparent switches, also billed as photonic cross-connects (PCXs), do not perform any OEO conversions. This allows them to function independent of the data type, format or rate, albeit only over a range of wavelengths termed the passband. Viable PCX technologies should demonstrate superiority in switching speed, extinction ratio, scalability, insertion loss (IL), polarization-dependent loss (PDL), crosstalk, and power consumption.

    Microelectromechnical systems (MEMS) are a powerful means of implementing optical switches because a MEMS system uniquely integrates optical, mechanical, and electrical components on to a single wafer. MEMS switches use micromirrors that redirect light beams to the desired output port.

    The 2D switches are easier to control and have more stringent tolerances, but do not scale up as well due to optical loss. The 3D switches alleviate the scalability problem by allowing movement on two axes but, consequently, have much tighter tolerances.

    Acousto-optical (AO) switches use ultrasonic waves travelling within a crystal or planar waveguide that deflects light from one path to another

    Electro-optical (EO) switches take advantage of the changes in the physical properties of materials when a voltage is applied. These switches have been implemented using liquid crystals, switchable waveguide Bragg gratings, semiconductor optical amplifiers (SOAs), and LiNbO3

    Semiconductor optical amplifier (SOA)-based switches also suffer from a limited dynamic range, potentially creating cross-modulation and inter-modulation.

    Thermal-optical (TO) switches are based on either the waveguide thermo-optic effect or the thermal behavior of materials.

    Magneto-optic (MO) switches are based on the Faraday rotation of polarized light when it passes through a magneto-optic material in the direction of an applied field. [23] Changing the polarization of an electromagnetic wave is an indirect method of controlling the relative phase of its constituent orthogonal components.

    Reply
  17. Tomi Engdahl says:

    Optical cages cover today’s highest signal speeds
    http://www.edn.com/electronics-products/electronic-product-reviews/other/4440417/Optical-cages-cover-today-s-highest-signal-speeds?_mc=NL_EDN_EDT_EDN_productsandtools_20150928&cid=NL_EDN_EDT_EDN_productsandtools_20150928&elq=5cd725211cbf412d86d03308fc038ced&elqCampaignId=24960&elqaid=28321&elqat=1&elqTrackId=3c83190b60d3478b8661c5f222deecb6

    If there’s one sure thing about data communications, it’s that there’s never enough bandwidth. Today’s 10 Gbps optical links are stressed trying to deliver data. While 100 Gbps links (4×25 Gbps) are available, the optics can be too expensive for many companies. The single-lane 25 Gbps link (IEEE 802.3by) can bridge the gap between speed and cost. It’s 2.5 times faster than 10 Gbps, but the installation costs are lower than those of 100 Gbps links.

    The zQSFP+ family of optical cages from TE Connectivity mount on boards and on cables. They fill the bill in the cost-versus-speed issue. The cages are available in multitude of configurations, from a single 1×1 housing to 1×6 or stacked from 2×1 to 2×3.

    zQSFP+ cages are backward compatible with QSFP/QSFP+ 100 Gbps (4×25 Gbps) optics. Thus, they can be used for designing line cards with either speed. You can use these cages with Fibre Channel or InfiniBand systems as well. zQSFP+ cages can provide extra electrical margin for 10 Gbps and 16 Gbps applications.

    Reply
  18. Tomi Engdahl says:

    System Status as SMS Text Messages
    http://www.linuxjournal.com/content/system-status-sms-text-messages

    Let’s Talk about Text Messages

    I was watching the Apple introduction of its new Apple Watch and was struck by the fact that like a few of the high-end Android smart watches, it will show you the entirety of e-mail and text messages on the tiny watch screen. This means it’s a great device for sysadmins and Linux IT folk to keep tabs on the status of their machine or set of machines.

    Sure, you could do this by having the system send an e-mail, but let’s go a bit further and tap into one of the e-mail-to-SMS gateways instead. Table 1 shows a list of gateway addresses for the most common cellular carriers in the United States.

    For example, I can send a text message to someone on the AT&T network with the number (303) 555-1234 by formatting the e-mail

    Armed with this information, there are a lot of different statuses that you can monitor and get a succinct text message if something’s messed up.

    What Else Could You Monitor?

    Tracking load average is rather trivial when you think about all the many things that can go wrong on a Linux system, including processes that get wedged and use an inordinate amount of CPU time, disk space that could be close to filling up, RAM that’s tapped out and causing excessive swapping, or even unauthorized users logging in.

    All of those situations can be analyzed, and alerts can be sent to you via e-mail or SMS text message.
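
    A minimal sketch of the idea in Python (my own, not from the article): watch the one-minute load average and send an alert through a carrier’s email-to-SMS gateway. The SMTP server and phone number are placeholders; txt.att.net is AT&T’s commonly documented gateway address.

        # Send an SMS alert via an email-to-SMS gateway when load is high.
        import os
        import smtplib
        from email.message import EmailMessage

        THRESHOLD = 4.0                  # 1-minute load average limit

        load1, _, _ = os.getloadavg()    # same data as /proc/loadavg
        if load1 > THRESHOLD:
            msg = EmailMessage()
            msg["From"] = "monitor@example.com"
            msg["To"] = "3035551234@txt.att.net"  # number@carrier-gateway
            msg["Subject"] = "load alert"
            msg.set_content(f"load average {load1:.2f} on {os.uname().nodename}")

            with smtplib.SMTP("localhost") as s:  # assumes a local MTA
                s.send_message(msg)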

    Reply
  19. Tomi Engdahl says:

    SS: Professor puzzled by Finland’s high-speed broadband: “300 Mbps is 290 megabits too much”

    Networking technology professor Jukka Manner from Aalto University tells the newspaper Savon Sanomat that he is puzzled by the broadband subscriptions sold to Finnish consumers. According to Manner, today’s broadband connections reaching up to 300 megabits per second offer little real benefit to the consumer.

    “For example, watching video from Netflix or Yle Areena in HD quality requires only a 10 Mbps download speed. A 300 Mbps connection is 290 megabits too much, because the picture does not get any better. Even the next step up in video quality, 4K, needs only 20 to 30 megabits – not 300 megabits or even 50 megabits,” Manner says.

    High-speed connections are not completely pointless, however. Manner points out that the benefits appear when the connection is shared between several users.

    Source: http://www.tivi.fi/Kaikki_uutiset/ss-professori-ihmettelee-suomen-nopeita-laajakaistoja-300-megan-nopeudessa-on-290-megaa-liikaa-6001501

    Reply
  20. Tomi Engdahl says:

    Mike Dano / FierceWireless:
    AT&T testing wireless broadband network tech to serve rural areas at speeds of 15-25 Mbps

    AT&T testing fixed wireless local loop services with speeds of 15-25 Mbps
    http://www.fiercewireless.com/story/att-testing-fixed-wireless-local-loop-services-speeds-15-25-mbps/2015-10-01

    AT&T (NYSE: T) said it is currently testing fixed wireless local loop (WLL) technology in select areas of the country with local residents who want to try the service, including in Alabama, Georgia, Kansas and Virginia, and is seeing speeds of around 15 to 25 Mbps.

    “Our innovative fixed wireless program that delivers broadband through the air using base stations and fixed antennae on customers’ homes or buildings can be a way to deliver high quality, high-speed Internet access service to customers living in rural areas,” the carrier told FierceWireless.

    The carrier said the operation is part of its work with the FCC’s Connect America Fund, where it said it will provide connectivity to over 1 million locations with speeds of at least 10 Mbps down and 1 Mbps up.

    Reply
  21. Tomi Engdahl says:

    In AT&T’s initial fixed WLL proposal, the carrier said its fixed WLL technology would make use of its wireless spectrum and LTE infrastructure through a 20 MHz (10×10 MHz paired uplink and downlink) configuration.

    The carrier added that the technology will even provide customers on the cell edge speeds faster than 10 Mbps more than 90 percent of the time.

    AT&T noted that, unlike with its mobile wireless service, its fixed WLL service will require a technician to install a fixed WLL receiver at each customer’s home.

    source: http://www.fiercewireless.com/story/att-testing-fixed-wireless-local-loop-services-speeds-15-25-mbps/2015-10-01

    Reply
  22. Tomi Engdahl says:

    Mellanox Targets Telcos with EZchip Buy
    http://www.eetimes.com/document.asp?doc_id=1327871&

    Networking chip vendor Mellanox Technologies Ltd. said Wednesday (Sept. 30) it signed a definitive agreement to acquire fellow Israeli chip firm EZchip Semiconductor, a provider of network processing chips, for roughly $811 million in cash.

    Mellanox said the deal would enhance its ability to provide intelligent interconnect and processing chips to data centers and wide area networks. The deal will also increase the size of Mellanox’s total available market to $14 billion by 2017, the company said.

    Eli Fruchter, CEO of EZchip, said the deal is about diversification and synergy amid a whirlwind of consolidation in the semiconductor space. “We believe that in the semiconductor space, larger bigger companies will do better,” Fruchter said. “We see that the synergy with Mellanox makes a lot of sense.”

    The semiconductor industry is currently undergoing an unprecedented degree of consolidation as chip vendors maneuver to increase their scale, grow sales and expand their offerings into other markets. According to market research firm IC Insights Inc., the value of semiconductor industry M&A deals in the first six months of 2015 alone totaled about $72.6 billion, more than six times the annual average for M&A deals struck during the five previous years.

    While Mellanox provides technology for network layers 1, 2 and 3, EZchip offers products for layers 3 through 7, Fruchter said. And while Mellanox currently sells mainly to data centers, EZchip sells products mostly to carrier networks, he added.

    “If you combine the layers and customers and market segments, we will be able to sell products that range from layer 1 to 7 to customers in the carrier and data center space,” Fruchter said. “That makes a lot of sense from the synergy perspective.”

    Mellanox is also seeing a “sort of merger” where cloud and data center technologies are starting to make inroads into telco environments, Deierling said. “We are seeing new technologies where people are embracing cloud architectures, trying to achieve the same efficiencies and agility and ability to manage large scale infrastructure and deliver new services very quickly,” he said. “That’s something that we are very good at in working with our cloud vendors, and we are seeing the evolution of the telco market to embrace those sort of architectures. We think the combination of the two companies puts us in a really unique position to address that evolution of the telco towards more of a cloud data center model.”

    Reply
  23. Tomi Engdahl says:

    Netgear’s router is already too fast

    5.3 gigabits per second. That is the theoretical data transfer rate Netgear promises for its new Nighthawk X8 AC5300 Wi-Fi router. The bandwidth its eight antennas provide is so broad that there is no real need for it. At least not yet.

    A basic current Wi-Fi connection easily reaches a 300 Mbps data rate.

    Netgear’s new product is geared more to the needs of the future. It is the world’s fastest Wi-Fi router for private use, and in fact the first device to support AC5300 technology.

    In practice this means 1000 + 2166 + 2166 Mbps of aggregated bandwidth, about 5.3 Gbps in total.

    Nighthawk X8 has eight antennas – four internal and four external active antennas – which increase network range and speed. The router also supports MU-MIMO (Multi-User, Multiple-Input, Multiple-Output) as well as Quad Stream on all frequency bands, that is, simultaneous wireless transmission to four different devices on each band.

    Source: http://etn.fi/index.php?option=com_content&view=article&id=3393:netgearin-reititin-on-jo-liian-nopea&catid=13&Itemid=101

    Reply
  24. Tomi Engdahl says:

    Read our lips, no more EU roaming charges*
    *Post June 2017, under cap defined by ‘fair use’ policy – EC
    http://www.theregister.co.uk/2015/10/05/detail_eu_roaming_fair_use_policy_european_commission/

    The European Commission will draw up rules to help mobile network operators set limits on the amount of roaming they will allow their customers to engage in before they can apply charges to the activity, the EU’s Council of Ministers has said.

    The Council has formally approved new EU rules that will generally bring an end to roaming charges – the fees applied to the use of mobile data services by consumers when abroad – from 15 June 2017.

    However, under the reforms, mobile network operators will be able to charge consumers that exceed a “fair use” cap on the use of mobile data services abroad.

    the Commission will be responsible for outlining specific rules on the “fair use” cap

    “Roaming providers will be able to apply a ‘fair use policy’ to prevent abusive or anomalous usage of regulated retail roaming services,” the Council said. “Once the fair use policy has been exceeded, a surcharge may be applied. The surcharge cannot be higher than the maximum wholesale charges. The detailed rules on the application of the fair use policy will be defined by the Commission in an implementing act by 15 December 2016.”

    The Council explained that the 15 June 2017 deadline for ending roaming charges, subject to the fair use and costs exceptions

    Rules to cap roaming charges at levels lower than those that currently apply will come into force on 30 April 2016 under the new regime, it said.

    http://data.consilium.europa.eu/doc/document/ST-10788-2015-ADD-1/en/pdf

    Reply
  25. Tomi Engdahl says:

    Facebook, French sat operator in SCRAMBLE FOR AFRICA
    Hello Sub-Saharans, this is Zuck calling
    http://www.theregister.co.uk/2015/10/05/facebook_eutelsat_africa_deal/

    Facebook has inked a deal with Eutelsat Communications to beam internet access to parts of Sub-Saharan Africa.

    Financial terms of the agreement were kept secret.

    Eutelsat said this morning that the companies had signed a “multi-year” pact with Spacecom to “utilise the entire broadband payload on the future AMOS-6 satellite”, with the service expected to go live in the second half of next year.

    The free content ad network and French satellite operator plan to build a system made up of sat capacity, gateways and terminals.

    Facebook said it wanted to help to bring broadband access to “unconnected populations” as part of the firm’s Internet.org initiative.

    Last month, however, Facebook was forced to rebrand the services it was offering under Internet.org, after it became clear that very few websites would be served to parts of the world where broadband connectivity remains virtually non-existent.

    Reply
  26. Tomi Engdahl says:

    Rachel King / ZDNet:
    HP unveils OpenSwitch, a Linux-based open source network operating system, and a developer community

    HP launching open source network OS, OpenSwitch dev community
    http://www.zdnet.com/article/hp-launching-open-source-operating-system-openswitch-dev-community/

    The OpenSwitch community, opening up to the public today, is being backed by a number of other open source proponents as well, including Broadcom, Intel and VMware, among others.

    Hewlett-Packard is expanding its outreach to the open source community with a new initiative and networking operating system to fuel new data center technologies.

    The Linux-based, open source network operating system (NOS) was designed to counter “traditional networking” models, which HP derided in Monday’s announcement as typically closed and proprietary, among other archaic practices.

    Thus, HP’s new source code is meant to enable developers to build data center networks customized for prioritizing business apps, workloads and functions.

    To get the ball rolling, HP Networking’s new OpenSwitch community has been set up as a virtual hub for developers, encouraging discussion and collaboration on creating secure network operating systems grounded in open and common industry standards.

    The initial developer release of the OpenSwitch NOS is also available now, supported by HP’s Altoline open network switches.

    Reply
  27. Tomi Engdahl says:

    Strike up the bandwidth!
    http://www.edn.com/design/systems-design/4440483/Strike-up-the-bandwidth-?_mc=NL_EDN_EDT_EDN_today_20151005&cid=NL_EDN_EDT_EDN_today_20151005&elq=ac81411ab71d4479a793a0ef4d0fb512&elqCampaignId=25055&elqaid=28463&elqat=1&elqTrackId=26b6bd968182431987f3079c8332b535

    While everyone agrees that one of the most pressing needs in the technology space is “How do we get to next-generation high-speed data transfer rates?” there are differing opinions as to how to accomplish this. There are even differing opinions regarding where we currently are in this process. Some companies claim that they are struggling just to get to 28 Gbps products, others say they are comfortable with their 28 Gbps technology solutions, while still others claim they have left 28 Gbps in the dust and are (data) streaming along at 56Gbps. While there may not be an exact concurrence as to where we as a hardware industry stand relative to high-speed data transfer rates, there are some givens.

    The first given is that even if we are successfully achieving information transfer rates of 28 Gbps, as an industry we have to accept that even with the best materials available today we can just barely get to 56 Gbps, which is the next level on the data transfer rate ladder.

    For my own edification, I did insertion loss plots for a typical long-reach backplane with various materials

    PTFE is so cost-prohibitive that it is not a viable solution

    The reality is that we have come a long way from FR-4 laminates to where we are now with far more sophisticated materials such as Isola’s Tachyon 100G laminate. Materials such as Tachyon 100G have gotten us this far to 28 Gbps and will likely get us to 56 Gbps for short and mid-reach systems.

    The second given is that we cannot grow bandwidth without optics. Optical systems have nearly unlimited bandwidth, but the pure and simple matter is it is difficult, if not at times nearly impossible, to put the number of optical connections required on a printed circuit board to replace the aggregate bandwidth of which copper traces are capable. Embedded silicon photonics may be the answer for the future, but with silicon photonics everything matters—the materials, the way engineers design boards and the way in which these boards are fabricated.

    In about 20 years, I think we will have silicon photonics printed circuit boards in volume production, but probably not much sooner than that.

    The third given is that we need a bridge technology that will enable us to get from the PCB solutions of today to the silicon photonics products of the future. That bridge is actually pretty simple to build and uses technology already in place: copper cables. At Samtec, we have our own in-house cable manufacturing plant, and we are able to produce very, very small, thin and flexible twinax and micro coax cables. The bandwidth achievable with these cables is an order of magnitude greater than what is possible with current PCB technology.

    Enterprise-class equipment, such as that available from Cisco or Juniper, is typically required to be continuously upgradeable for 10-15 years and to go through three generations of bandwidth increase. We can address this need with the PCB technology we have right now and may even be able to achieve the next generation after that. Long term, we will hit the wall, so we need to find an interim (bridge) solution.

    Of particular note, when we use copper cable on PCBs in lieu of traces, the design rules are easy. All we need to account for is the skew of the cable (as opposed to the skew resulting from the glass weave in PCBs).
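    The skew argument is easy to quantify. A minimal sketch of the unit-interval math follows; the per-inch weave-skew figure is an assumed, illustrative value, not one from the article.

        # One unit interval (UI) shrinks as the data rate climbs.
        for gbps in (28, 56):
            ui_ps = 1e12 / (gbps * 1e9)        # UI in picoseconds
            print(f"{gbps} Gbps NRZ: UI = {ui_ps:.1f} ps")

        # Glass-weave skew in PCBs is often quoted at a few ps per inch, so a
        # long differential pair can eat much of a 17.9 ps UI at 56 Gbps.
        weave_skew_ps_per_in = 3.0             # assumed, illustrative
        print(f"Skew over 20 in of PCB: {weave_skew_ps_per_in * 20:.0f} ps")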

    With the use of copper cables on boards, we don’t need complex, high-priced materials.
    With the use of the less pricey materials on PCBs with copper cables, we are able to show that we can go faster for less cost.

    Bottom line: we are currently at a crossroads in the industry. The PCB technology currently in use is 30 years old. Prior to PCBs, there was wire-wrap or multi-wire technology. The ability to create PCBs arrived about 40 years ago; it took 20 years for us to fully utilize PCB technology, and another 20 years to reach its limits.

  28. Tomi Engdahl says:

    Scalable Remote Spectrum Monitor
    http://www.eeweb.com/news/scalable-remote-spectrum-monitor

    Anritsu Company announced the release of Remote Spectrum Monitor, a platform of modular and scalable products that helps operators generate a greater return on their multi-billion dollar spectrum investments and maximizes network capacity to meet consumer demand. Designed without a display or keyboard, Remote Spectrum Monitor automates radio surveillance, interference detection, and government spectrum policy enforcement while bringing greater flexibility and cost efficiency to network management.

    “Operators have asked for a scalable and integrated solution to meet the test needs of their wireless network from a performance and cost perspective. Additionally, government regulators are often interested in monitoring the spectrum for illegal or unlicensed broadcasts. Anritsu addresses these requirements with Remote Spectrum Monitor, a bundled solution that provides an efficient platform that is flexible and expandable to evolve along with networks,” said Gerald Ostheimer, Anritsu Corporate Vice President/General Manager. “Remote Spectrum Monitor leverages Anritsu’s wireless field test expertise and our network monitoring proficiency gained through our market-leading MasterClaw Monitoring™ tool to create an innovative solution to help optimize network operation.”

  29. Tomi Engdahl says:

    World’s First 5G Field Trial Delivers Speeds of 3.6Gbps Using Sub-6GHz
    http://mobile.slashdot.org/story/15/10/08/2322259/worlds-first-5g-field-trial-delivers-speeds-of-36gbps-using-sub-6ghz?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+Slashdot%2Fslashdot%2Fto+%28%28Title%29Slashdot+%28rdf%29%29

    Global Chinese ICT firm Huawei and Japanese mobile giant NTT DOCOMO today claim to have conducted the world’s first large-scale field trial of future 5th generation (5G) mobile broadband technology, which was able to deliver a peak speed of 3.6Gbps (Gigabits per second).

    Previous trials have used significantly higher frequency bands (e.g. 20-80GHz), which struggle with coverage and penetration through physical objects. By comparison, Huawei’s network operates in the sub-6GHz frequency band.

    First Large-Scale Field Trial of 5G Mobile Delivers Speed of 3.6Gbps
    Posted Thursday, October 8th, 2015 (1:02 pm) by Mark Jackson
    http://www.ispreview.co.uk/index.php/2015/10/first-large-scale-field-trial-of-5g-mobile-delivers-speed-of-3-6gbps.html

    Chinese firm Huawei and Japanese telecoms giant NTT DOCOMO have conducted the world’s first large-scale field trial of next generation 5G mobile broadband technology using the sub-6GHz band, which has been able to achieve an impressive peak speed of 3.6Gbps (Gigabits per second).

    Admittedly, the International Telecommunication Union‘s (ITU) related IMT-2020 standard has already defined the top speed that 5G should aim to achieve as 20 Gbps, which is still much more than the 3.6 Gbps delivered above, but it’s a bit more complicated than that.

    The field trial itself was conducted at an outdoor test site in Chengdu (China) and made use of several new air interface technologies, such as Multi-User MIMO (concurrent connectivity of 24 user devices in the macro-cell environment), Sparse Code Multiple Access (SCMA) and Filtered OFDM (F-OFDM).

    The news is impressive, although it’s worth noting that we’re not given any information about distance (i.e. how far the signal travelled in order to achieve the above speeds) and that’s a crucial consideration.

    At the same time, sub-6GHz is good, but it’s still a long way from the more familiar 800MHz to 3.6GHz bands that are so often used by current generation 4G (LTE) technologies in the United Kingdom.

    In keeping with that, the UK telecoms regulator, Ofcom, is currently only working to identify spectrum between 6GHz and 100GHz for use by 5G services.

  30. Tomi Engdahl says:

    BT to shoot ‘up to 330Mbps’ G.fast into 2,000 Gosforth homes
    Fast times in Newcastle
    http://www.theregister.co.uk/2015/10/09/gfast_to_be_trialled_in_2000_gosforth_homes/

    BT customers in Gosforth, Newcastle, are being given a chance to test copper’s last hurrah, G.fast.

    The DSL standard G.fast is hoped to help extend broadband access to fibre-foiling locations. It was cooked up in Suffolk by BT, Alcatel-Lucent and its Bell Labs research arm.

    BT claimed G.fast “eliminates the need to rewire entire buildings and homes, the most costly and time-consuming part of any fibre deployment”.

    Up to 2,000 homes in Gosforth will be included in a G.fast trial in the North East of England. The trial is set to provide the hitherto isolated region with data speeds BT reckoned will reach up to 330 Mbps, shooting down copper tubes and directly into humble Northern abodes.

    The announcement followed a “much smaller but successful pilot” that BT and Alcatel-Lucent conducted in Norfolk earlier this year.

  31. Tomi Engdahl says:

    Four is the magic number for Ofcom CEO, raising fears for O2/3UK deal
    By Mary Lennighan, Total Telecom
    Thursday 08 October 2015
    http://www.totaltele.com/view.aspx?ID=491375&G=1&C=3&Page=0

    Sharon White echoes Commissioner Vestager’s comments on need for four mobile network operators in any EU market.

    The head of the U.K.’s telecoms regulator this week revealed that she is concerned about the impact the proposed market consolidation will have on competition, and hinted that the country’s mobile sector would be better served by four network operators.

    “We continue to believe that four operators is a competitive number that has delivered good results for consumers and sustainable returns for companies,” White said, doubtless striking fear into the hearts of those involved in the 3/O2 deal.

    Further, she presented evidence to show that consumers could be suffering as a result of M&A in Europe.

    “According to the Austrian regulator, mobile prices have risen by 28% since the 2013 deal reduced the number of networks to three,” she said.

  32. Tomi Engdahl says:

    BBC bypasses Linux kernel to make streaming videos flow
    The move to shunt TCP into userspace is gathering momentum
    http://www.theregister.co.uk/2015/10/12/linux_networking_api_showing_its_age/

    Back in September, The Register’s networking desk chatted to a company called Teclo about the limitations of TCP performance in the Linux stack.

    That work, described here, included moving TCP/IP processing off to user-space to avoid the complex processing that the kernel has accumulated over the years.

    It’s no surprise, then, to learn of other high-performance efforts addressing the same issue: both the BBC in its video streaming farms; and CloudFlare, which needs to deal with frequent packet flood attacks.

    The Beeb’s work is described by research technologist Stuart Grace here. The broadcaster explains that its high-definition video servers have to push out 340,000 packets per second to deliver 4 Gbps of ultra-high definition streams.

    With just 3 µs per packet of processing time, the post says, using the kernel stack simply wasn’t an option.
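    The quoted figures hang together; a quick sanity check of the arithmetic:

        # 340,000 packets/s feeding 4 Gbps of streams:
        pps = 340_000
        print(f"{1e6 / pps:.2f} us per packet")         # ~2.9 us, the "3 us" budget
        print(f"{4e9 / pps / 8:.0f} bytes per packet")  # ~1470 B, near the Ethernet MTU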

    Using the network sockets API, the post explains, involves a lot of handling of the packet, as “each data packet passes through several layers of software inside the operating system, as the packet’s route on the network is determined and the network headers are generated. Along the way, the data is copied from the application’s buffers to the socket buffer, and then from the socket buffer to the device driver’s buffers.”

    The Beeb boffins started by getting out of the kernel and into userspace, which let them write what they call a “zero-copy kernel bypass interface, where the application and the network hardware device driver share a common set of memory buffers”.

    CloudFlare: We wrote a new syntax, you won’t believe what happened next

    CloudFlare’s approach is similar – a userspace kernel bypass – but with wrinkles specific to its circumstances.

    CloudFlare’s problem is not just the quantity of packets, but the need to distinguish attack packets from user data. Regular readers of The Register will already know that the provider suffers regular attacks.

    As Gilberto Bertin writes: “During packet floods we offload selected network flows (belonging to a flood) to a user space application. This application filters the packets at very high speed. Most of the packets are dropped, as they belong to a flood. The small number of “valid” packets are injected back to the kernel and handled in the same way as usual traffic.”

    To get what it wanted, Bertin says, the company settled on writing modifications to the Netmap Project.
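    The offload logic itself is simple to sketch. The following is not CloudFlare’s code and not the Netmap API, just an illustrative toy of the classify-drop-reinject loop with a made-up flood heuristic (it assumes a plain 20-byte IPv4 header):

        import struct

        def looks_like_flood(frame: bytes) -> bool:
            """Toy heuristic: treat small IPv4 TCP SYNs as flood traffic."""
            if len(frame) < 54:                   # Ethernet + IPv4 + TCP minimum
                return True
            ethertype = struct.unpack("!H", frame[12:14])[0]
            if ethertype != 0x0800:               # only inspect IPv4 here
                return False
            proto = frame[23]                     # IPv4 protocol field
            flags = frame[47]                     # TCP flags (assumes 20-byte IP header)
            return proto == 6 and (flags & 0x02) != 0 and len(frame) <= 60

        def filter_burst(frames, reinject):
            for frame in frames:                  # most flood packets are dropped here
                if not looks_like_flood(frame):
                    reinject(frame)               # "valid" packets go back to the kernel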

  33. Tomi Engdahl says:

    Verizon increases the price of unlimited data plans by $20 a month
    http://www.engadget.com/2015/10/08/verizon-unlimited-data-price-increase/

    If you’re still on a grandfathered unlimited data plan with Verizon, your bill is about to go up. The carrier confirmed to Engadget that on November 15th it’ll increase rates for those customers by $20 a month. The company says that less than one percent of its customers fall into the category of still having the old unlimited plan and aren’t currently under contract. Verizon also says that any user currently under contract with unlimited data will not see the price hike until their agreement is up for renewal. This follows Sprint’s recent announcement about an upcoming rate increase.

  34. Tomi Engdahl says:

    DOCSIS 3.1 Remote PHY Design
    http://www.eeweb.com/news/docsis-3.1-remote-phy-design-enables-distributed-cable-networks

    Altera Corporation is attending the SCTE Cable Expo in New Orleans, October 13-16 (Booth #3135), to demonstrate a new, flexible and upgradeable silicon solution for multi-service operators (MSOs) that enables the next generation of cable architectures at the same low power consumption levels as ASICs. The Altera DOCSIS Remote (MAC) PHY design, which is being demonstrated with partners Analog Devices and Capacicom, enables cable operators to more efficiently and cost-effectively meet the ever-increasing need to segment cable networks, driven by demand for high-speed Internet, unicast 4K video and other multimedia content.

    The solution uses Altera Arria® 10 FPGAs and Analog Devices’ class-leading digital-to-analog converters (DACs) and analog-to-digital converters (ADCs), combined with Capacicom’s MAC and PHY implementation, resulting in state-of-the-art radio frequency (RF) performance.

    Altera’s distributed CCAP architecture (DCA) solutions support both the legacy DOCSIS 3.0 standard and the new industry standard for cable, DOCSIS 3.1.

  35. Tomi Engdahl says:

    Australian ISPs Not Ready For Mandatory Data Retention
    http://yro.slashdot.org/story/15/10/12/2258209/australian-isps-not-ready-for-mandatory-data-retention

    October 13 marks the day Australian ISPs are required by law to track all web site visits and emails of their users, but according to an article on the Australian Broadcasting Corporation’s news site the majority of ISPs are not ready to begin mandatory data retention.

    Majority of ISPs not ready for metadata laws that come into force today
    http://www.abc.net.au/news/2015-10-13/majority-of-isps-not-ready-to-start-collecting-metadata/6847370

    The vast majority of Australian internet service providers (ISPs) are not ready to start collecting and storing metadata as required under the country’s data retention laws which come into effect today.

    ISPs have had the past six months to plan how they will comply with the law, but 84 per cent say they are not ready and will not be collecting metadata on time.

    The Attorney-General’s Department says ISPs have until April 2017 to become fully compliant with the law.

    ISPs ‘not given enough time’

    ISPs must start retaining metadata as of today unless they have been granted an extension, according to the Attorney-General’s Department.

    Extensions are granted after the ISPs submit a Data Retention Implementation Plan (DRIP) to the Government and have it approved.

    An extension gives the ISP a further 18 months to comply with the legislation.

    The survey found that while 81 per cent of ISPs say they have submitted a plan, only about 10 per cent have been approved so far.

    Mr Stanton said ISPs were not given enough time to get ready.

    “I think the survey shows that very clearly,”

    Small ISPs say regulations are putting them out of business

    Craig runs a small ISP in regional Australia and his business will not be ready to collect metadata.

    “We’ve now reached 400 pages of this document [the DRIP]. It’s a very complicated process and it’s eating into our profitability,”

    “It’s such a complicated and fundamentally flawed piece of legislation that there are hundreds of ISPs out there that are still struggling to understand what they’ve got to do.”

  36. Tomi Engdahl says:

    Which data centre network topology’s best? Depends on what you want to break
    Boffins beat up DCell and BCube to see what breaks
    http://www.theregister.co.uk/2015/10/13/which_data_centre_topology_is_best_depends_on_what_you_want_to_break/

    Which data centre topology is better is an arcane, vexed and vital question: after all, as any cloud user knows while they’re thumping the table/monitor/keyboard/whatever, we’re long past a world where outages can be regarded as trivial.

    Researchers from France and Brazil reckon one important question is where you expect failures – in switches, or in data links.

    In this paper at Arxiv, the researchers, led by Rodrigo de Souza Couto of the Universidade Federal do Rio de Janeiro, compared how Fat-Tree, DCell and BCube architectures behave when things go wrong, relative to the traditional three-layer edge/aggregation/core model.

    The paper explains that these topologies were chosen because they have this in common: they’re designed for modern data centres that combine modular infrastructure and low-cost equipment.

    Their conclusion is that if a link fails, BCube recovers better, but if a whole switch goes dark, DCell is better.

    Reliability and Survivability Analysis of Data Center Network Topologies
    http://arxiv.org/abs/1510.02735

  37. Tomi Engdahl says:

    UK Broadband suffers £37.5m loss after big Relish investment
    Cost of building LTE 4G network, lack of take up hits Hong Kong-based telco hard
    http://www.theregister.co.uk/2015/10/13/uk_broadband_suffers_huge_loss_after_relish_investment/

    Hong Kong-based UK Broadband – the parent company of wireless broadband provider Relish – reported a loss of £37.5m for 2014.

    The firm warned that its current business plan, which relies on two key factors, remains uncertain and may need to be ripped up.

    UK Broadband, which is owned by Hong Kong telco PCCW Group, added that if it fails to secure the funds needed to deploy its LTE 4G and Wi-Fi network, or if consumer take-up of its services remains low, the company’s licences and tangible assets may be impaired.

  38. Tomi Engdahl says:

    NEC partners with Bristol to create the world’s first open programmable city
    NEC will provide insights, expertise and ICT technologies to enable UK city to develop a wide range of smarter transport, environmental, health and community services
    http://www.nec.com/en/press/201503/global_20150310_02.html

    NEC is already working with Bristol to virtualise and converge a new high capacity wireless and optical network to support a wider diversity of end-user needs in a highly efficient way. In the smart cities of the future, this is likely to include ultra-low latency connectivity for driverless cars, kilobits per second connectivity for M2M sensors to monitor the health of citizens with long-term chronic conditions, hundred megabits per second for ultra high definition TV broadcasts and terabits per second data transfers for collaborative R&D programmes between global universities.

    New services and applications will be trialled on the Bristol Is Open network platform as virtual tenants on pooled servers, eliminating stranded capacity and over-utilised bottlenecks commonly seen in data communication networks. Bristol will be able to create dynamic service chains to enable traffic to take the best path through the network depending on real-time demand and the specific requirements of each smart city service. By being able to easily up-scale and hibernate centralised server resources, Bristol will also be able to minimise energy usage and costs while maximising system resilience.

    World’s First Programmable City Arises, Built on Xilinx FPGAs
    https://forums.xilinx.com/t5/Xcell-Daily-Blog/World-s-First-Programmable-City-Arises-Built-on-Xilinx-FPGAs/ba-p/642709

    By 2050, the human population will have reached 9 billion people, with 75 percent of the world’s inhabitants living in cities. With already around 80 percent of the United Kingdom’s population living in urban areas, the U.K. needs to ensure that cities are fit for purpose in the digital age. Smart cities can help deliver efficiency, sustainability, a cleaner environment, a higher quality of life and a vibrant economy. To this end, Bristol Is Open (BIO) is a joint venture between the University of Bristol and Bristol City, with collaborators from industry, universities, local communities, and local and national governments. Bristol Is Open (www.bristolisopen.com) is propelling this municipality of a half million people in southwest England to a unique status as the world’s first programmable city.

    Bristol will become an open testing ground for the burgeoning new market of the Industrial Internet of Things—that is, the components of the smart-city infrastructure. The Bristol Is Open project leverages Xilinx All Programmable FPGA devices in many areas of development and deployment.

    PROGRAMMABLE CITY VS. SMART CITY: Smart cities aim to improve and enhance public and private service offerings to citizens in a more efficient and cost-effective way by exploiting network, IT and, increasingly, cloud technologies. To achieve this goal, smart cities rely extensively on data collected from citizens, the environment, vehicles and basically all the “things” present in the city. The more data that becomes available, the more accurately city operations can be analyzed, which in turn will lead to the design and availability of smart-city services.

    For the network infrastructure, citywide data retrieval and processing mean massive amounts of sensor data that needs to be collected, aggregated and transferred to computational facilities (data centers) for storage and possibly processing.

    Programmable networking technologies offer unique capabilities for raising the performance of smart-city operations.

    Software-defined networking (SDN) is one of the main enablers for programmable networks. The SDN foundation is based on decoupling infrastructure control from the data plane, which greatly simplifies network management and application development while also allowing deployment of generic hardware in the network for delivering networking functions.
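    A minimal illustration of that decoupling (generic, not BIO’s actual stack): a controller installs match-to-action rules, and the data plane only performs table lookups.

        flow_table = {}                              # (src, dst) -> action

        def controller_install(match, action):       # control plane sets policy
            flow_table[match] = action

        def switch_forward(pkt):                     # data plane just looks it up
            return flow_table.get((pkt["src"], pkt["dst"]), "punt_to_controller")

        controller_install(("10.0.0.1", "10.0.0.2"), "out_port_3")
        print(switch_forward({"src": "10.0.0.1", "dst": "10.0.0.2"}))  # out_port_3
        print(switch_forward({"src": "10.0.0.9", "dst": "10.0.0.2"}))  # punt_to_controller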

    BIO aims to serve as a living lab—an R&D testbed targeting city-driven digital innovation

    At the infrastructure level, BIO comprises five distinctive SDN-enabled infrastructures:

    Active nodes as optoelectronic-network white boxes using FPGA programmable platforms and heterogeneous optical and Layer 2/3 networking infrastructure
    Heterogeneous wireless infrastructure comprising Wi-Fi, LTE, LTE-A and 60-GHz millimeter-wave technologies
    IoT sensor mesh infrastructure
    Network emulator comprising a server farm and an FPGA-SoC-network processor farm
    Blue Crystal high-performance computing (HPC) facility

    On the metro network, the infrastructure offers access to dynamic optical switching supporting multi-terabit/second data streams, multirate Layer 2 switching (1 to 100 GbE) and Layer 3 routing.

    The entire platform uses SDN control principles.

    We are using Xilinx FPGAs that have evolved into system-on-chip (SoC) devices at multiple points within the BIO infrastructure: in active nodes as optoelectronic white boxes, emulation facilities, wireless LTE-A experimental equipment and IoT platforms. BIO uses programmable and customizable network white boxes that consist of programmable electrical (FPGA) and optical (switching, processing, etc.) parts. FPGAs offer several advantages, including hardware repurposing through function reprogrammability, easier upgradability and shorter design-to-deploy cycles than those of application-specific standard products (ASSPs).

  39. Tomi Engdahl says:

    Google says Loony broadband balloons are ‘nearly perfect’
    Alphabet subsidiary courting African carriers and confronting comms biz realpolitik
    http://www.theregister.co.uk/2015/10/12/nearly_perfect_loon_balloons_courting_african_carriers/

    Google is promising that its Project Loon balloon-broadband initiative is ready to go and wants to announce its first operator partners in Africa soon.

    Last Friday, Google [x] regional business lead Wael Fakharany told the GSMA Mobile 360 conference in South Africa the company has “almost perfected” the Loon technology.

    Fakharany said the Alphabet subsidiary’s subsidiary thinks it is now “time to scale” the technology in Africa.

    Turning the project from an experiment into a live service demands access to spectrum and customers on the ground, and for that, Fakharany said the organisation is working with operators in the countries it hopes to blanket.

    As well as spectrum, a permanent commercial service is going to need a lot of government-level negotiation to secure overflight permissions and test sites, he said.

    As well as Africa, India and Sri Lanka

  40. Tomi Engdahl says:

    Researchers Demonstrate Single-Ended Die-to-Die Transceiver
    http://www.eetimes.com/document.asp?doc_id=1327985&

    Researchers at the University of Toronto’s Integrated Systems Laboratory have created a 20 Gb/s single-ended die-to-die transceiver to address some of the challenges presented by the technology likely to replace double data rate (DDR) memory.

    “It’s clear the end of DDR is in sight, especially for high performance computing,” said Anthony Chan Carusone, a professor of electrical and computer engineering at U of T. The best alternatives on the horizon – memory stacked on top of a processor or stacked memory next to the processor – both present a series of challenges when balancing density, low latency and heat dissipation. If a memory cube is placed next to the processor, the heat dissipation issues are addressed, he said, but there needs to be an interface. “That’s really where the research comes in.”

    Chan Carusone and PhD student Behzad Dehlaghi proposed a link architecture, a transceiver circuit design and a package solution to create this new die-to-die link.

    The work is described in “A 20 Gb/s 0.3 pJ/b Single-Ended Die-to-Die Transceiver in 28 nm-SOI CMOS.” “The interface connects a CMOS memory controller die at the bottom of the DRAM stack and the processor, and achieves high density at a high data rate per pin,” said Chan Carusone.

    Ultimately, the researchers were able to demonstrate that the proposed transceiver can be used on both organic substrates and silicon interposers, consuming 6.1 mW at 20 Gb/s over a 2.5 mm interconnect with 10.7 dB loss at the Nyquist frequency and 4.8 dB loss at DC. The energy efficiency of the proposed transceiver comes mainly from the use of CMOS building blocks, minimizing current in the transmitter-to-receiver signal path, and lowering the signal swings on the channel.
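    The efficiency figure checks out directly from the quoted power and data rate:

        power_w, rate_bps = 6.1e-3, 20e9
        print(f"{power_w / rate_bps * 1e12:.3f} pJ/bit")  # 0.305 pJ/b, the quoted 0.3 pJ/b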

  41. Tomi Engdahl says:

    Wireless controls in building systems
    http://www.csemag.com/single-article/wireless-controls-in-building-systems/0bf4f5045d736e6e7b97dffde41fb005.html?OCVALIDATE=

    There is quite a bit of flux in the wireless building instrumentation and controls protocol market with numerous players jockeying for dominance. Though wireless systems are becoming the norm for product manufacturers in many cases, not all engineers fully understand the best way to specify these systems.

    Wired building automation systems (BAS) have been successfully installed in both new construction and renovation projects for many years, and are the standard way a BAS is installed. Recently, wireless technology has become more prominent in the marketplace. This trend will continue, as a significant portion of the construction market consists of renovation work, and these projects pose difficulties for wired networks. As more flexibility is required in the built environment, wireless technologies can be easily reconfigured to support new building layouts.

    Most wireless installations today are classified as hybrid systems because they are a combination of wired and wireless systems.

    Tier 1 is the primary bus, sometimes referred to as the management level, which is typically a wired solution and generally uses the BACnet/IP protocol. This level is where operators typically interface with the system; devices such as operator workstations, Web servers, and other supervisory devices are networked together here.

    Tier 2 is the secondary bus and is commonly referred to as the automation level. It is also typically a wired solution and generally uses BACnet/IP, BACnet/MSTP, or LonTalk protocols. This level generally connects field controllers, programmable logic controllers, application-specific controllers, and major mechanical, electrical, plumbing, and lighting equipment.

    Tier 3, or field level, is where end-user devices like thermostats and other sensors reside. While wireless networks can be used at all levels, this level is the most common implementation for wireless technologies due to their ease of installation, flexibility, and ease of relocation.
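    Condensing the three tiers described above into a simple lookup table (a paraphrase of the text, nothing more):

        bas_tiers = {
            1: ("management", "typically wired, BACnet/IP; workstations and supervisory devices"),
            2: ("automation", "typically wired; BACnet/IP, BACnet/MSTP or LonTalk; field controllers and PLCs"),
            3: ("field",      "thermostats and other sensors; most common tier for wireless"),
        }
        for tier, (name, desc) in sorted(bas_tiers.items()):
            print(f"Tier {tier} ({name}): {desc}")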

    The wireless technology that is most widely used is WiFi (IEEE 802.11 a/b/g/n).

    ZigBee is a widely accepted wireless protocol in the building automation industry. This level of acceptance typically keeps development and deployment costs lower than for other protocols. ZigBee is based on the IEEE 802.15.4 standard, which specifies a maximum data rate of 250 kb/s across a self-forming mesh network. Since ZigBee uses low data rates, the corresponding power consumption is also low, resulting in long battery life. ZigBee operates in the same frequency bands as WiFi but can co-exist with WiFi networks when properly configured.
    Most battery-operated wireless sensor devices have a battery life of up to 5 yr, though this might vary widely depending on use in the building.
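    Battery life for a duty-cycled sensor is easy to estimate. All values below are assumed, illustrative figures rather than numbers from the article, but they land near the quoted 5 yr mark:

        battery_mah  = 1000      # assumed usable capacity (e.g. derated AA pair)
        sleep_ua     = 2         # assumed sleep current, microamps
        active_ma    = 30        # assumed radio-on current, milliamps
        active_s_day = 60        # assumed seconds of radio time per day

        avg_ma = sleep_ua / 1000 + active_ma * active_s_day / 86400
        years = battery_mah / avg_ma / (24 * 365)
        print(f"average draw {avg_ma:.4f} mA -> ~{years:.1f} years")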

    EnOcean devices use self-powered sensors that transmit data at a rate of 120 kb/s in a mesh network in a low frequency range; EnOcean relies on very low data rates and low operating frequencies.

    Wireless systems are required to have the same level of reliability as wired systems.

    With proper planning and implementation, wireless networks can be as reliable and secure as a wired network. Using wireless mesh networks with “smart” routing techniques, challenges like ensuring data packets successfully reach their destination can be mitigated.

    The building’s construction characteristics must be considered during the design and layout of a wireless network.

  42. Tomi Engdahl says:

    802.11ac WiFi Router Round-Up Tests Broadcom XStream Platform Performance
    http://tech.slashdot.org/story/15/10/14/0036235/80211ac-wifi-router-round-up-tests-broadcom-xstream-platform-performance

    Wireless routers are going through somewhat of a renaissance right now, thanks to the arrival of the 802.11ac standard that is “three times as fast as wireless-N” and the proliferation of Internet-connected devices in our homes and pockets. AC is backward compatible with all previous standards, and whereas 802.11n was only able to pump out 450 Mbps of total bandwidth, 802.11ac is capable of transmitting at up to 1,300 Mbps on a 5GHz channel. AC capability is only available on the 5GHz channel.
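    The “three times as fast” claim is roughly the ratio of the quoted PHY rates:

        print(f"{1300 / 450:.1f}x faster")  # 1,300 Mbps 802.11ac vs 450 Mbps 802.11n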

    802.11ac Wi-Fi Router Round-Up: ASUS, Netgear, D-Link, and TRENDnet
    http://hothardware.com/reviews/80211ac-wi-fi-router-round-up-asus-netgear-d-link-and-trendnet

  43. Tomi Engdahl says:

    Ask Slashdot: Is There Space For Open Hardware In Networking?
    http://ask.slashdot.org/story/15/10/13/191202/ask-slashdot-is-there-space-for-open-hardware-in-networking

    Open hardware has got much attention with the advent of Raspberry Pi, Arduino and their respective clones. But most of these devices are aimed either at tinkerers (Arduino) or, most notably, at multimedia (Raspberry Pi).

    Our company (a non-profit) is trying to change this with Turris Omnia, but we still wonder if there is in fact demand for such devices. Is the market large enough and the area cool enough?

    Turris Omnia
    https://omnia.turris.cz/en/

    More than just a router.
    The open-source center of your home.

    A home router is necessary to connect you to the Internet, but it is idle most of the time, just eating electricity. Why not use it for more tasks?
    With powerful hardware, Turris Omnia can handle gigabit traffic and still be able to do much more. You can use it as a home server, NAS or print server, and it even has a virtual server built in.

  44. Tomi Engdahl says:

    Internet Architecture Board defends users’ rights to mod Wi-Fi kit
    Net boffins to FCC: spread the love, don’t fear spectrum spread
    http://www.theregister.co.uk/2015/10/14/iab_defends_users_rights_to_mod_wifi_kit/

    The Internet Architecture Board (IAB) has gently suggested to the United States’ Federal Communications Commission (FCC) that locking WiFi kit to manufacturers’ firmware forever might not be a good idea.

    The IAB’s submission to the FCC, made last week, is in response to the FCC suggesting a crack-down on open-source firmware like OpenWrt.

    The FCC’s mooted ban-hammer is designed to keep devices like WiFi routers operating in their designated spectrum, with the regulator fearful that inept modders could grab something like emergency spectrum in their eternal search for a channel that isn’t contested by every other access point within reach.

    The IAB, which last year decided to make user privacy and security the focus of its efforts, is particularly concerned that a ban on non-vendor firmware will leave stranded users with orphan devices that no longer get manufacturer support.

  45. Tomi Engdahl says:

    AT&T Readies Technology to Let Multiple Devices Share One Phone Number
    http://recode.net/2015/10/14/att-readies-technology-to-let-multiple-devices-share-one-phone-number/

    AT&T is nearly ready to launch a feature that will allow a customer’s smartphone, tablet and other devices to share a single phone number.

    The feature, now called NumberSync, is a key to making it more attractive for customers to own multiple devices with built-in wireless connections, just as it was important to offer shared data plans.

    As previously reported by Re/code, AT&T has been testing the underlying technology, code-named Cascade, since at least last year. It was developed in part at the company’s Foundry incubator in Palo Alto, Calif.

    “This is really a first in the industry that we are giving customers the ability to do this,” AT&T Mobility CEO Glenn Lurie told Re/code.

  46. Tomi Engdahl says:

    Samsung to Join the Space Internet Race?
    http://www.eetimes.com/author.asp?section_id=36&doc_id=1328014&

    Although Samsung denies interest, could the company be working on space internet, based on this research paper?

    Another space race is developing, this time by groups planning to beam zettabytes of data from mini-satellites to the two-thirds of humanity without Internet access.

    In the past few weeks, companies as diverse as Samsung and Facebook have outlined their vision of such a ‘space internet’.

    Several entrepreneurs with deep pockets have already laid out initiatives for global access, some deploying fairly traditional technologies, others promising more innovative—and possibly more expensive and risky—space and land-based solutions.

    The ever-growing list includes well-backed and high-profile groups, such as SpaceX, fronted by Elon Musk (the business maverick already making his commercial mark with Tesla Motors and a co-founder of PayPal), with backing from Google and numerous financial institutions; and serial entrepreneur Sir Richard Branson, founder of the Virgin group, with his London-based OneWeb consortium that also includes Qualcomm.

    SpaceX has already applied to the Federal Communications Commission to begin testing of its low earth orbit (LEO) based satellite system.

    Samsung’s intervention is an intriguing development, even though the company stresses it has no immediate plans to enter the fray.

    The extremely well-argued research paper published last month and penned by the conglomerate’s president of research in the Americas, Farooq Khan—perhaps only let down by the headline ‘Mobile Internet from the Heavens’—envisages 4,600 low cost, low earth orbiting (LEO) ‘micro satellites’ streaming 1 zettabyte of data each month, enough to provide 200Gbytes of data per month to 5bn people worldwide, the majority of whom have no access at the moment.
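    The paper’s headline numbers are at least self-consistent:

        gb_per_user, users = 200, 5e9
        zb = gb_per_user * users / 1e12     # 1 ZB = 1e12 GB in decimal units
        print(f"{zb:.1f} ZB per month")     # -> 1.0, the paper's monthly total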

  47. Tomi Engdahl says:

    IoT Spectrum on WRC-15 Agenda
    http://www.eetimes.com/author.asp?section_id=36&doc_id=1328000&

    Identifying new 700 MHz spectrum bands for the Internet of Things and machine-to-machine communications is one of the agenda items at the upcoming WRC-15 event.

    Every four years more than 190 U.N. member states gather in Geneva, Switzerland, to agree on matters related to the global use of the radio frequency spectrum.

    At WRC-15, agenda items 1.1, 1.2 and 1.3 are creating a lot of interest, as they deal with identifying new spectrum ranges for terrestrial mobile broadband such as LTE. There is a special focus on the 698-869 MHz spectrum range, which has a global scope for various flavors of LTE.

    Some 48 European countries that are part of Region 1 have identified the 698-791 MHz range for not only commercial mobile broadband services, but also for broadband public safety and disaster relief (PPDR) as well as services for the Internet of Things and machine-to-machine communications.

    It is expected that LTE Release 13 or later will form the basic platform in the 700 MHz range for effective police and emergency operations going forward in Europe and the rest of the world.

    It is envisioned that IoT/M2M could share the 3×3 MHz spectrum pair, with uplink in 733-736 MHz and downlink in 788-791 MHz using the LTE infrastructure rolled out for PPDR.

    Meanwhile, work is underway in the 3GPP standards group to define a band class starting in 698 MHz for PPDR and other professional applications such as M2M. This work is expected to conclude by the end of 2015.

  48. Tomi Engdahl says:

    How do you create an SLA and status page for the whole internet? Meet IANA: Keepers of DNS
    Running the web without the US at the helm – and in Java
    http://www.theregister.co.uk/2015/10/15/iana_sla_for_the_internet/

    When control of the internet’s naming and numbering systems is handed over by the US government to domain system overseer ICANN, there will be one big change: it will be subject to a service level agreement drawn up by the internet community.

    ICANN’s IANA department runs the world’s DNS, IP address allocation, and other tasks, under contract for Uncle Sam. That contract is coming to an end, so if ICANN – a California non-profit – wants to run the behind-the-scenes of the internet on its own, it’s got to have an SLA. There are, after all, 3.2 billion of us relying on it.

    The metrics from that new agreement – stats that show what is happening at the internet’s highest levels – will be made available in a public dashboard for all to see, complete with graphs and traffic-light indicators.

    It will be a window into the internet’s most fundamental functions; a cardiogram of the global network’s beating heart.

  49. Tomi Engdahl says:

    Chattanooga boosts citywide broadband capacity to 10 gigabits
    Ultra-high-speed broadband available to all 170,000 customers for $299 per month
    http://www.timesfreepress.com/news/local/story/2015/oct/15/chattanooga-becomes-first-10-gigabit-city-world/330691/

    Chattanooga’s EPB, which created America’s first “Gig city” five years ago with its citywide gigabit (1,000 Mbps) Internet service, is taking ultra-fast broadband connections to an even higher level with the addition of 10 Gig service throughout Chattanooga.

    The city-owned utility announced today that it is now offering 10 gigabit broadband to any of EPB’s 170,000 customers. The 10 Gig service will be offered to any residential customer for $299 per month, compared with $69.99 for EPB’s current single gig service.
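    Per megabit, the 10 Gig tier is actually the cheaper product:

        for price, mbps in ((69.99, 1_000), (299.00, 10_000)):
            print(f"${price:7.2f} for {mbps:>6} Mbps = ${price / mbps:.4f}/Mbps")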

