Telecom trends for 2015

In a few years there will be close to 4 billion smartphones on earth. Ericsson’s annual mobility report forecasts increasing mobile subscriptions and connections through 2020 (9.5 billion smartphone subscriptions by 2020 and an eight-fold traffic increase). The report also expects that by 2020, 90% of the world’s population over six years old will have a phone. In short, it describes a connected world where everyone will have a connection one way or another.

What about the phone systems in use? Currently the majority of the world operates on GSM and HSPA (3G). Some countries are starting to have good 4G (LTE) coverage, but on average LTE coverage reaches only about 20%. 4G/LTE small cells will grow at twice the rate of 3G and surpass both 2G and 3G in 2016.

Ericsson expects that 85% of mobile subscriptions in the Asia Pacific, the Middle East, and Africa will be 3G or 4G by 2020. Some 75–80% of mobile subscriptions in North America and Western Europe are expected to be LTE by 2020. China is by far the biggest smartphone market by current users in the world, and it is rapidly moving into high-speed 4G technology.

Sales of mobile broadband routers and mobile broadband “USB sticks” are expected to continue to drop. In 2013, 87 million of those devices were sold, and in 2014 sales dropped a further 24 per cent. China’s Huawei is the market leader (45%), so it has the most to lose here.

Small cell backhaul market is expected to grow. ABI Research believes 2015 will now witness meaningful small cell deployments. Millimeter wave technology—thanks to its large bandwidth and NLOS capability—is the fastest growing technology. 4G/LTE small cell solutions will again drive most of the microwave, millimeter wave, and sub 6GHz backhaul growth in metropolitan, urban, and suburban areas. Sub 6GHz technology will capture the largest share of small cell backhaul “last mile” links.

Technology for full-duplex operation at a single radio frequency has been designed. The new practical circuit, known as a circulator, lets a radio send and receive data simultaneously over the same frequency and could supercharge wireless data transfer. The new circuit design avoids magnets and uses only conventional circuit components. A radio wave circulator used in wireless communications can double the bandwidth by enabling full-duplex operation, i.e., devices can send and receive signals in the same frequency band simultaneously. Let’s wait and see if this technology turns out to be practical.

Broadband connections are finally more popular than traditional wired telephone: in the EU, by the end of 2014, fixed broadband subscriptions will outnumber traditional circuit-switched fixed lines for the first time.

After six years in the dark, Europe’s telecoms providers see a light at the end of the tunnel. According to a new report commissioned by industry body ETNO, the sector should return to growth in 2016. The projected growth for 2016, however, is small – just 1 per cent.

With headwinds and tailwinds, how high will the cabling market fly? Cabling for enterprise local area networks (LANs) experienced growth of between 1 and 2 percent in 2013, while cabling for data centers grew 3.5 percent, according to BSRIA, for a total global growth of 2 percent. The structured cabling market is facing a turbulent time. Structured cabling in data centers continues to move toward the use of fiber. The number of smaller data centers that will use copper will decline.

Businesses will increasingly shift from buying IT products to purchasing infrastructure-as-a-service and software-as-a-service. Both trends will increase the need for processing and storage capacity in data centers, and we will also need fast connections to those data centers. This will cause significant growth in WiFi traffic, which will mean more structured cabling used to wire access points. Convergence will also result in more cabling needed for Internet Protocol (IP) cameras, building management systems, access controls and other applications. This could mean a decrease in the installation of special separate cabling for those applications.

The future of your data center network is a moving target, but one thing is certain: it will be faster. The four key developments in this field are: 40GBase-T, Category 8, 32G and 128G Fibre Channel, and 400GbE.

Ethernet will increasingly move away from the 10/100/1000 speed series as proposals for new speeds push in. The move beyond gigabit Ethernet is gathering pace, with a cluster of vendors gathering around the IEEE standards effort to help bring 2.5 Gbps and 5 Gbps speeds to the ubiquitous Cat 5e cable. With the IEEE standardisation process under way, the MGBase-T alliance represents the industry’s effort to accelerate the adoption of 2.5 Gbps and 5 Gbps speeds for connections to fast WLAN access points. Intense attention is being paid to the development of 25 Gigabit Ethernet (25GbE) and next-generation Ethernet access networks. Development of 40GBase-T is also under way.

Cat 5e vs. Cat 6 vs. Cat 6A – which should you choose? Stop installing Cat 5e cable. “I recommend that you install Cat 6 at a minimum today.” The cable will last much longer and support higher speeds that Cat 5e simply cannot. Category 8 cabling is coming to data centers to support 40GBase-T.
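For orientation, the categories mentioned can be put in a small lookup table using their commonly quoted nominal limits (Cat 8 figures are from the draft specifications under discussion, so treat them as provisional):

```python
# Commonly quoted nominal limits for twisted-pair cabling categories.
# Cat 8 values are from draft specifications and may change.
CABLE_CATEGORIES = [
    # (name, bandwidth_mhz, max_speed_gbps, reach_m at that speed)
    ("Cat 5e", 100, 1, 100),
    ("Cat 6", 250, 10, 55),    # 10GBase-T only over short runs
    ("Cat 6A", 500, 10, 100),
    ("Cat 8", 2000, 40, 30),   # data-center reach only
]

def categories_for(speed_gbps, run_m):
    """Return the categories that can carry speed_gbps over run_m metres."""
    return [name for name, _, speed, reach in CABLE_CATEGORIES
            if speed >= speed_gbps and reach >= run_m]

print(categories_for(10, 90))   # a 90 m horizontal office run at 10 Gbps
print(categories_for(40, 25))   # a short data-center run at 40 Gbps
```

The 90 m / 10 Gbps query returns only Cat 6A, which is why Cat 6 alone is not future-proof for full-length horizontal runs.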

A Power over Ethernet plugfest is planned for 2015 to test Power over Ethernet products. The plugfest will focus on the IEEE 802.3af and 802.3at standards relevant to IP cameras, wireless access points, automation, and other applications. It will test participants’ devices against the respective IEEE 802.3 PoE specifications, which distinguishes IEEE 802.3-based devices from non-standards-based PoE solutions.
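The practical difference between the two standards under test is the power budget: 802.3af supplies up to 15.4 W at the power sourcing equipment with 12.95 W assured at the powered device, while 802.3at (PoE+) raises that to 30 W and 25.5 W respectively. A minimal sketch of a budget check (the example device loads are illustrative):

```python
# Nominal IEEE 802.3af/at power limits: watts at the power sourcing
# equipment (PSE) and watts assured at the powered device (PD) after
# worst-case cable loss.
POE_STANDARDS = {
    "802.3af": {"pse_watts": 15.4, "pd_watts": 12.95},
    "802.3at": {"pse_watts": 30.0, "pd_watts": 25.5},
}

def standard_for_load(pd_load_watts):
    """Return the least-capable standard that can power the device, or None."""
    for name, limits in POE_STANDARDS.items():
        if pd_load_watts <= limits["pd_watts"]:
            return name
    return None

print(standard_for_load(10.0))   # a typical IP camera fits in 802.3af
print(standard_for_load(20.0))   # a high-power access point needs 802.3at
```

Devices drawing more than 25.5 W fall outside both standards, which is exactly the gap the non-standards-based PoE solutions mentioned above try to fill.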

Gartner expects that wired Ethernet will start to lose its position in the office in 2015, or within a few years after that, because of the transition to using the Internet mainly on smartphones and tablets. The change is significant, because it will break Ethernet’s long reign in the office. Consumer devices have already moved to wireless, and now it is the office’s turn. Many factors speak on behalf of the mobile office. Research predicts that by 2018, 40 per cent of enterprises and organizations of various sizes will specify WLAN as the default for their devices. Current workstations, desktop phones, projectors and the like will therefore be transferred to wireless. Expect the wireless LAN equipment market to accelerate in 2015 as spending by service providers and education comes back, 802.11ac reaches critical mass, and Wave 2 products enter the market.

Scalable and secure device management for telecom, network, SDN/NFV and IoT devices will become a standard feature. Whether you are building a high-end router or deploying an IoT sensor network, a device management framework including support for new standards such as NETCONF/YANG and web technologies such as Representational State Transfer (REST) is fast becoming a standard requirement. Next-generation device management frameworks can provide substantial advantages over legacy SNMP and proprietary frameworks.
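As a concrete taste of what NETCONF traffic looks like, here is a minimal sketch that builds an edit-config RPC in the RFC 6241 base namespace using only the Python standard library. The interface payload is hypothetical; a real request would be shaped by the device’s YANG models and sent over an SSH session with a NETCONF client library:

```python
# Build a NETCONF <edit-config> RPC (RFC 6241) targeting the running
# datastore. Only constructs the XML; transport (SSH, port 830) is out
# of scope here.
import xml.etree.ElementTree as ET

NC = "urn:ietf:params:xml:ns:netconf:base:1.0"  # NETCONF base namespace

def build_edit_config(message_id, config_xml):
    rpc = ET.Element(f"{{{NC}}}rpc", {"message-id": str(message_id)})
    edit = ET.SubElement(rpc, f"{{{NC}}}edit-config")
    target = ET.SubElement(edit, f"{{{NC}}}target")
    ET.SubElement(target, f"{{{NC}}}running")
    config = ET.SubElement(edit, f"{{{NC}}}config")
    config.append(ET.fromstring(config_xml))
    return ET.tostring(rpc, encoding="unicode")

# Hypothetical YANG-modelled payload: set an interface description.
payload = "<interface><name>eth0</name><description>uplink</description></interface>"
print(build_edit_config(101, payload))
```

The appeal over legacy SNMP is visible even at this size: the operation, the target datastore, and the configuration payload are all explicit, structured, and validatable against the device’s data models.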

 

U.S. regulators resumed consideration of mergers proposed by Comcast Corp. and AT&T Inc., suggesting a decision as early as March: Comcast’s $45.2 billion proposed purchase of Time Warner Cable Inc and AT&T’s proposed $48.5 billion acquisition of DirecTV.

There will be changes in the management of global DNS. The U.S. is in the midst of handing over its oversight of ICANN to an international consortium in 2015. The National Telecommunications and Information Administration, which oversees ICANN, has assured people that the handover would not disrupt the Internet as the public has come to know it. Discussion is going on about what can replace the US government’s current role as IANA contract holder. IANA is the technical body that runs things like the global domain-name system and allocates blocks of IP addresses. Whoever controls it controls the behind-the-scenes of the internet; today, that’s ICANN, under contract with the US government, but that agreement runs out in September 2015.

 

1,044 Comments

  1. Tomi Engdahl says:

    New Comcast innovation: A $30 charge to eliminate your data cap
    Florida data cap trial limits users to 300GB unless they pay extra each month.
    http://arstechnica.com/business/2015/09/comcast-now-charging-30-extra-per-month-for-unlimited-data-in-florida/

    Comcast has unveiled a new $30 charge that will let customers in Florida escape the company’s 300GB monthly usage limit.

    The nation’s largest cable company has been trialling data caps in nine states, with slightly different policies in each one. Generally, customers who exceed a monthly limit pay an extra $10 for each additional 50GB, though customers are allowed to exceed the caps for three months before getting penalized.

    But customers in Fort Lauderdale, the Keys, and Miami, Florida, can now purchase unlimited data for an extra $30 per month. Paying this additional $30 eliminates the 300GB monthly cap, but customers have to pay the extra amount each month even if they use less than 300GB.

    “The Unlimited Data Option costs the current additional fee of $30 per calendar month, regardless of actual data usage,” Comcast said in an FAQ updated today.

    Customers who use more than 450GB per month may come out ahead by purchasing the unlimited data option.
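That 450GB figure follows directly from the pricing: $10 per extra 50GB block against a flat $30. A quick back-of-the-envelope check, assuming overage is billed in whole 50GB blocks as described above:

```python
# Compare Comcast's per-block overage pricing against the flat
# unlimited option described in the article.
import math

CAP_GB, BLOCK_GB, BLOCK_PRICE, UNLIMITED_PRICE = 300, 50, 10, 30

def overage_cost(usage_gb):
    """Overage charge: $10 per started 50GB block above the 300GB cap."""
    if usage_gb <= CAP_GB:
        return 0
    blocks = math.ceil((usage_gb - CAP_GB) / BLOCK_GB)
    return blocks * BLOCK_PRICE

for usage in (300, 450, 451, 600):
    cheaper = "unlimited" if overage_cost(usage) > UNLIMITED_PRICE else "pay-per-block"
    print(f"{usage}GB: overage ${overage_cost(usage)} -> {cheaper} is cheaper")
```

At exactly 450GB the two options cost the same ($30); one gigabyte more starts a fourth block and the unlimited option wins, matching the "more than 450GB" claim.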

    The unlimited data option hasn’t been made available to the other eight states where Comcast is imposing usage limits. Those are Alabama, Arizona, Georgia, Kentucky, Maine, Mississippi, Tennessee, and South Carolina.

    Within the trial areas, customers who buy the pricey 505Mbps or 2Gbps plans don’t face data limits. Customers who live outside the data cap trial states don’t face any limits or overage charges regardless of what plan they buy, but Comcast may impose limits throughout its territory within a few years.

    Reply
  2. Tomi Engdahl says:

    America’s crackdown on open-source Wi-Fi router firmware – THE TRUTH
    El Reg looks at why and what the FCC wants to do. Plus: How you can get stuck in
    http://www.theregister.co.uk/2015/09/05/fcc_software_updates/

    America’s broadband watchdog is suffering a backlash over plans to control software updates to Wi-Fi routers, smartphones, and even laptops.

    In a proposed update [PDF] to the regulator’s rules over radiofrequency equipment, the Federal Communications Commission (FCC) would oblige manufacturers to “specify which parties will be authorized to make software changes.”

    In addition, it proposes that “modifications by third parties should not be permitted unless the third party receives its own certification.”

    While the intent is to make the FCC’s certification of the next generation of wireless equipment faster and more flexible, open source advocates were quick to notice that the rules would effectively force manufacturers to lock down their equipment and so remove the ability to modify software without formal approval from the US government. Such an approach goes directly against the open source ethos.

    As a result, many are unhappy about the plans.

    Earlier this week, however, the FCC approved a one-month extension to the deadline and an additional 15-day reply period after consumer groups and equipment manufacturers made it clear that they needed more time to look at what was being proposed.

    The current rules were put in place 15 years ago, long before the explosion of smart phones and laptops and widespread use of Wi-Fi.

    Every product approved gets its own FCC ID, which the manufacturer is then obliged to stick on the product itself (something that the FCC acknowledges is getting harder as the devices get smaller).

    In recent years however, this system has become impossible to manage effectively. Smaller and cheaper chipsets have led to huge numbers of new devices and a shift to the wireless world. Today’s phones, for example, can operate at several different radio frequency bands and include 3G, 4G, Wi-Fi, Bluetooth, GPS, and NFC (near-field communications).

    In order to handle the jump in requests, the FCC changed its rules to allow for some self-certifying by companies, and some third-party certification. But it now feels this approach is also outdated, thanks to the fact that the latest devices often allow changes to wireless frequencies through software updates, as opposed to hardware/firmware.

    The regulator notes that the shift to software updates has proven extremely useful, since “it allows manufacturers to obtain approval of products with an initially limited set of capabilities and then enable new frequency bands, functions, and transmission formats to be added to already-approved equipment.” However, it is concerned about things getting out of control, especially if it opens up its certification processes to allow more devices on the market.

    And the solution?

    And so, to the FCC’s mind, the answer is simple: put control requirements back onto the manufacturers themselves.

    In order to make sure that a new product doesn’t appear on the market that enables people to instantly use, for example, emergency police channels to communicate, require the manufacturers to only allow updates from authorized companies, i.e., those with something to lose from breaking the rules.

    At the same time, it also proposes that this update process be locked down so others can’t easily access and make their own changes to new devices. If companies do this, then the FCC argues it can open up its rules and “make it easier for manufacturers to implement software changes.”

    The logic is seductive but it leads to the situation where all devices with radio transmission capability – i.e., your phone, computer, home router and many others – need to be locked down. That won’t bother the vast majority of consumers, who simply buy a product and let the company do what it will with it (even when that is incredibly frustrating – we’re looking at you, Apple).

    In a sign that the plan may be going down the rabbit hole, it then proposes a “personal use” exemption where the rules would not apply to people entering the country with devices not approved within the United States.

    Under the current rules, there is a personal use exemption of three devices. But if the new rules came into effect, US border police may effectively be obliged to stop and search everyone entering the country to make sure they didn’t have more than three non-approved devices on them. That is clearly an unworkable situation.

    Reply
  3. Tomi Engdahl says:

    A new class of LTE modules for machine connections

    The LTE standard also defines a terminal Category 1, which provides data connections of up to 10 Mbps from the network to the terminal. Swiss u-blox has introduced the first Category 1 modules for LTE. They are needed, for example, for IoT links and for M2M connections between cars and machines.

    The new modules have the clear advantage of a more affordable price compared to Category 4 modules. The new u-blox part is intended for the North American market; European modules are still in development.

    The upcoming Toby-R201 supports LTE bands 2, 4, 13 and 17, and HSPA bands 2 and 5. The module measures 24.8 x 35.6 millimeters. Samples will be available during October.

    Source: http://etn.fi/index.php?option=com_content&view=article&id=3286:uuden-luokan-lte-moduuleja-koneyhteyksiin&catid=13&Itemid=101

    Reply
  4. Tomi Engdahl says:

    TCP is a wire-centric protocol being forced to cut the cord, painfully
    Interview: Juho Snellman of telco software shop Teclo talks TCP optimisation
    http://www.theregister.co.uk/2015/09/08/tcp_more_than_30_years_old_and_it_still_holds_traps_for_the_unwary/

    The venerable Transmission Control Protocol (TCP) is one of the foundation protocols of the Internet, but it’s not so hot at mobile environments, says Juho Snellman of Swiss telco software concern Teclo.

    In an interview with The Register, Snellman said the problem with TCP is simple: it was designed for fixed line environments, which are already in the minority in terms of client connections.

    Since you can’t dispense with the protocol, other fixes are needed – and that’s where Snellman’s interest arises.

    “Mobile networks are extremely unpredictable and volatile,” he explained. To optimise TCP for the mobile environment, both the server side and the air interface have to be considered: “designing this, we had to shield the server from the radio network, and do smarter things with the radio network as well,” he said.

    However, you can’t just throw a “middlebox” onto the network to cache the traffic and hope it works.

    “TCP is the only optimisation tool available for mobile”, he explained, since universal encryption makes caching and compression obsolete.

    “All you have left is Layer 3-4 optimisation,” he said.

    The biggest issue in trying to ship something into carrier networks is, naturally, performance. Teclo reckons it’s got that sorted, with software that can handle 20 Gbps of optimisation on 10 million connections on a 2U node running on standard hardware.

    “If you’re using the operating system TCP stack and you want a million or 10 million connections, it’s unpleasant”, he said.

    As Snellman’s presentation notes explain, the resulting architecture looks a bit like this:

    Teclo’s userspace NIC drivers for packet I/O, which map the PCI registers and physical memory for frame storage;
    The drivers also manipulate the NIC’s transmit and receive descriptor rings.

    This only needs around 1,000 lines of code, including zero-copy handling “even for packets that we buffer for arbitrary amounts of time”.

    Keeping things transparent turned up another oddity, he said: the software “needs to be transparent to the TCP TTL fields, because some network nodes use these to detect tethering.”

    “Any time you’re not transparent to some property, it will bite you.”

    One particularly bad result he cited was 30 segments in a mere 50 ms, and as the presentation notes, “reordering is poison for TCP”.

    “That’s very hard to distinguish from packet loss,” he explained. “If a packet gets ahead of 30 other packets in the queue, the host thinks the connection has lost everything and needs it to be resent”.

    The answer is to develop heuristics to detect re-ordering, he said.
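One way such a heuristic can work, sketched below, is to tolerate a bounded amount of reordering before declaring loss: a missing segment is only treated as lost once more than a window’s worth of later segments have arrived without it. This is an illustrative toy, not Teclo’s actual algorithm; the window size and data structures are assumptions:

```python
# Toy reordering-vs-loss heuristic: rather than declaring a segment lost
# after the classic three duplicate ACKs, wait until more than
# REORDER_WINDOW higher-numbered segments have arrived while it is still
# missing.
REORDER_WINDOW = 5

def classify_gaps(arrived_seqs):
    """Given segment numbers in arrival order, return those deemed lost."""
    seen, lost = set(), set()
    expected = 0
    pending = {}  # missing seq -> count of higher segments seen since
    for seq in arrived_seqs:
        seen.add(seq)
        pending.pop(seq, None)  # a late arrival clears its gap
        for missing in list(pending):
            if seq > missing:
                pending[missing] += 1
                if pending[missing] > REORDER_WINDOW:
                    lost.add(missing)     # gap persisted too long: loss
                    del pending[missing]
        while expected in seen:
            expected += 1
        for missing in range(expected, seq):
            if missing not in seen and missing not in pending and missing not in lost:
                pending[missing] = 0      # newly observed gap
    return lost

# Segment 2 arrives a few places late: mere reordering, not loss.
print(classify_gaps([0, 1, 3, 4, 5, 2, 6]))
# Segment 2 never arrives while many later segments do: loss.
print(classify_gaps([0, 1, 3, 4, 5, 6, 7, 8, 9, 10]))
```

A naive three-duplicate-ACK rule would have retransmitted segment 2 in the first trace too; widening the tolerance is exactly what stops reordering from being "poison for TCP", at the cost of reacting slower to genuine loss.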

    Reply
  5. Tomi Engdahl says:

    We asked a maker of PCIe storage switches to prove the tech is more interesting than soggy cardboard
    Why not just use 10GE?
    http://www.theregister.co.uk/2015/09/07/pmc_switchtec_interview/

    El Reg What is the background to the PCIe switching products?

    Ray Jang The PCIe standard was not originally intended for the relatively harsh requirements of enterprise storage, server, and data center equipment. For example, in the data center environment, unexpected surprise plugging and unplugging of cards, drives, and other peripherals without ever crashing the CPU and/or system is a key operational requirement. Standard PCIe switches aren’t able to handle every-day occurrences like these, and that’s been a barrier to PCIe adoption in enterprise systems.

    Significant technology innovations were needed to match the level of robustness and cost-effective scalability that traditional interconnect technologies like SAS provide. PMC combined the knowledge gained from our SAS connectivity products, advanced SERDES capabilities, and PCIe switching IP acquired from IDT to bring a family of PCIe storage switches to market. They enable PCIe-SSD-based systems to scale, with the resiliency, programmability, and advanced diagnostics needed for mass deployment.

    Reply
  6. Tomi Engdahl says:

    Philippines to Roll Out Nationwide Free Wi-Fi Service by 2016
    http://www.bloomberg.com/news/articles/2015-09-07/philippines-to-roll-out-nationwide-free-wi-fi-service-by-2016

    The Philippines is planning to bring free Wi-Fi services to half of its towns and cities this year and nationwide coverage by end-2016, limiting the data revenue prospects for Philippine Long Distance Telephone Co. and Globe Telecom Inc.

    The free Internet service will cost the government about 1.5 billion pesos ($32 million) a year and will be available in areas such as public schools, hospitals, airports and parks, said Monchito Ibrahim, deputy executive director of the Information and Communications Technology Office.

    “If subscribers move to using free public Wi-Fi, telecoms may need to lure them into getting higher-end services,” Ibrahim said in a Sept. 4 interview in Makati City, referring to the country’s two main phone companies. The government’s “focus is on areas that absolutely don’t have access.”

    The new service is expected to push data charges lower in the Philippines. Access to the Internet costs about $18 a megabit per second in the country, more than three times the global average of $5, according to research firm International Data Corp. or IDC.

    “The free Wi-Fi service would compel improvement of service of both telecoms,”

    The government’s free Wi-Fi service has its limitations. Speed is capped at 256 kilobits per second, enough for basic Internet searches or access to Facebook, Ibrahim said.

    By contrast, Singapore started a free wireless service in 2006 that now offers speeds of as much as 2 megabits per second — eight times faster than the one planned in the Philippines. That’s enough for phone calls on the data network or video streaming, with the access offered at public places such as the airport, malls, hospitals and schools.

    Reply
  7. Tomi Engdahl says:

    Cell-network content crunch needs new cache designs, say boffins
    Sysadmins, get ready to manage servers on ANTENNA POLES
    http://www.theregister.co.uk/2015/09/09/cellnetwork_content_crunch_needs_new_cache_designs_say_boffins/

    It’s increasingly clear to the telecommunications industry that content distribution is going to need to push further out into networks, to try and relieve congestion on the moderately-constrained backhaul that connects cell towers.

    Content caching at base stations can help, but as Xi Ping, an IEEE Fellow from the Hong Kong University of Science and Technology, points out in a new paper, the caching strategies that have served us well in fixed networks (cache the most popular content in the ISP network, for example) are challenged by cellular network backhaul and unreliable radio links.

    It’s also worth pointing out that optimising the base station cache strategy is important: with tens of thousands of base stations in service, the wrong deployment would be a very expensive mistake.

    The basis of Ping’s proposal is that in the mobile network, cache design needs to take in user delay (averaged across many users), the caching constraints at each base station, the propagation delay on base station backhaul links, and the interplay of content popularity and the backhaul network.

    If you ignore the complexity of the mathematics involved, one either-or decision behind the strategy is pretty simple:

    If backhaul delay is small, the cache should prioritise the most popular content;
    Where there’s a large backhaul delay, the design concentrates on content diversity, to minimise the content traversing the upstream link.
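That either-or rule is simple enough to sketch. In the toy placement below (the delay threshold, cache sizes, and popularity figures are all illustrative, not from the paper), a small backhaul delay makes every base station cache the same most popular files, while a large delay spreads the ranked files across stations for diversity:

```python
# Toy version of the either-or placement rule: small backhaul delay ->
# every base station caches the globally most popular files; large
# delay -> round-robin the ranked files across stations ("diversity")
# so fewer requests must traverse the backhaul.
def place_caches(popularity, n_stations, slots_per_station,
                 backhaul_delay_ms, delay_threshold_ms=20):
    """popularity: dict file -> request probability.
    Returns one cache set per base station."""
    ranked = sorted(popularity, key=popularity.get, reverse=True)
    if backhaul_delay_ms <= delay_threshold_ms:
        # Small delay: everyone caches the same top files.
        top = set(ranked[:slots_per_station])
        return [set(top) for _ in range(n_stations)]
    # Large delay: distribute ranked files so stations cache different ones.
    caches = [set() for _ in range(n_stations)]
    for i, f in enumerate(ranked[:n_stations * slots_per_station]):
        caches[i % n_stations].add(f)
    return caches

pop = {"a": 0.4, "b": 0.3, "c": 0.2, "d": 0.1}
print(place_caches(pop, 2, 2, 5))    # small delay: both cache the top two
print(place_caches(pop, 2, 2, 50))   # large delay: stations split the set
```

The real optimisation in the paper is of course continuous rather than a binary threshold, but the sketch shows why the two regimes pull the placement in opposite directions.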

    The optimal caching strategy demands “knowledge of channel statistics and file popularity”, so that their proposed central controller can make sure local caches are filled with the right stuff.

    There’s another problem which, they note, is specific to mobile networks: the relative unreliability of the wireless link between base station and user.

    Backhaul-Aware Caching Placement for Wireless Networks
    http://arxiv.org/abs/1509.00558

    As the capacity demand of mobile applications keeps increasing, the backhaul network is becoming a bottleneck to support high quality of experience (QoE) in next-generation wireless networks. Content caching at base stations (BSs) is a promising approach to alleviate the backhaul burden and reduce user-perceived latency. In this paper, we consider a wireless caching network where all the BSs are connected to a central controller via backhaul links. In such a network, users can obtain the required data from candidate BSs if the data are pre-cached. Otherwise, the user data need to be first retrieved from the central controller to local BSs, which introduces extra delay over the backhaul. In order to reduce the download delay, the caching placement strategy needs to be optimized.

    Reply
  8. Tomi Engdahl says:

    ICANN has $60m burning a hole in its pocket – and it needs your help blowing it all
    Brewster’s Millions meets the world of internet plumbing
    http://www.theregister.co.uk/2015/09/09/how_should_icann_spend_60m/

    Domain-name overseer ICANN wants your suggestions for how it should spend the $60m it made from auctioning off new dot-words.

    In a discussion paper [PDF] published today, the wannabe-master-of-the-internet notes that it has $58.8m in a special bank account. Just under half of it stemmed from Google, which paid $25m for the right to sell domains ending in .app.

    http://newgtlds.icann.org/en/applicants/auctions/proceeds/discussion-paper-07sep15-en.pdf

    Reply
  9. Tomi Engdahl says:

    Net neutrality: How to spot an arts graduate in a tech debate
    Sorry lawyers, but the Packet Pixie doesn’t really exist
    http://www.theregister.co.uk/2015/08/25/so_how_do_you_spot_an_arts_graduate_in_tech/

    Arts and humanities graduates are schooled for years in metaphor and analogy – and these are very useful skills for understanding the world. But what happens when an approach based on metaphor and analogy meets hard science and engineering reality? And what happens when the chosen metaphor doesn’t fit?

    While you can choose your own identity, you can’t ultimately change the reality of how network packets are delivered – and customers are being sold short. Customers should be demanding higher quality of service from networks, rather than clamouring for “neutral” networks, which in reality don’t, can’t and will never exist.

    It’s easy enough to spot when Stephen Fry offers a ridiculous explanation of a technical subject on QI (as he did here and here and here – just a few examples among many). Unfortunately however, sometimes a foolish soundbite metaphor grows legs and turns into a movement. The “net neutrality” cause is a vivid example. It’s the creation of lawyers, policy wonks, professional activists and journalists, most of whom received impeccable humanities educations, and who probably mean well. But they’re all using metaphorical logic, when boolean logic is what’s needed.

    Reply
  10. Tomi Engdahl says:

    Broadband powered by home gateways? Whose bright idea was THIS?
    Broadband Forum eating the fruit of the idiot tree
    http://www.theregister.co.uk/2015/09/08/fttn_fibre_powered_by_home_gateways_whose_bright_idea_was_this/

    Fibre-to-the-node can help squeeze the last drop of sweat out of copper telephony networks, but it has a problem: nodes need electrons, and there might not be a copper path upstream to the exchange for 48V power. So the standards body The Broadband Forum thinks powering nodes using household electricity is a good idea.

    The idea is put forward in a work of staggering genius called TR-301, promoted with the breathless blather of a Broadband Forum press release here (The Register will concede that calling its idea “radical” is accurate).

    Clearly, the forum’s carrier/operator and vendor members fear that the need to power FTTN/fibre-to-the-distribution point (FTTDP) nodes might put them at a disadvantage compared to all-passive fibre rollouts.

    With the exception of RIM-style architectures, today’s copper network has no need for active equipment between the exchange (or central office, in US terminology) and the customer. The only electricity came from the carrier end – the 48V DC feed that powered old-style phones – and network topology was designed without worrying about the need for power.

    Active nodes, however, need electricity, and the closer to the customer the nodes are installed, the greater their number – and the more a carrier is going to have to negotiate with electricity utilities to get a grid connection.

    Hence the notion of getting power fed from a customer side, since (someone has clearly reasoned) if the nodes only need a few hundred watts, who’s going to care?

    A blackout in one premises could knock out services for everyone connected to the node, and the carrier gets a free ride from whoever is powering the node.

    Reply
  11. Tomi Engdahl says:

    Verizon: we’re going to start bringing you 5G NEXT YEAR (sort of)
    Telco mum on public launch, but says field trials will start in 2016
    http://www.theregister.co.uk/2015/09/08/verizon_to_test_5g_network_next_year/

    Verizon is planning to test its 5G wireless broadband network next year.

    The US telecom giant said it would be working with a group including Samsung, Ericsson, Nokia, Cisco, Qualcomm, and Alcatel-Lucent to launch a 2016 trial for technology that could power the network.

    Verizon said it plans to have a pair of “sandbox” test networks housed in its offices in San Francisco and Massachusetts. Other companies can then test while developing hardware and software for use with 5G networks.

    Little was given in the way of details on the network, and Verizon said nothing about whether the early trials could also mean its 5G network goes live for customers before the 2020 target date set by the ITU.

    A Verizon spokesperson told The Register that the expected download speeds for the 5G network will be 250–600 Mbps, or 50 times that of the current 5–12 Mbps 4G network.

    The company was also mum on spectrum use, though Verizon partner Ericsson has previously demonstrated a 5G network operating in the 15GHz spectrum range.

    Reply
  12. Tomi Engdahl says:

    North America significantly ahead in 4G networks

    At the end of the second quarter there were nearly 755 million LTE mobile phone users in the world. According to research firm Ovum, North America is clearly at the forefront of the 4G revolution: almost every second mobile phone user there, or 47.5 per cent, has an LTE phone.

    The rest of the world is quite a bit behind the North American market. In Europe LTE phones are found with 19 per cent of users, and in Asia the figure is just over 16 per cent. Worldwide, 4G phones are now used by ten per cent of the population.

    North America has 68 commercial LTE networks, of which seven have moved to LTE-Advanced. The US and Canada have 198 million LTE users, of which one-third were added in the past year.

    In total there are already 425 LTE networks in 145 different countries. The evolutionary LTE-Advanced version has been introduced in 88 operator networks in 45 countries.

    Source: http://etn.fi/index.php?option=com_content&view=article&id=3298:pohjois-amerikka-selvasti-edella-4g-verkoissa&catid=13&Itemid=101

    Reply
  13. Tomi Engdahl says:

    Nokia has introduced new services to help operators adopt small cells more easily and affordably. At the same time, the company says its Flexi Zone small base station has achieved data rates of over one gigabit per second in an LTE-Advanced network.

    The base station’s rate improvement is based on a modular radio platform. As a result, a micro or pico cell can take advantage of three LTE frequency bands, the so-called license-free LTE-U frequencies, and even Wi-Fi links.

    A big obstacle to the introduction of small cells is the cost of their implementation. According to Nokia, up to 90 per cent of small cell costs result from deployment and operation.

    Nokia points out that renting a small cell mast site can cost up to a thousand dollars a month, and buying a site can cost the operator up to $30,000.

    Source: http://etn.fi/index.php?option=com_content&view=article&id=3300:nokia-kiihdytti-piensolun-yhteen-gigabittiin&catid=13&Itemid=101

    Reply
  14. Tomi Engdahl says:

    The downsides of tech complexity
    http://www.edn.com/electronics-blogs/brians-brain/4440280/The-downsides-of-tech-complexity?_mc=NL_EDN_EDT_EDN_consumerelectronics_20150909&cid=NL_EDN_EDT_EDN_consumerelectronics_20150909&elq=bbafbd82010148b5a56da12276597399&elqCampaignId=24689&elqaid=27974&elqat=1&elqTrackId=184544e293904d7aa97c42e32b308cab

    An ongoing theme of many of my writeups involves the downsides of complexity; by making technology too difficult for consumers to understand and implement, you’re limiting your business opportunity both in the short term (due to customer returns) and long term (by shortchanging market adoption).

    Eventually (and time-inefficiently, since I was debugging from afar), after fruitlessly searching for interference sources in the form of neighbors’ access points, changing Wi-Fi broadcast channels, and doing other basic troubleshooting steps

    Although this situation might be understandable to a hard-core techie, I hope you can comprehend how bewildering it might be to a typical consumer.

    My solution might have been in the spirit of “If you only have a hammer, you tend to see every problem as a nail,” but it was effective. It didn’t have to be this way, however. Simply telling a consumer that the gateway broadcasts two wireless signals, one potentially faster than the other but also more limited in its range, would have gone a long way. And the gateway’s funky dual-subnet behavior should get tossed, too.

    Reply
  15. Tomi Engdahl says:

    IoT Nets Snags Wide-Area Player
    On-Ramp launches U.S. net, new name
    http://www.eetimes.com/document.asp?doc_id=1327641&

    The horse race to build a low-power, wide-area network for the Internet of Things just got a new contender. On-Ramp Wireless (San Diego) changed its name to Ingenu Networks and announced it is raising funds to roll out a public IoT network in the U.S. by the end of 2017.

    Ingenu’s 2.4 GHz technology will compete with as many as a half dozen separate 900 MHz offerings from groups including Sigfox, NWave and the LoRa Alliance founded by Semtech. They are all racing to beat versions of cellular and Wi-Fi networks tailored for IoT that are expected to hit the market in about two years.

    The Ingenu launch is “one of the biggest news stories in IoT networks this year, and it will help them build scale,” said Aapo Markkanen, an analyst for Machina Research following the area.

    Over time, Markkanen expects the area of low-power, wide-area networks will fragment into tiers of offerings at high, medium and low data rates. He currently puts the new Ingenu effort in a new tier supporting about 10 Kbits/second, below a medium-tier LoRa but above a low-tier Sigfox and NWave.

    “When LTE-M is ready then that will complicate things further,” he said, referring to a part of the LTE-Advanced, Release 13 offering expected to emerge in about two years.

    Phoenix and Dallas will be the first cities connected on a public offering Ingenu calls the Machine Network. The company claims it already covers 50,000 square miles in the Midwest and northwestern Texas.

    Ingenu’s Random Phase Multiple Access (RPMA) technology is a 39 dB variant of a Direct Sequence Spread Spectrum network. It uses antenna diversity to offset the wider reach of its 900 MHz competitors.

    The company claims RPMA delivers the highest link budget of all its competitors. A shoebox-sized access point can cover up to 500 square miles and can handle hundreds of thousands of end points, it said in a white paper. However one user, the city of Anaheim, Calif., operates three access points to cover 50 square miles.

    RPMA trades off more complex nodes for a less complex network with a relatively high link budget. It cannot support video for remote surveillance, but it can support mobile connections up to 2 Kbits/second at 90 miles/hour.

    End-node modules cost about $10, well below current cellular M2M modules but higher than some competitors. The implementation requires an ASIC in the node and the access point which handle network time and frequency synchronization among other jobs.

    By contrast, Sigfox uses off-the-shelf parts and LoRa requires one device sourced from Semtech.

    Ingenu’s white paper includes extensive comparisons to Sigfox and LoRa, signaling they are seen as its closest competitors. It claims it can deploy a network with one node per square mile for $180 per node, compared to $14,600 and $7,000 for networks based on Sigfox and LoRa technology, respectively.
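
    The per-node economics quoted above are easy to sanity-check. The sketch below is a back-of-the-envelope comparison using only the per-node figures as cited here from the white paper, at its stated density of one node per square mile; the 100-square-mile coverage area is an arbitrary example, not from the article.

```python
# Cost-per-node figures as quoted from Ingenu's white paper (USD),
# assuming the paper's density of one node per square mile.
COST_PER_NODE = {"RPMA": 180, "LoRa": 7_000, "Sigfox": 14_600}

def deployment_cost(square_miles: int, technology: str) -> int:
    """Total node cost for covering an area at one node per square mile."""
    return square_miles * COST_PER_NODE[technology]

# Example: a hypothetical 100-square-mile metro deployment.
for tech in ("RPMA", "LoRa", "Sigfox"):
    print(f"{tech:>6}: ${deployment_cost(100, tech):,}")
```

    With these quoted figures, such a deployment would cost $18,000 in RPMA nodes, versus $700,000 for LoRa and $1,460,000 for Sigfox, which is exactly the gap the white paper is arguing from.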

    The idea for the national public network is the brainchild of John Horn, who took over as chief executive at On-Ramp three months ago. Horn previously worked on M2M networks for T-Mobile and one of its distributors.

    Ingenu’s initiative comes at a time when Sigfox hopes to set up 1,300 base stations to cover 10 U.S. cities by the end of the year and as many as 4,000 base stations covering 30 cities by the end of 2016. Meanwhile LoRa is in talks to partner with a carrier to create a regional network and recently hired an IoT expert from outside Semtech to lead its alliance. Ingenu lacks the big company backing of the LoRa Alliance which includes giants such as Cisco, IBM, Microchip and SK Telecom.

    The window for such newcomers may only be open for a couple years. Cellular proponents are already starting to ship a modified version of LTE called Category 1 geared for lower power, lower bandwidth IoT networks. However chips for more compelling variants such as Category 0 and Cat-M are not expected until 2016 and 2017 respectively.

    Separately, the 3GPP cellular standards group is considering separate proposals from Huawei and others for a variant of GSM for IoT. The so-called Clean Slate or Cellular IoT proposal from Huawei and the startup Neul it acquired talks about subdividing 200 KHz channels but sports lower capacity than RPMA and may not be ready until 2018, the Ingenu white paper said.

    Reply
  16. Tomi Engdahl says:

    Hackers Abuse Satellite Internet Links To Remain Anonymous
    http://it.slashdot.org/story/15/09/09/141235/hackers-abuse-satellite-internet-links-to-remain-anonymous

    Poorly secured satellite-based Internet links are being abused by nation-state hackers, most notably by the Turla APT group, to hide command-and-control operations, researchers at Kaspersky Lab said today. Active for close to a decade, Turla’s activities were exposed last year.

    Turla APT Group Abusing Satellite Internet Links
    https://threatpost.com/turla-apt-group-abusing-satellite-internet-links/114586/

    Poorly secured satellite-based Internet links are being abused by nation-state hackers, most notably by the Turla APT group, to hide command-and-control operations, researchers at Kaspersky Lab said today.

    Active for close to a decade, Turla’s activities were exposed last year; the Russian-speaking gang has carried out espionage campaigns against more than 500 victims in 45 countries, most of those victims in critical areas such as government agencies, diplomatic and military targets, and others.

    Its use of hijacked downstream-only links is a cheap ($1,000 a year to maintain) and simple means of moving malware and communicating with compromised machines, Kaspersky researchers wrote in a report. Those connections, albeit slow, are a beacon for hackers because links are not encrypted and ripe for abuse.

    “Once an IP address that is routed through the satellite’s downstream link is identified, the attackers start listening for packets coming from the internet to this specific IP,” the researchers wrote. “When such a packet is identified, for instance a TCP/IP SYN packet, they identify the source and spoof a reply packet (e.g. SYN ACK) back to the source using a conventional Internet line.”

    Abuse of satellite links is not solely the domain of Turla. HackingTeam command and control servers, for example, were found to be using such links to mask operations, as were links traced to Rocket Kitten and Xumuxu, two APT groups that are government-backed or have governments as customers, Kaspersky said.

    Kaspersky speculates that APT groups turn to satellite-based Internet links for C&C for a number of reasons, including as a countermeasure against botnet takedowns by law enforcement and ISPs, which open an avenue for researchers to determine who is behind an operation. Using these satellite links, however, is not without its risks to the attacker.

    “On the one hand, it’s valuable because the true location and hardware of the C&C server cannot be easily determined or physically seized. Satellite-based Internet receivers can be located anywhere within the area covered by a satellite, and this is generally quite large,” the researchers wrote. “The method used by the Turla group to hijack the downstream links is highly anonymous and does not require a valid satellite Internet subscription. On the other hand, the disadvantage comes from the fact that satellite-based Internet is slow and can be unstable.”

    Satellite Turla: APT Command and Control in the Sky
    How the Turla operators hijack satellite Internet links
    https://securelist.com/blog/research/72081/satellite-turla-apt-command-and-control-in-the-sky/

    Although relatively rare, since 2007 several elite APT groups have been using — and abusing — satellite links to manage their operations — most often, their C&C infrastructure. Turla is one of them. Using this approach offers some advantages, such as making it hard to identify the operators behind the attack, but it also poses some risks to the attackers.

    Real satellite links, MitM attacks or BGP hijacking?

    Purchasing satellite-based Internet links is one of the options APT groups can choose to secure their C&C traffic. However, full duplex satellite links can be very expensive: a simple duplex 1Mbit up/down satellite link may cost up to $7000 per week. For longer term contracts this cost may decrease considerably, but the bandwidth still remains very expensive.

    Another way of getting a C&C server into a satellite’s IP range is to hijack the network traffic between the victim and the satellite operator and to inject packets along the way. This requires either exploitation of the satellite provider itself, or of another ISP on the way.

    These kinds of hijacking attacks have been observed in the past and were documented by Renesys (now part of Dyn) in a blogpost dated November 2013.

    The hijacking of satellite DVB-S links has been described a few times in the past and a presentation on hijacking satellite DVB links was delivered at BlackHat 2010 by the S21Sec researcher Leonardo Nve Egea.

    While the dish and the LNB are more-or-less standard, the card is perhaps the most important component. Currently, the best DVB-S cards are made by a company called TBS Technologies. The TBS-6922SE is perhaps the best entry-level card for the task.

    The TBS card is particularly well-suited to this task because it has dedicated Linux kernel drivers and supports a function known as a brute-force scan which allows wide-frequency ranges to be tested for interesting signals.

    Unlike full duplex satellite-based Internet, the downstream-only Internet links are used to accelerate Internet downloads and are very cheap and easy to deploy. They are also inherently insecure and use no encryption to obfuscate the traffic. This creates the possibility for abuse.

    Companies that provide downstream-only Internet access use teleport points to beam the traffic up to the satellite. The satellite broadcasts the traffic to larger areas on the ground, in the Ku band (12–18 GHz), by routing certain IP classes through the teleport points.

    To attack satellite-based Internet connections, both the legitimate users of these links as well as the attackers’ own satellite dishes point to the specific satellite that is broadcasting the traffic. The attackers abuse the fact that the packets are unencrypted. Once an IP address that is routed through the satellite’s downstream link is identified, the attackers start listening for packets coming from the Internet to this specific IP. When such a packet is identified, for instance a TCP/IP SYN packet, they identify the source and spoof a reply packet (e.g. SYN ACK) back to the source using a conventional Internet line.

    At the same time, the legitimate user of the link just ignores the packet as it goes to an otherwise unopened port, for instance, port 80 or 10080.
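
    The hijacking logic described above can be modeled in a few lines. This is a toy simulation of the decision flow only: observe an unencrypted downlink, answer SYNs aimed at a hijacked IP with a spoofed SYN-ACK sent over a terrestrial line. The IP addresses are documentation-range placeholders, and no real packets are involved.

```python
from dataclasses import dataclass

@dataclass
class Packet:
    src: str    # source IP
    dst: str    # destination IP
    dport: int  # destination TCP port
    flags: str  # simplified TCP flags, e.g. "SYN" or "SYN-ACK"

# Illustrative values only: an RFC 5737 documentation address and one of
# the "otherwise unopened" ports mentioned in the article.
HIJACKED_IP = "203.0.113.7"
CNC_PORT = 10080

def attacker_observes(pkt: Packet):
    """Watching the downlink: spoof a SYN-ACK for SYNs aimed at the C&C IP."""
    if pkt.dst == HIJACKED_IP and pkt.dport == CNC_PORT and pkt.flags == "SYN":
        # The reply travels over a conventional Internet line, but its
        # source address claims to be the hijacked satellite IP.
        return Packet(src=HIJACKED_IP, dst=pkt.src, dport=0, flags="SYN-ACK")
    return None  # everything else is ignored

def legit_user_stack(pkt: Packet, open_ports=frozenset()):
    """The real subscriber's stack silently drops traffic to closed ports."""
    return pkt.dport in open_ports

syn = Packet(src="198.51.100.9", dst=HIJACKED_IP, dport=CNC_PORT, flags="SYN")
print(attacker_observes(syn).flags)  # the attacker answers the bot
print(legit_user_stack(syn))         # the legitimate user never notices
```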

    During the analysis, we observed the Turla attackers abusing several satellite DVB-S Internet providers, most of them offering downstream-only connections in the Middle East and Africa.

    Reply
  17. Tomi Engdahl says:

    Nokia to launch 5G radio network tests next year

    According to Nokia, its aim is to launch preliminary tests of a 5G broadband radio network connection next year.

    The radio network provides the connection between the user and the fiber-optic network. At best, a 10-gigabit rate from the base station could provide a one-gigabit link to the home user.

    In Finland, the fastest home connections are currently typically up to 100 megabits per second.

    All the major network manufacturers are frantically developing 5G broadband technology.

    Both Nokia and Huawei have previously announced network tests for the 2018 Winter Olympics and the Football World Cup.

    Source: http://www.tivi.fi/Kaikki_uutiset/nokia-kaynnistaa-5g-radioverkon-testit-ensi-vuonna-3482692

    Reply
  19. Tomi Engdahl says:

    Plug In an Ethernet Cable, Take Your Datacenter Offline
    http://hardware.slashdot.org/story/15/09/09/2117211/plug-in-an-ethernet-cable-take-your-datacenter-offline

    The Next Web reports on a hilarious design failure built into Cisco’s 3650 and 3850 Series switches, which TNW terms “A Network Engineer’s Worst Nightmare”. By plugging in a hooded Ethernet cable, you…well, you’ll just have to see the picture and laugh.

    This hilarious Cisco fail is a network engineer’s worst nightmare
    http://thenextweb.com/insider/2015/09/07/this-hilarious-cisco-fail-is-a-network-engineers-worst-nightmare/

    In 2013, Cisco issued a ‘field notice’ warning of a problem with its very expensive 3650 and 3850 Series Switches, used in many datacenters around the world.

    That field notice detailed a major problem with the switches, discovered after they were released: plugging in a cable could wipe them entirely in just a few seconds.

    The cables, which are sometimes accidentally used in datacenters, feature a protective boot that sticks out over the top.

    That boot hits the reset button, which happens to be positioned directly above port one of the Cisco switch, causing the device to quietly reset to factory settings.

    Such a situation could cause a problem in any size datacenter, where these switches and cables are commonly used. If someone plugged a cable into port one, unknowingly pushing the button, they’d possibly be taking down the entire network without even realizing it. If your switches are configured right, however, the blip should be only brief.

    It’s amazing that Cisco didn’t catch this before the device was released, and even more so that the ‘fix’ for the problem suggests using a different cable or cutting off the boot.

    Reply
  20. Tomi Engdahl says:

    Fiber optics: a backbone for advanced building design
    http://www.csemag.com/single-article/fiber-optics-a-backbone-for-advanced-building-design/e4747601e3d082add71432653a36d1ea.html

    Fiber-optic cables are an integral part of a building communication system. Although they are commonly installed for the enterprise network communication, they are also designed into building-management systems and electrical-power coordination.

    In the language of the information and communications technology (ICT) professional, fiber-optic cabling is the backbone medium for transporting data across campuses and through the spine of buildings. This in itself is not new. Fiber-optic cabling has played an integral part in network construction since the 1980s. Every year we continue to see advancements in technology, which in turn create more data and therefore highlight the role fiber optics plays in data transport. As a corollary to Moore’s Law, which describes the doubling of transistors per integrated circuit every two years, there is a corresponding increase in the amount of data generated and then transported.

    This brings us to the 2010s. Streaming video on the network is commonplace, and the Internet is so pervasive that it is not evident if data is being sourced from the enterprise network or the “cloud.” Access to office networks is expected to be ubiquitous in every building. Not having access can leave one feeling detached.

    Through this evolution of computing and networking, there has been a parallel evolution in building design. It started with the automation of manufacturing and business processes; in other words, it started with the internal operations for which the buildings provided shelter. We are now witnessing the start of fully integrated and automated buildings. As we design advanced intelligent buildings, they are generating their own data to add to the network load.

    The data generated by the building infrastructure is also related to the next wave of data to be transported, referred to as the Internet of Things.

    Optical fiber versus copper

    Copper cable technology has worked hard to keep pace with the continuing increases in network bandwidth requirements. The current high-speed copper cabling standard in the U.S. is a Category 6A unshielded twisted-pair cable. It is rated at a frequency of 500 MHz and bandwidth that supports 10-GB/sec Ethernet for a standard cable length not exceeding about 100 m between network devices. This is more than adequate for most office desktop workstations and even backbone cabling in small buildings where there are one or two communications rooms not exceeding the 100-m cable length.

    However, the predominant copper cabling being installed today is a Category 6 cable rated at 250-MHz frequency, which supports 1-GB/sec Ethernet. Desktop bandwidth requirements have not yet passed the tipping point that would demand the higher-bandwidth cable. For a typical hierarchical LAN design, copper cable to the office desktop is less expensive than fiber-optic cable. Just as hard as the copper industry has worked to increase the bandwidth (i.e., longevity) of its product, the fiber-optics industry is working to bring down the cost of its products.

    There are a number of limitations for copper cabling, one being the 100-m channel distance. This ideally serves as the connection point between the communication-room network equipment and the desktop workstation (or another network-connected device). Copper cabling is more susceptible to electromagnetic interference (EMI) noise than fiber-optic cable, although our engineering team has routed unshielded Category 5e and 6 cable in high-EMI-generating manufacturing environments, and it has performed with no noticeable degradation in network performance.

    Copper cable is susceptible to thermal noise generated when the cable is routed through high-temperature environments. From a security standpoint, copper cable is also less secure: the signals traveling along the cable generate radio-frequency (RF) radiation, which can be detected. However, fiber-optic cable is not impervious to signal tapping either. Although a fiber-optic cable does not radiate an RF signal like a copper cable, a very skilled technician can expose a single strand of optical fiber in a cable and bend it to allow some of the light to escape. This may sound far-fetched, but technicians who splice fiber-optic cable actually use a similar technique to inject and siphon light from a fiber strand while splicing, in order to measure the quality of the splice.

    Fiber-optic glass, which is an excellent dielectric, is effective in providing electrical isolation for the data circuits connected between different buildings on a campus or between communication rooms spread widely apart in a building. The entire cable can be made of dielectric materials, which requires no bonding or grounding. This can help in providing electrical noise isolation for the network equipment and to avoid ground loops.

    A true advantage copper cable has over fiber optics is the ability to transmit power from a communication room to a device. Power over Ethernet (PoE) is a technology allowing a variety of devices to be powered using the same data cable that is used for data transport. There is not enough power in a light signal in a fiber-optic cable to power a device. Fiber-optic end equipment still needs to rely on copper cables for this.
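
    As a concrete illustration of that trade-off, the snippet below checks device power draws against the budget available at the powered device (PD) under the two common IEEE PoE standards; the example device wattages are made-up illustrations, not figures from the article.

```python
# Usable power at the powered device (PD), after cable losses, for the
# two common IEEE Power over Ethernet standards.
PD_POWER_W = {
    "802.3af": 12.95,  # "PoE": 15.4 W sourced at the switch port
    "802.3at": 25.5,   # "PoE+": 30 W sourced at the switch port
}

def can_power(device_watts: float, standard: str) -> bool:
    """True if the PD power budget covers the device's draw."""
    return device_watts <= PD_POWER_W[standard]

print(can_power(7.0, "802.3af"))   # e.g. a desk phone: fits basic PoE
print(can_power(20.0, "802.3af"))  # too much for basic PoE
print(can_power(20.0, "802.3at"))  # fits within PoE+
```

    Since a light signal carries nowhere near this much power, fiber-fed end equipment still needs a copper run (or local power) even when all its data arrives optically.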

    The largest use of fiber-optic cable in a typical commercial building is for enterprise networks. For industrial and manufacturing facilities, the networks supporting building and manufacturing automation may dominate the enterprise network. For other complex facilities such as a data center or an innovation hub, the building automation system (BAS) can be a significant portion of the enterprise network infrastructure, which is often completely isolated from the data center production network. Instead of a manufacturing-automation network, these types of facilities have networks supporting the data center’s data traffic and collaborative information, respectively.

    For industrial and manufacturing facilities, the networks can be categorized into three major groups: office automation, facility automation, and factory automation.

    In an industrial or manufacturing environment, complicated or custom requirements typically require the use of a programmable logic controller (PLC) or distributed control system (DCS). Data from direct digital control (DDC), PLC, and DCS systems is generated for monitoring and control by a supervisory control and data-acquisition (SCADA) system.

    What all of these networks have in common is that they are using Ethernet as the communication protocol and can be supported by the enterprise information technology (IT) network. The cabling infrastructure provided by IT can handle network protocols other than Ethernet, but the network equipment and architecture typically would not. The IT network may be able to provide a couple of fiber strands in the fiber backbone for a Modbus serial link, but this would not be data going through the IT Ethernet network equipment. IT staff members will design and support the network architecture to provide bandwidth allocation and network segregation between the different networks it hosts. They may choose to physically segregate the networks by allocating separate network equipment and fiber strands for each network type, or multiplex the data onto a few fiber strands using virtual LANs (VLANs) to isolate the network traffic. This is important to note because the network architecture is what determines the cabling architecture and, consequently, the type of fiber-optic cable along with the number of fiber strands that are needed.

    Security systems such as access control and closed-circuit television (CCTV) may or may not be included in the enterprise networks.

    Reply
  21. Tomi Engdahl says:

    Google Partners With CloudFlare, Fastly, Level 3 And Highwinds To Help Developers Push Google Cloud Content To Users Faster
    http://techcrunch.com/2015/09/09/google-partners-with-cloudflare-fastly-level-3-and-highwinds-to-help-developers-push-google-cloud-content-to-users-faster/

    Google shut down its free PageSpeed service last month and with that, it also stopped offering the easy to use content delivery network (CDN) service that was part of that tool. Unlike some of its competitors, Google doesn’t currently offer its own CDN service for developers who want to be able to host their static assets as close to their users as possible. Instead, the company now relies on partners like Fastly to offer CDN services.

    Today, it’s taking these partnerships a step further with the launch of its CDN Interconnect. The company has partnered with CloudFlare, Fastly, Highwinds and Level 3 Communications to make it easier and cheaper for developers who run applications on its cloud service to work with one of these CDNs.

    The interconnect is part of Google’s Cloud Interconnect Service that lets businesses buy network services that let them connect to Google over enterprise-grade connections or to directly peer with Google at its over 70 global edge locations.

    Developers who use a CDN Interconnect partner to serve their content — and that’s mostly static assets like photos, music and video — are now eligible to pay a reduced rate for egress traffic to these CDN locations.

    Google says the idea here is to “encourage the best practice of regularly distributing content originating from Cloud Platform out to the edge close to your end-users. Google provides a private, high-performance link between Cloud Platform and the CDN providers we work with, allowing your content to travel a low-latency, reliable route from our data centers out to your users.”

    Reply
  22. Tomi Engdahl says:

    Paul Mozur / New York Times:
    CloudFlare transfers technology to Baidu in partnership to make foreign sites more accessible in China, receives revenue split as part of virtual joint venture

    Partnership Boosts Users Over China’s Great Firewall
    http://www.nytimes.com/2015/09/14/business/partnership-boosts-users-over-chinas-great-firewall.html?_r=0

    It is one of the best-guarded borders in the world, and one of the most time-consuming to cross. Yet in the past few months, a new agreement has let people speed over it billions of times.

    The border is the digital one that divides China from the rest of the world. It is laden with inefficiencies and a series of filters known as the Great Firewall, which slows Internet traffic to a crawl as it travels into and out of China.

    Now, a partnership between an American start-up and a Chinese Internet behemoth has created a sort of fast lane to speed traffic across the border. In the process, the two companies are establishing a novel business model with implications for other American technology firms looking to do business in China’s politically sensitive tech industry.

    Using a mixture of CloudFlare’s web traffic technology and Baidu’s network of data centers in China, the two created a service that enables websites to load more quickly across China’s border. The service, called Yunjiasu, began operating in December. It has a unified network that makes foreign sites more easily accessible in China, and allows Chinese sites to run in destinations outside the country.

    Reply
  23. Tomi Engdahl says:

    Telenor Norway projects 2020 switch-off for its 3G network
    But M2M connectivity requirements give 2G network a reprieve until 2025
    http://www.theregister.co.uk/2015/06/04/2020_switch_of_for_telenor_norway_3g/

    Norwegian telco Telenor has outlined plans for switching off 3G, which may be a model for other operators.

    Norwegians love 4G so much that the Telenor group is looking to ape America in the growth of 4G. CEO Berit Svendsen says that the network will follow the rapid adoption of 4G found in the US and Asia.

    Telenor’s CTO Magnus Zetterberg told a recent investor day that 3G will disappear from the company’s airwaves in 2020, with 2G lasting until five years after that.

    The network already has quite mature 4G, which handles 60 per cent of all mobile data traffic. Although, as most networks report, 4G users consume twice as much data as 3G customers, 4G users will still be a minority.

    Voice is still new on 4G, though there is VoLTE – which Telenor has yet to launch – and other VoIP solutions. Nothing handles interconnect and roaming to the level of 2G or 3G, however, so keeping something which allows people to actually speak to one another is a pretty good idea. It’s 2G that wins out, because a lot of machine-to-machine (M2M) communications use GSM SMS.

    Reply
  24. Tomi Engdahl says:

    Danish telcos hang up on merger plans after EU pressure
    Three-O2 merger still in balance as EU stance seen as harbinger of things to come
    http://www.theregister.co.uk/2015/09/14/danish_telcos_abandon_merger_plans_after_eu_pressure_in_a_harbinger_of_things_to_come/

    Telenor and TeliaSonera have abandoned their merger plans following pressure from the European Commission.

    In an official statement on Friday, the pair called off the deal as they were unable to meet the Brussels competition watchdog’s demands. The Commish opened an in-depth investigation into the deal in April.

    “The merger discussions have now reached a point where it is no longer possible to gain approval for the proposed transaction,” the companies said this morning (September 11).

    TeliaSonera and Telenor – the second and third-largest players on the Danish market – announced the merger plans in December. But the EU’s competition chief Margrethe Vestager was concerned that consumers would suffer from a lack of competition if the two teamed up.

    Reply
  25. Tomi Engdahl says:

    Home Wi-Fi networks will accelerate to 10 gigabits

    Wi-Fi, now about 25 years old, is the most common way to share an Internet connection around the home. Until now, router speeds have struggled to keep up with Internet connection speeds. Freescale and Quantenna Communications have developed a router platform that enables data transfer at 10 gigabits per second.

    The companies say the new router is also the world’s first 10G Wave 3 router solution. It is based on a Quantenna platform that enables data transfer over an 8 x 8 channel in the 5 GHz band and a 4 x 4 channel in the 2.4 GHz range.

    In practice, this allows 12 signal streams into the router, with which a terminal can get very close to the maximum speed of 10 Gbps.

    In the future, the bottleneck of a home Internet connection will be in a completely different place than the Wi-Fi router. The same is of course often true today: many routers support, for example, 300 Mbps connections, but only a few homes can get an Internet connection that fast.
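
    A rough check of how 12 streams could add up to the "10G" figure: the per-stream PHY rates below are assumptions for illustration (on the order of 160 MHz channels with 1024-QAM at 5 GHz), not vendor specifications.

```python
# Aggregate PHY rate: 8 spatial streams at 5 GHz plus 4 at 2.4 GHz.
# Per-stream rates are illustrative assumptions, not vendor figures.
STREAMS = [
    (8, 1083),  # 5 GHz: 8 streams at ~1083 Mbps each (assumed)
    (4, 325),   # 2.4 GHz: 4 streams at ~325 Mbps each (assumed)
]

aggregate_mbps = sum(count * rate for count, rate in STREAMS)
print(f"{aggregate_mbps / 1000:.1f} Gbps")  # lands near the marketing "10G"
```

    As with all such headline figures, this is a raw PHY-rate sum shared among all clients; no single terminal would see 10 Gbps of usable throughput.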

    Source: http://etn.fi/index.php?option=com_content&view=article&id=3311:kodin-wifi-verkko-kiihtyy-10-gigabittiin&catid=13&Itemid=101

    Reply
  26. Tomi Engdahl says:

    Nokia, Ericsson and Intel to cooperate IoT networks

    Competition over the radio access technology that will connect billions of IoT devices to the Internet is only beginning. An important milestone for LTE will be reached this week, when 3GPP decides which narrowband technology is chosen for the LTE IoT version.

    Nokia, Ericsson and Intel believe that NB-LTE, a version of LTE tailored for the industrial Internet, is the best technology for this purpose. NB-LTE has tough competitors, for example CIoT, i.e. Cellular IoT, backed by Huawei and many others.

    NB-LTE's (Narrow Band LTE) key advantage is feasibility. A link can be implemented in, for example, a 200-kilohertz channel, since IoT devices do not need to transfer large amounts of data.

    LTE networks cover 90 per cent of the area in many countries, so the networks are ready for LTE-based IoT links.

    Intel has promised to bring NB-LTE support to its chipsets as early as next year. The Internet of Things fits well into Intel's current portfolio.

    Source: http://etn.fi/index.php?option=com_content&view=article&id=3323:nokia-ericsson-ja-intel-yhteistyohon-iot-verkoissa&catid=13&Itemid=101

    Reply
  27. Tomi Engdahl says:

    Broadband Users ‘Need’ At Least 10Mbps To Be Satisfied
    http://tech.slashdot.org/story/15/09/14/2148231/broadband-users-need-at-least-10mbps-to-be-satisfied

    A new report says broadband users need at least 10Mbps speeds to be satisfied with their connection — especially with regards to online video which is now seen as a staple Internet application. Researchers at Ovum measured both objective data such as speed and coverage alongside customer data to give 30 countries a scorecard.

    Broadband Users ‘Need’ Minimum Speed Of 10Mbps
    Read more at http://www.techweekeurope.co.uk/networks/broadband/ovum-broadband-speeds-10mbps-176810#PjDjjJTsdKPSXmPL.99

    Reply
  28. Tomi Engdahl says:

    It’s not broadband if it’s not 10 Mbps, says Ovum
    Prognosticator probes punters, slags slow services
    http://www.theregister.co.uk/2015/09/15/its_not_broadband_if_its_not_10_mbps_says_ovum/

    Market researcher Ovum has trotted around 30 countries worldwide to find out what makes people like their broadband provider, and reckons the minimum download speed to satisfy users is 10 Mbps.

    Based on both market performance data and qualitative surveys with end users, the analyst firm reckoned customers also expect three second page load times, a reliable and stable connection, and decent tech support.

    While UK regulator Ofcom reckons the average broadband user in its jurisdiction gets 23 Mbps,

    Fudzilla adds that customers resent latency, buffering and low picture quality.

    The bad news, if such was needed? As 4K video spreads, customer experience will demand speeds more like 50 Mbps, leaving today’s ADSL2+ services in the shade.
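    A rough illustration of why 4K pushes the "satisfying" speed from 10 Mbps toward 50 Mbps. The per-stream bitrates are assumptions for illustration, not figures from the Ovum report:

    ```python
    # How many concurrent video streams fit on a given line (assumed bitrates).
    hd_stream_mbps = 5    # assumed typical 1080p streaming bitrate
    uhd_stream_mbps = 20  # assumed typical 4K/HEVC streaming bitrate
    for line in (10, 23, 50):
        print(f"{line} Mbps line: {line // uhd_stream_mbps} concurrent 4K streams, "
              f"{line // hd_stream_mbps} HD streams")
    ```

    On this sketch a 10 Mbps line cannot carry even one 4K stream, while 50 Mbps handles two plus headroom.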

    Reply
  29. Tomi Engdahl says:

    IPv6 is great, says Facebook. For us. And for you a bit, too
    Zuck squad discovers what the rest of the world is already ignoring
    http://www.theregister.co.uk/2015/09/15/facebook_says_ipv6_makes_networks_faster/

    Facebook has wandered down to Speakers’ Corner and climbed onto a fruit-crate to spruik the benefits of the decades-old, much-needed and still-relatively-unused IPv6 protocol.

    With IPv4 addresses just-about-depleted worldwide, Facebook has penned a post telling websites to roll out the protocol, if they haven’t already.

    The post notes that “only 16.3 per cent of the top 1,000 websites have enabled IPv6 (according to Alexa.com). That means there’s another 83.7 per cent that are missing out on the benefits of IPv6”.

    From the point of view of one of the biggest data centre networks in the world, the vast advertising platform is in a decent position to judge, and reckons that site owners who make the change will get the benefit of a speed boost.

    “We’ve observed that accessing Facebook can be 10-15 percent faster over IPv6. We believe other developers will see similar advantages from migrating,” the post states.
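    The address-space arithmetic behind "IPv4 just-about-depleted" is worth spelling out:

    ```python
    # IPv4 vs IPv6 address-space sizes (2015 population figure is approximate).
    ipv4_addresses = 2 ** 32
    ipv6_addresses = 2 ** 128
    world_population = 7.3e9
    print(f"IPv4: {ipv4_addresses:,} addresses "
          f"(~{ipv4_addresses / world_population:.2f} per person)")
    print(f"IPv6: 2^128 = {ipv6_addresses:.3e} addresses")
    ```

    With fewer IPv4 addresses than people, NAT and address sharing are unavoidable; IPv6 removes that constraint entirely.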

    Reply
  30. Tomi Engdahl says:

    IETF doc proposes fix to stop descent into data centre ‘address hell’
    Proxy those MACs if you want manageable address tables, suggest Marvell and Huawei
    http://www.theregister.co.uk/2015/07/09/ietf_doc_proposes_fix_for_data_centre_address_hell/

    Address tables in data centres can fill up really quickly, so researchers from Huawei and Marvell have offered up a proposal to make them smaller.

    The purely-experimental RFC 7586 suggests that all hosts – including VMs – in an access domain be addressed through a proxy.

    The problem the RFC looks at is how to get data from one VM to another, when the subnet the two machines are on span multiple L2/L3 boundaries.

    As the RFC points out, if a VLAN or subnet has lots of hosts spanning different locations, and each access domain (for example different data centres) has hosts belonging to different VLANs, the address tables get very big, very fast.

    Its example is an access switch with 40 physical servers, 100 VMs per server, 4,000 attached MAC addresses, and 200 hosts per VLAN, “this access switch’s MAC address table potentially has 200 * 4,000 = 800,000 entries.”

    Instead, the RFC proposes a Scalable Address Resolution Protocol (SARP), in which a SARP proxy sits in front of the access switch:

    Even if they’re on the same VLAN, hosts on either side of the boundary would use the SARP proxy’s address rather than the host address.
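    The sizing example from the RFC can be reproduced directly, along with the effect of the proxy (the number of remote access domains below is an assumption for illustration):

    ```python
    # Reproducing the RFC 7586 MAC-table sizing example, with and without SARP.
    servers_per_switch = 40
    vms_per_server = 100
    local_macs = servers_per_switch * vms_per_server   # 4,000 attached MACs
    remote_peers_per_host = 200                        # hosts per spanning VLAN
    without_sarp = local_macs * remote_peers_per_host  # each host learns every peer
    print(f"without SARP: {without_sarp:,} entries")   # → 800,000

    # With a SARP proxy, remote hosts resolve to the proxy's MAC, so the table
    # holds only local MACs plus one entry per remote access domain.
    remote_domains = 10                                # assumed number of domains
    with_sarp = local_macs + remote_domains
    print(f"with SARP:    {with_sarp:,} entries")      # → 4,010
    ```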

    Reply
  31. Tomi Engdahl says:

    Intel, Nokia, Ericsson square off against Chinese IoT threat
    Proposed NB-LTE narrowband comms standard leaves Huawei on the outer
    http://www.theregister.co.uk/2015/09/15/intel_nokia_ericsson_square_off_against_chinese_iot_threat/

    US and European vendors have linked arms in an effort to set low-bandwidth mobile communications standards.

    Intel, Ericsson and Nokia have thrown their weight behind a standard proposal called Narrow-Band LTE (NB-LTE) to support the comms requirements of Internet of Things devices.

    If adopted – there’s a vote on narrowband technologies slated for a 3GPP meeting next week, as Lightreading reports – NB-LTE would back the US-Euro vendors against the Huawei-led Narrowband Cellular IoT proposal.

    It would also launch yet another Intel attempt to get a foothold in the mobile market, a segment that’s been a persistent disappointment for Chipzilla.

    Intel says it’ll have a 2016 roadmap for NB-LTE products aimed at power-efficient, slim form factor products. Ericsson and Nokia will concentrate on developing the infrastructure side of NB-LTE, hopefully with a minimum of disruption to networks that operators have already deployed.

    In this white paper (PDF), Nokia puts its position that NB-LTE’s 200 kHz channel is optimised for machine-to-machine communications, and can be implemented as a software upgrade to existing base stations.

    NB-LTE’s proponents are targeting the 700-900 MHz spectrum and want their devices to have a battery life of more than ten years.

    http://networks.nokia.com/sites/default/files/document/nokia_lte-m_-_optimizing_lte_for_the_internet_of_things_white_paper.pdf
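    A hedged sketch of the kind of power budget behind a ten-year battery-life target. All figures below are illustrative assumptions, not from the NB-LTE proposal, and self-discharge is ignored:

    ```python
    # Battery-life estimate for a sleep-mostly narrowband IoT device (assumed figures).
    battery_mah = 1000            # assumed primary cell capacity
    sleep_ua = 5                  # assumed deep-sleep current, microamps
    tx_ma, tx_seconds = 100, 2    # assumed current and duration of one daily report
    hours_per_year = 24 * 365

    avg_ma = sleep_ua / 1000 + (tx_ma * tx_seconds) / 3600 / 24
    years = battery_mah / (avg_ma * hours_per_year)
    print(f"average draw {avg_ma * 1000:.1f} uA -> {years:.1f} years")
    ```

    The point of the sketch: with a narrowband link and tiny payloads, average current is dominated by sleep, which is how multi-year lifetimes become plausible.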

    Reply
  32. Tomi Engdahl says:

    Boosting Upload Speeds from Smartphones to Networks
    http://www.eetimes.com/author.asp?section_id=36&doc_id=1327690&

    Uploading 4K video from a smartphone is the next frontier, but what’s slowing down the revolution? Digital signal processing expert Will Strauss explains likely fixes.

    As smartphone users, we are used to watching news videos, downloading apps and games and viewing YouTube clips every day. However, we are increasingly sending our own videos to YouTube, and the company says that 300 hours of video is now uploaded every minute. And that’s in addition to our uploading over 70 million photos a day to Instagram.

    Clearly, users want to share their experiences with others. And it’s interesting to note that uplink traffic increases dramatically at major sports and cultural events, as many want to also share their event experience with their friends. We want to do more with our smartphones; traffic demands are high, but currently the bottleneck is with upload speeds. With powerful 12 MP+ cameras with 4K video now becoming the norm, our networks have to accommodate drastically increasing upload traffic.

    There are three ways to increase LTE upload speeds. The first and most significant increase is through carrier aggregation (CA), the primary method of increasing uplink/uploading speeds. By combining two or more carriers the total bandwidth is expanded. LTE category 6 (Cat 6) is becoming mainstream and provides uplink speeds of 50 Mbps by employing a single 20 MHz carrier. Upcoming Cat 7 employs two 20 MHz uplink carriers, enabling twice the speed to 100 Mbps.

    The second method of increasing uplink speeds is through higher-order modulation. Basic LTE employs Quadrature Amplitude Modulation (QAM)
    16 QAM enables two 20 MHz bands bonded through carrier aggregation to provide uplink speeds of 100 Mbps. By employing 64 QAM, uplink speeds are increased by 50% to 150 Mbps.
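    The uplink arithmetic above can be checked directly: carrier aggregation multiplies bandwidth, and higher-order QAM adds bits per symbol:

    ```python
    # Cat 6/7 uplink speeds from carrier aggregation and modulation order.
    import math

    base_carrier_mbps = 50             # Cat 6 uplink: one 20 MHz carrier at 16 QAM
    cat7_mbps = 2 * base_carrier_mbps  # Cat 7: two aggregated 20 MHz carriers
    bits_16qam = math.log2(16)         # 4 bits per symbol
    bits_64qam = math.log2(64)         # 6 bits per symbol
    with_64qam = cat7_mbps * bits_64qam / bits_16qam
    print(cat7_mbps, "Mbps ->", with_64qam, "Mbps with 64 QAM")  # 100 -> 150.0
    ```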

    The third technique of increasing uplink speeds is through Qualcomm’s proprietary Uplink Data Compression (UDC) from the smartphone. The innovative Qualcomm modem-driven solution provides additional uplink gains. Basically, by compressing all uplink TCP/UDP headers and data fewer bits are transmitted in the uplink channel.

    Qualcomm’s UDC compression gains vary by application, with web browsing, for example, providing an impressive 70% compression gain. Since fewer bits are transmitted over the band, it reduces potential interference to other traffic.
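    UDC itself is proprietary, but the effect can be illustrated with a generic compressor: web traffic is highly repetitive, so compressing it before transmission sends far fewer bits over the air. The payload below is a made-up example; zlib is only a stand-in for Qualcomm's scheme:

    ```python
    # Illustrating uplink compression gain with zlib on repetitive HTTP-like data.
    import zlib

    payload = (b"GET /feed?page=1 HTTP/1.1\r\nHost: example.com\r\n"
               b"Accept: text/html\r\nCookie: session=abc123\r\n\r\n") * 50
    compressed = zlib.compress(payload)
    gain = 1 - len(compressed) / len(payload)
    effective_speedup = len(payload) / len(compressed)
    print(f"compression gain: {gain:.0%}, "
          f"effective uplink speedup: {effective_speedup:.1f}x")
    ```

    Fewer bits on air also means less interference to other traffic, as the article notes.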

    LTE networks come in two flavors: frequency division duplex (FDD) and time division duplex (TDD).

    LTE-Advanced modems that support Cat 6 and 7 uplink speeds of 100 and 150 Mbps, respectively, will enable smartphone suppliers to provide technology leadership and product differentiation through early adoption of leading-edge products. Network operators gain greater uplink capacity and efficiency enabling them to provide differentiated service offerings.

    Reply
  33. Tomi Engdahl says:

    Wi-Gig signals are bouncing off the walls, can’t settle on the sofa
    Boffins check out 60 GHz radio around the home and find it’s not yet fit for domestic duties
    http://www.theregister.co.uk/2015/09/16/hang_on_wigig_fans_are_you_ready_to_design_networks_that_will_work/

    “Millimetre-wave” wireless technologies (such as 802.11ad) are seen by vendors as a key part of future in-home connectivity, but there’s a lot of work to be done to actually make it work.

    That’s the conclusion of a group of University of Buffalo boffins, who ran a series of tests on 60 GHz wireless systems to see how they perform in the real world.

    The 60 GHz spectrum is eyed greedily for many reasons: the more traffic we can move away from the familiar cellular bands – 700 MHz, 1.2 GHz, 1.8 GHz, 3.2 GHz and so on – the more spectrum is available for mobiles that can’t use the short-range millimetre-wave technology.

    Also, 60 GHz spectrum gives you a lot of room for very wide, and therefore very fast, channels; and third, its short range means there’s less worry that your neighbour’s access point will create noise that slows down your network.

    The only problem, according to this paper at ArXiv, is that we’re not yet very good at using 60 GHz systems. The university’s Swetank Saha, Viral Vijay Vira and Anuj Garg tested 60 GHz systems in a couple of quite simple configurations – “room” and “corridor” – and pinned down several problems they hope will help inform future designers.

    In particular, they write, distance and line-of-sight are challenges.

    Orientation: It sounds obvious, but at 60 GHz using current off-the-shelf kit, misalignment between transmitter and receiver can kill the communication entirely. Reflection helps (because the multi-in-multi-out, MIMO, antennas are designed to do some beam-steering), and hard surfaces do that better than soft.

    Transmitter height: with a receiver fixed at 2’6” (762 mm), the researchers tested the transmitter at between 2’6” and 6’6” (762 and 1981mm). As the excerpt of the results in the image below shows, the combination of height and orientation gave wildly variable performance.

    Distance: Interestingly, because of the wireless kit’s use of beam-steering and MIMO, distance effects are more complex than a linear collapse in throughput.
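    Part of why 60 GHz is so range- and alignment-sensitive falls straight out of the free-space path loss formula, FSPL = 20·log10(4πdf/c):

    ```python
    # Free-space path loss at familiar Wi-Fi bands vs 60 GHz.
    import math

    def fspl_db(distance_m: float, freq_hz: float) -> float:
        c = 3e8  # speed of light, m/s
        return 20 * math.log10(4 * math.pi * distance_m * freq_hz / c)

    for f in (2.4e9, 5e9, 60e9):
        print(f"{f / 1e9:>4.1f} GHz @ 5 m: {fspl_db(5, f):.1f} dB")
    ```

    At the same distance, 60 GHz loses about 28 dB more than 2.4 GHz, which is why beam-steering and reflective surfaces matter so much in these tests.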

    Reply
  34. Tomi Engdahl says:

    Arista joins data centre 25/50/100 Gbps switch battle
    Broadcom’s Tomahawk underpins high-speed low-cost iron
    http://www.theregister.co.uk/2015/09/16/arista_data_centre_switch_battle/

    Arista Networks has joined the race to get 25 Gbps Ethernet kit into hyperscale data centres.

    The company, a member of the cabal that last year kicked off the 25/50/100 Gbps effort along with Google, Microsoft, Broadcom, and Mellanox, has now lifted the cloth from a range of switches in leaf and spine form factors.

    Using the Tomahawk switches that Broadcom unveiled in September 2014, Arista is pitching a 1RU 32-port switch, and a couple of 2U modular switches.

    The company’s director of product management told The Register’s HPC sister publication The Platform that the 32-port unit will ship for under US$1,000 per port, while the bigger units will roll out at around $2,000 per port.

    Reply
  35. Tomi Engdahl says:

    Here’s the Real Way to Get Internet to the Next 4 Billion People
    http://www.wired.com/2015/09/heres-real-way-get-internet-next-4-billion-people/

    Around 3.2 billion people have access to the Internet. That’s amazing, but it’s fewer than half of the 7 billion or so people on earth. And while Internet access was once a luxury, it is quickly becoming essential as the world’s commerce, educational resources, and entertainment move online.

    Fortunately, there’s no shortage of schemes to bring Internet to underserved countries, ranging from low-orbit satellites to high-altitude balloons to drones. Some analysts have criticized these projects, arguing they won’t deliver Internet access at prices people in the developing world can afford.

    Even as billionaires like Elon Musk and Mark Zuckerberg plot to wire the unwired, people in these countries, with a little help from outside companies and investors, are quickly and quietly building their own Internet infrastructure. And they’re doing it using fairly rudimentary methods: by trenching pipes and building cell towers. They have a long way to go, but they’re already proving remarkably successful.

    A major problem in emerging countries is that when Internet access is available, it’s often expensive. That’s due in part to a lack of competition among providers

    a potential market for wireless data services. “Nearly everyone had mobile phones, but hardly anyone had access to the Internet,”

    Undersea cables have brought a 20-fold increase in overall bandwidth to the African continent in the past five years

    “The cost has fallen from thousands of dollars per megabit to $50 to $100 per megabit,”

    These cables brought faster connections to coastal countries like Ghana, Nigeria, and South Africa first

    Building this new infrastructure is not cheap. Seacom cost $650 million to build, and MainOne cost $240 million. And that does not include the cost of building pipes into the center of the country. That makes the idea of using satellites to blanket the planet with Internet access sound particularly appealing. The question, though, is whether anyone can make satellites cost-effective.

    By placing satellites in low-earth orbit—roughly 100 to 1,250 miles overhead—these companies say they can provide access that is far faster than traditional satellite Internet, and with less latency
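    The latency advantage is simple physics: propagation delay is distance divided by the speed of light, and a bent-pipe link crosses the altitude four times (user up, gateway down, and back):

    ```python
    # Best-case round-trip propagation delay for satellite Internet by altitude.
    C_KM_S = 299_792.458  # speed of light, km/s

    def min_rtt_ms(altitude_km: float) -> float:
        # Satellite directly overhead; four altitude crossings per round trip.
        return 4 * altitude_km / C_KM_S * 1000

    for name, alt in [("LEO low", 160), ("LEO high", 2000), ("GEO", 35_786)]:
        print(f"{name} ({alt} km): >= {min_rtt_ms(alt):.0f} ms RTT")
    ```

    Low-earth orbits keep the floor at a few milliseconds to tens of milliseconds, versus several hundred milliseconds for geostationary satellites.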

    companies tend to only consider the space-side costs of running an Internet business, such as launch costs and the satellites themselves. “They don’t think of the ground stations, the real estate for gateways, the software that makes the whole system work,”

    Still, $10 billion might not sound bad for a service that, unlike an undersea cable, would provide worldwide access. And OneWeb and SpaceX have a significant advantage in that they can learn from the mistakes of those who have come before them.

    And even though launching satellites may sound like less of a legal hassle than digging trenches for Internet cables, satellite companies face their own regulatory issues, including the possibility of being required to censor content for local governments. “By international law you can only provide service into a particular territory if you have permission from the sovereign country,” Rusch explains.

    Add these issues up, and there’s a real danger these satellite companies will have to charge more than people in developing countries can afford or are willing to pay

    it’s unwise to put all of the world’s hopes for connectivity in the hands of one or two US companies

    Reply
  36. Tomi Engdahl says:

    The Finnish state intends to become an Internet operator – if commercial companies do not provide fiber broadband

    The Ministry of Transport and Communications has outlined that the state can start selling fiber connectivity to the people if the telecom operators do not take care of it.

    The idea under discussion at the Ministry of Transport and Communications is that the state network company Cinia could sell broadband subscriptions directly to citizens and businesses.

    - We have heard of cases in which the telecom operator will not necessarily even offer a fiber optic connection, says Laura Vilkkonen, director of the communications policy department at the Ministry of Transport and Communications.

    - Sometimes it seems that at the moment even the biggest operators only want to sell mobile subscriptions.

    At the end of last year, fiber-based access was available to only just over half of Finnish households. The share in Helsinki was about 55 per cent, in Espoo about 35 per cent, and in Turku about 75 per cent of households.

    The figures are far from the Ministry's 95 per cent target.

    Source: http://www.tekniikkatalous.fi/talous_uutiset/suomen-valtio-aikoo-internetoperaattoriksi-jos-kaupalliset-yhtiot-eivat-tarjoa-laajakaistaa-3483485

    Reply
  37. Tomi Engdahl says:

    Shalini Ramachandran / Wall Street Journal:
    Comcast to create new unit to offer data services to Fortune 1000 businesses nationwide, including those outside its service area

    Comcast to Sell Data Services to Big Firms Nationwide
    Cable giant aims to offer alternative to telecom providers AT&T, Verizon
    http://www.wsj.com/article_email/comcast-to-sell-data-services-to-big-firms-nationwide-1442376240-lMyQjAxMTI1NDE4NjcxMDY3Wj

    Comcast Corp. said it would start selling Internet and phone services to large businesses nationwide, even those located outside its service area, as it seeks to steal away more customers from telecom providers like AT&T Inc. and Verizon Communications Inc.

    The cable giant unveiled plans Wednesday to create a new unit to offer data services to Fortune 1000 businesses across the country, including those located in other cable companies’ territories. Comcast said it has struck wholesale agreements with cable operators including Cox Communications Inc., Time Warner Cable Inc., Charter Communications Inc., Cablevision Systems Corp. and Mediacom Communications Corp., to offer services using their pipes.

    Comcast says it is seeking to bring together the cable industry to provide a meaningful alternative to AT&T and Verizon, the longtime incumbents in the market for selling network services to businesses.

    Because cable companies are regional by nature, they haven’t been able to offer one-stop-shop, nationwide offerings for big enterprises.

    Reply
  38. Tomi Engdahl says:

    Microsoft has developed its own Linux. Repeat. Microsoft has developed its own Linux
    Redmond reveals Azure Cloud Switch, its in-house software-defined networking OS
    http://www.theregister.co.uk/2015/09/18/microsoft_has_developed_its_own_linux_repeat_microsoft_has_developed_its_own_linux/

    Sitting down? Nothing in your mouth?

    Microsoft has developed its own Linux distribution. And Azure runs it to do networking.

    Redmond’s revealed that it’s built something called Azure Cloud Switch (ACS), describing it as “a cross-platform modular operating system for data center networking built on Linux” and “our foray into building our own software for running network devices like switches.”

    Kamala Subramanian, Redmond’s principal architect for Azure Networking, writes that: “At Microsoft, we believe there are many excellent switch hardware platforms available on the market, with healthy competition between many vendors driving innovation, speed increases, and cost reductions.”

    “However, what the cloud and enterprise networks find challenging is integrating the radically different software running on each different type of switch into a cloud-wide network management platform. Ideally, we would like all the benefits of the features we have implemented and the bugs we have fixed to stay with us, even as we ride the tide of newer switch hardware innovation.”

    (Translation: Software-defined networking (SDN) is a very fine idea.)

    But it appears Redmond couldn’t find SDN code to fits its particular needs, as it says ACS “… focuses on feature development based on Microsoft priorities” and “allows us to debug, fix, and test software bugs much faster. It also allows us the flexibility to scale down the software and develop features that are required for our datacenter and our networking needs.”

    ACS is designed to use the Switch Abstraction Interface (SAI), an OpenCompute effort that offers an API to program ASICs inside network devices.

    That experience clearly includes Linux, not Windows, as the path to SDN.

    Satya Nadella’s Microsoft is a very different animal, unafraid to use any technology if it gets the job done. But Microsoft building a Linux? Wow. Just wow.

    Reply
  39. Tomi Engdahl says:

    News & Analysis
    60 GHz Tested for 5G Backhaul
    http://www.eetimes.com/document.asp?doc_id=1327720&

    A €7.3 million program will explore options for communications infrastructure for 5G cellular networks. The 5G-XHaul Project, based in Bristol, UK, will test millimeter wave and fibre optic backhaul networks.

    5G-XHaul will use millimeter wave (mmW) modems from Blu Wireless along with fiber optic components in a trial network that aims for less than 1 millisecond latency and up to 10 Gbits/second of data by 2018. The existing wireless network can currently process about 1 Gbit/s.

    “Backhaul is a major pain point in [cellular] networks, so with 5G we want to make sure that it is done more efficiently,” said Mark Barrett, chief marketing officer for Blu Wireless.

    Bristol has many media companies – including a BBC site and Aardman Animation (of Wallace and Gromit fame) – which generate many terabytes of imaging data. “They have a problem, and have to shift that around between different development sites and…have demand for more broadband…The city also has a fiber optic network that wasn’t being heavily used,” said Barrett.

    Blu’s current offering is a 60 GHz wireless modem based on its Hydra PHY/MAC with a phased array antenna. Barrett said the modem could be put on a lamp post or used together with mesh networking. The test network will rely heavily on a software-defined network using the OpenFlow protocol as a control mechanism.

    “We see a lot of interest in mmW [for backhaul] from operators in Europe,” Barrett said. “One significant difference between the U.S. and Europe is in the regulatory regime; in Europe it’s much more difficult,” he said.

    Europe currently requires that antenna gain must exceed +30 dBi and conducted power must be under +10 dBm, EE Times Europe reported, and that can affect interference. Barrett said Blu Wireless developed a baseband DSP that uses a “hybrid software-hardware architecture based on parallel processor techniques” to reduce interference.
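    The regulatory figures quoted above translate into a hard cap on radiated power: EIRP in dBm is simply conducted power plus antenna gain, which forces 60 GHz backhaul into very directional, high-gain antennas:

    ```python
    # EIRP arithmetic for the quoted European 60 GHz limits.
    def eirp_dbm(conducted_dbm: float, gain_dbi: float) -> float:
        return conducted_dbm + gain_dbi

    def dbm_to_watts(dbm: float) -> float:
        return 10 ** (dbm / 10) / 1000

    eirp = eirp_dbm(10, 30)  # +10 dBm conducted power into a +30 dBi antenna
    print(f"{eirp} dBm = {dbm_to_watts(eirp)} W EIRP")  # → 40 dBm = 10.0 W
    ```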

    Field trials are already underway with Blu Wireless working with Huawei, Telefonica I+D, TES Electronic Solutions, the University of Bristol and others. The 5G-XHaul effort is part of the 5G Infrastructure Public Private Partnership (5G-PPP), a joint initiative between the European information and communications industry and the European Commission.

    Reply
  40. Tomi Engdahl says:

    T-Mobile Expands International Coverage to 20 More Countries
    http://www.pcmag.com/article2/0,2817,2491455,00.asp

    T-Mobile on Thursday added 20 new countries and destinations to its Simple Global feature.

    With the expansion, the program now offers subscribers unlimited data and texting, plus calls for $0.20 a minute in 145 countries and destinations, including all of Europe and South America. The expansion perhaps most notably includes the Bahamas, “where more than 2 million Americans travel each year,” T-Mobile said.

    “We’ve just made your traveling even easier in 20 more destinations around the world, expanding Simple Global to cover all of Europe and all of South America,” T-Mobile President and CEO John Legere said in a statement. “The carriers have made billions overcharging consumers who just want to stay connected overseas, and we’ve changed all that! Today, we made it even simpler to text, search or keep up on social media in a total of 145 countries and destinations, all at no extra cost!”

    T-Mobile said Simple Global now covers more than 90 percent of the trips Americans take abroad each year. The self-proclaimed “un-carrier” added that Simple Global has been one of its “most loved moves.”

    The Simple Global feature is available at no extra charge with a qualifying Simple Choice Plan.

    Reply
  41. Tomi Engdahl says:

    US govt: Why we’re OK with letting control of the internet slip into ICANN’s hands
    But suggests a little structure around its evaluation
    http://www.theregister.co.uk/2015/09/19/us_gao_green_lights_iana_transition/

    A highly anticipated report from the US Government Accountability Office (GAO) has given the green light to a shift of critical internet functions away from the government to domain overseer ICANN.

    The report [PDF], published Friday, provides a neutral and explanatory rundown of the decision by the Department of Commerce (DoC) to transition the IANA functions out of its control, as well as the subsequent process that has been followed to devise a replacement for the US government role.

    Significantly, it notes that all federal agencies are in line with the move and that there is significant support for the transition from other key groups such as the internet community and business.

    Is that it?

    The report will be a disappointment to some, who hoped that it would give sufficient ammunition to delay or disrupt the IANA transition process.

    Reply
  43. Tomi Engdahl says:

    WiGig Solution: Enabling Wire-free and Cable-free User Experiences
    http://www.eeweb.com/company-blog/socionext/wigig-solution-enabling-wire-and-cable-free-user-experiences/

    Socionext offers a next-generation super high-speed Wi-Fi 802.11ad module that can transfer up to 1.7 Gbps and is capable of 4K video streaming over Wi-Fi. It is the world's smallest such module and includes an RF chip, baseband chip and antenna, with support for the USB 3.0 interface. The module can be used in any application requiring huge amounts of data to be transferred wirelessly.

    This super high-speed Wi-Fi: 802.11ad allows devices to communicate wire- and cable-free at much faster speeds than today’s wireless rates. WiGig operates at 60GHz, a much higher frequency band and supports multi-gigabit data rate transfer.

    Uses for the new WiGig technology include instant wireless sync and backup between devices, and also streaming of UHD 4K video.

    Applications

    File exchange (movies & photos)
    Mobile – computer file synchronization
    Distribution of ads
    Content shopping

    http://www.eeweb.com/images/pdfs/WiGig_fact_sheet.pdf
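    What 1.7 Gbps means for the sync-and-backup use case above, assuming a hypothetical file size for illustration:

    ```python
    # Time to transfer a large video file over a 1.7 Gbps WiGig link.
    link_gbps = 1.7
    file_gb = 10                       # assumed size of a long 4K recording, gigabytes
    seconds = file_gb * 8 / link_gbps  # gigabytes -> gigabits, then divide by rate
    print(f"{seconds:.0f} s")          # → 47 s
    ```

    In practice real throughput sits below the PHY rate, but the order of magnitude (a minute rather than an hour) is the point.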

    Reply
  44. Tomi Engdahl says:

    Open Source Code May Unite IoT
    Networking project spawns IoT middleware
    http://www.eetimes.com/author.asp?section_id=36&doc_id=1327732&

    A high profile open source project working on software-defined networks has given birth to what could become an important standard for bringing unity to the fragmented Internet of Things.

    A robust middleware platform can unlock innovation and fulfill the promise of the Internet of Things. Such an approach is the IoT Data Management (IoTDM) project, an open source middleware solution recently started at the Linux Foundation under the auspices of the OpenDaylight project.

    OpenDaylight is the leading open source platform for software-defined networking (SDN). Its latest release is expected to be embedded in over 20 commercial products, and it is being embraced by other open source projects including the Open Platform for Network Function Virtualization (NFV) and OpenStack.

    The core OpenDaylight software allows computer networking applications to intelligently access and configure hardware network elements. Similarly, IoTDM provides a service layer that acts as an IoT data broker and enables authorized applications to post and retrieve IoT data uploaded by any device.

    IoTDM is compliant with the oneM2M effort which provides an architectural framework for connecting disparate devices via a common service layer where a given application can be associated with a dynamic network of users, devices and sensor data. The service layer allows users and operators to control, for example, how often a remote sensor captures data or to reconfigure devices with a needed security update. The oneM2M project is backed by more than 200 technology companies, standards bodies and government agencies.

    The IoTDM platform can be configured for the needs of various use cases. It can deliver only IoT data collection capabilities where it is deployed near IoT devices and its footprint needs to be small; or it can be configured to run as a large, distributed cluster with IoT, SDN and NFV functions enabled and deployed in a big data center.

    Reply
  45. Tomi Engdahl says:

    Multiport VNA improves test throughput
    http://www.edn.com/electronics-products/other/4440371/Multiport-VNA-improves-test-throughput?_mc=NL_EDN_EDT_EDN_productsandtools_20150921&cid=NL_EDN_EDT_EDN_productsandtools_20150921&elq=c33269e774bf4cdc87a78244c14edc43&elqCampaignId=24856&elqaid=28195&elqat=1&elqTrackId=d016fc5a9f3d4ab69f23d340b2fdefde

    Covering a frequency range of 1 MHz to 9 GHz, the M9485A PXIe vector network analyzer (VNA) from Keysight Technologies allows S-parameter measurements on 12 ports in one chassis and 24 ports in two chassis. According to the manufacturer, the multiport architecture of the M9485A achieves measurement speeds that are up to 30% faster than competing offerings, while maintaining high dynamic range.

    The M9485A is intended for high-volume wireless component manufacturing of front-end modules, switches, and filters used in mobile phones and base stations. With its multiport capability, all receivers synchronize with a common source to measure all S-parameters at once.
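    As a quick refresher on what such an analyzer reports: each S-parameter is a complex ratio, and production pass/fail limits are usually set on its log magnitude. The conversion below is generic S-parameter arithmetic, not anything specific to the M9485A, and the sample values are invented for illustration.

```python
import cmath
import math

def s_to_db(s):
    """Log magnitude of a complex S-parameter: 20*log10(|s|)."""
    return 20 * math.log10(abs(s))

# Invented passband point for a filter under test (not measured data):
s21 = 0.9 * cmath.exp(1j * math.radians(-30))   # transmission coefficient
s11 = 0.1 * cmath.exp(1j * math.radians(60))    # reflection coefficient

insertion_loss_db = -s_to_db(s21)   # ~0.92 dB of loss through the device
return_loss_db = -s_to_db(s11)      # 20 dB return loss (|s11| = 0.1)

print(round(insertion_loss_db, 2), round(return_loss_db, 1))
```

With 24 synchronized ports, the instrument fills in this kind of number for every port pair in a single sweep, which is where the throughput gain over switched single-source setups comes from.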

    Multichannel PXI switches perform fault insertion
    http://www.edn.com/electronics-products/other/4440385/Multichannel-PXI-switches-perform-fault-insertion?_mc=NL_EDN_EDT_EDN_productsandtools_20150921&cid=NL_EDN_EDT_EDN_productsandtools_20150921&elq=c33269e774bf4cdc87a78244c14edc43&elqCampaignId=24856&elqaid=28195&elqat=1&elqTrackId=9e03e7079b0348008c7d62c9fd49cdb9

    Pickering’s latest PXI switch modules, the 40-200 and 40-201, allow the introduction of fault connections for testing differential serial interfaces. The modules allow manufacturers to simulate communication failures and other interruptions to ensure the response of safety-critical communications systems used in automotive and aerospace environments.

    Reply
  46. Tomi Engdahl says:

    LTE modems come in small LGA packages
    http://www.edn.com/electronics-products/other/4440383/LTE-modems-come-in-small-LGA-packages?_mc=NL_EDN_EDT_EDN_productsandtools_20150921&cid=NL_EDN_EDT_EDN_productsandtools_20150921&elq=c33269e774bf4cdc87a78244c14edc43&elqCampaignId=24856&elqaid=28195&elqat=1&elqTrackId=4f0e3ecf972542c1ba477f55e1d9add4

    Swiss manufacturer u-blox offers low-data-rate cellular modules supporting LTE Cat 1 for IoT and M2M designs in the industrial and automotive markets. Targeting North American carriers, the TOBY-R201 and LARA-R200 are housed in very small LGA packages and are a good choice for carriers that are transitioning to LTE from 2G and 3G.

    The TOBY-R201 (LTE bands 2, 4, 13, 17 and HSPA bands 2, 5) is a multi-mode, multi-carrier LTE Cat 1 module with HSPA fallback for North America. Its 24.8 × 35.6 mm LGA package shares the same form factor as the TOBY-L2 Cat 4 module intended for applications requiring high data rates. First samples of the TOBY-R201 will be available in October 2015.

    The LARA-R200 (LTE bands 4, 13) and LARA-R202 (LTE bands 2, 4, 17) are LTE Cat 1 modules for the largest North American carriers.

    Reply
  47. Tomi Engdahl says:

    TIA issues standard for calibration of fiber-optic power meters
    http://www.cablinginstall.com/articles/2015/09/tia-fo-power-meters.html?cmpid=EnlCIMSeptember212015&eid=289644432&bid=1181894

    The Telecommunications Industry Association (TIA), which develops standards for the information and communications technology industry, has released a new document, TIA-455-231, FOTP-231 IEC 61315 – Calibration of Fibre-Optic Power Meters.

    The new international standard TIA-455-231 is applicable to instruments measuring radiant power emitted from sources which are typical for the fiber-optic communications industry. It also describes the calibration of power meters to be performed by calibration laboratories or by power meter manufacturers.

    Reply
  48. Tomi Engdahl says:

    Siemon expands ruggedized connectivity product line
    http://www.cablinginstall.com/articles/2015/09/siemon-expands-ruggedized.html

    Global network infrastructure specialist Siemon has announced an expansion of its Ruggedized Connectivity product line for harsh environments, including the company’s new ruggedized G2 LC fiber adapters; ruggedized Category 6 UTP patch cords; and DIN rail patch panels:

    – An addition to Siemon’s existing Ruggedized LC Fiber Solution, the new Gen2 (G2) Ruggedized LC adapters combine the premium performance of Siemon’s LC fiber connectivity with durable, proven IP66/IP67 ruggedized shells to provide a best-in-class fiber connectivity solution for harsh environments, asserts the company.

    – Siemon’s new Ruggedized Category 6 UTP patch cords are constructed using a flame-retardant thermoplastic elastomer (TPE) outer jacket over a polyvinyl chloride (PVC) inner jacket that allows for a 60% greater temperature range than standard commercial-grade cords. They are available with a Ruggedized IP66/IP67 plug on one end and either a Ruggedized IP66/IP67 or standard modular RJ45 plug on the other end.

    Combined with a -40 to 75°C (-40 to 167°F) temperature range, high flex construction, oil resistant jacket and an indoor/outdoor rating, these cords are ideal for providing end-to-end category 6 channel performance in harsh environments such as outdoor kiosks and security cameras.

    – Ideal for mounting inside control cabinets, equipment enclosures or onto other standard 35 mm DIN rails, Siemon’s new DIN rail patch panels offer quick and easy patching for fiber or copper-based Industrial Ethernet applications

    Reply
  49. Tomi Engdahl says:

    Over 4 billion people will go without internet access this year
    http://www.engadget.com/2015/09/21/un-broadband-report-2015/

    The tech industry likes to talk a lot about a connected world, but just how many people are online, really? Most of them aren’t, unfortunately. The United Nations’ Broadband Commission has released a 2015 report which estimates that 57 percent of the human population (about 4.2 billion people) won’t have regular internet access by the end of 2015. Not surprisingly, the likelihood that you’ll have access is highly dependent on your economic and social opportunities. Over 80 percent of people in fully developed countries currently have connections, but that number plummets to 6.7 percent in the poorest nations; gender inequality only makes it worse.

    Free or low-cost internet efforts from companies like Facebook and Google might help, but the UN believes that the real solution is much more comprehensive.

    The good news? The situation should get better. About 60 percent of the world should have access by 2021, helped by a big spike in mobile internet use — the number of mobile data subscriptions should come close to matching those of regular cellphone subscriptions by 2020.
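    The report's headline figures are easy to sanity-check: if 4.2 billion offline people correspond to 57 percent of humanity, the implied world population and online count follow directly. A back-of-envelope check, using only the numbers quoted above:

```python
# Figures quoted from the UN Broadband Commission's 2015 report:
offline = 4.2e9        # people without regular internet access
offline_share = 0.57   # their share of the world population

# Implied totals (rounded to two decimals, in billions):
population = offline / offline_share   # ~7.37 billion people
online = population - offline          # ~3.17 billion online

print(round(population / 1e9, 2), round(online / 1e9, 2))
```

Both implied totals line up with commonly cited mid-2015 estimates, so the report's percentages and absolute numbers are internally consistent.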

    The State of Broadband 2015
    http://www.broadbandcommission.org/Documents/reports/bb-annualreport2015.pdf

    Reply
  50. Tomi Engdahl says:

    Cisco shocker: Some network switches may ELECTROCUTE you
    Buzz, buzz: RTFM, people
    http://www.theregister.co.uk/2015/09/22/cisco_switch_screw_problem/

    Oh dear: Cisco is warning that screws in a couple of its compact Catalyst switches may be poking into wires carrying live voltages.

    In this field note, the Borg says the problem occurs when WS-C3560CX or WS-C2960CX switches are installed without a mounting tray – for example, screwed to a desk, shelf, or wall.

    Screws not installed to the correct depth, “coupled with appreciable force in order to mount the switch, might cause the insulator to be punctured, which exposes a voltage circuit,” the note states.

    How can you tell if the switch was installed correctly? Cisco helpfully notes that one good test is “the switch currently works.”

    That’s because the screws are earthed to the case. If you’re installing the switch and over-tightening the screws with power turned on, the screw might touch the voltage circuit which “might result in an electrical shock for a very short time period before the unit fails.”

    Reply
