Electronics trends for 2016

Here is my list of electronics industry trends and predictions for 2016:

There was a huge set of mega-mergers announced in the electronics industry in 2015. In 2016 we will see fewer mergers and find out how well the existing ones went. Not all of the major acquisitions will succeed. Probably the biggest challenge in these mega-mergers is “creating merging cultures or–better yet–creating new ones”.

Makers and open hardware will boost innovation in 2016. Open source has worked well in the software community, and it is increasingly coming to the hardware side. Maker culture encourages people to be creators of technology rather than just consumers of it. A combination of the maker movement and robotics is preparing children for a future in which innovation and creativity will be more important than ever: robotics is an effective way for children as young as four years old to get experience in the STEM fields of science, technology, engineering and mathematics as well as programming and computer science. The maker movement is inspiring children to tinker-to-learn. Popular DIY electronics platforms include Arduino, Lego Mindstorms, Raspberry Pi, Phiro and LittleBits. Some of those DIY electronics platforms, like Arduino and Raspberry Pi, are finding their way into commercial products, for example in 3D printing, industrial automation and Internet of Things applications.
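
To give a flavour of how low the barrier to entry has become, here is a minimal first-project sketch of the kind these platforms make possible: blinking an LED from a Raspberry Pi with the RPi.GPIO Python library. The pin number and timing are arbitrary choices for illustration, not taken from any particular tutorial.

    # Minimal Raspberry Pi LED blinker. Assumes RPi.GPIO is installed and an
    # LED with a series resistor is wired between BCM pin 17 and ground.
    import time
    import RPi.GPIO as GPIO

    LED_PIN = 17                            # BCM numbering; any free GPIO works

    GPIO.setmode(GPIO.BCM)                  # use Broadcom pin numbering
    GPIO.setup(LED_PIN, GPIO.OUT)

    try:
        while True:
            GPIO.output(LED_PIN, GPIO.HIGH)     # LED on
            time.sleep(0.5)
            GPIO.output(LED_PIN, GPIO.LOW)      # LED off
            time.sleep(0.5)
    finally:
        GPIO.cleanup()                      # release the pin on exit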

Open source processor cores gain more traction in 2016. RISC-V is on the march as an open source alternative to ARM and MIPS. Fifteen sponsors, including a handful of high-tech giants, are queuing up to be the first members of the new RISC-V trade group. Currently RISC-V runs Linux and NetBSD, but not Android, Windows or any major embedded RTOSes. Support for other operating systems is expected in 2016. For other open source processor designs, take a look at OpenCores.org, the world’s largest site/community for development of hardware IP cores as open source.

GaN will be more widely used and talked about in 2016. Gallium nitride (GaN) is a binary III/V direct bandgap semiconductor commonly used in bright light-emitting diodes since the 1990s. It has special properties for applications in optoelectronic, high-power and high-frequency devices. You will see more GaN power electronics components because GaN – in comparison to the best silicon alternative – will enable higher power density through the ability to switch at high frequencies. You can get GaN devices for example from GaN Systems, Infineon, Macom, and Texas Instruments. The emergence of GaN as the next leap forward in power transistors gives new life to Moore’s Law in power.
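
A back-of-the-envelope illustration of why higher switching frequency translates into higher power density: in a simple buck converter the inductance needed for a given current ripple shrinks in direct proportion to the switching frequency, so a GaN stage switching at megahertz rates gets by with a far smaller inductor. The numbers below are arbitrary example values of mine, not vendor data.

    # Required buck-converter inductance vs. switching frequency
    # (illustrative values: 12 V in, 5 V out, 30% ripple of a 10 A load).
    V_IN, V_OUT = 12.0, 5.0
    RIPPLE_A = 0.3 * 10.0                   # allowed peak-to-peak inductor ripple

    def required_inductance(f_sw_hz):
        duty = V_OUT / V_IN
        return V_OUT * (1 - duty) / (RIPPLE_A * f_sw_hz)

    for f_sw in (100e3, 500e3, 2e6, 10e6):  # silicon-class vs. GaN-class frequencies
        print(f"{f_sw/1e6:5.1f} MHz -> {required_inductance(f_sw)*1e6:6.2f} uH")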

Power electronics is becoming more digital and connected in 2016. Software-defined power addresses a critical need in modern power systems. Digital power was the beginning of software-defined power, using a microcontroller or a DSP; software-defined power takes this to another level. Connectivity is the key to success for software-defined power, and the PMBus will enable efficient communication and connection between all power devices in computer systems. It seems that power architectures are becoming software defined, which will take advantage of digital power adaptability and introduce software control to manage the power continuously as operating conditions change. For example, adaptive voltage scaling (AVS) is supported by the AVSBus, which is part of the newest PMBus standard V1.3. The use of power-optimization software algorithms and the concept of the Software Defined Power Architecture (SDPA) are all being seen as part of a brave new future for advanced board-power management.
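
As a concrete taste of what that connectivity looks like, here is a minimal sketch of reading PMBus output-voltage telemetry from a Linux host over SMBus/I2C using the smbus2 Python library. The device address is hypothetical, and a real design would also handle PEC, fault status and the LINEAR11 format used by most non-voltage commands.

    # Read PMBus READ_VOUT (LINEAR16 format) from a power converter.
    from smbus2 import SMBus

    PMBUS_ADDR    = 0x40      # hypothetical device address on the bus
    CMD_VOUT_MODE = 0x20      # PMBus VOUT_MODE command
    CMD_READ_VOUT = 0x8B      # PMBus READ_VOUT command

    def read_vout(bus):
        mode = bus.read_byte_data(PMBUS_ADDR, CMD_VOUT_MODE)
        exp = mode & 0x1F                   # 5-bit two's-complement exponent
        if exp > 0x0F:
            exp -= 0x20
        mantissa = bus.read_word_data(PMBUS_ADDR, CMD_READ_VOUT)
        return mantissa * (2.0 ** exp)      # volts

    with SMBus(1) as bus:                   # I2C bus 1 on many Linux boards
        print(f"VOUT = {read_vout(bus):.3f} V")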

Nanowires and new forms of memory like RRAM (resistive random access memory) and spintronics are also being researched, and could help scale down chips. Many “exotic” memory technologies are in the lab, and some are even in shipping product: Ferroelectric RAM (FRAM), Resistive RAM (ReRAM), Magnetoresistive RAM (MRAM), Nano-RAM (NRAM).

Nanotube research has been ongoing since 1991, but it has been a long road to a practical nanotube transistor. It seems that we almost have the necessary pieces of the puzzle in 2016. In 2015 IBM reported a successful auto-alignment method for placing nanotubes across the source and drain. Texas Instruments is now capable of growing wafer-scale graphene, and the Chinese have taken the lead in developing both graphene and nanotubes, according to Lux Research.

While nanotubes provide the fastest channel material available today, III-V materials like gallium arsenide (GaAs) and indium gallium arsenide (InGaAs) are all being explored by IBM, Intel, Imec and Samsung as transistor channels on silicon substrates. Dozens of researchers worldwide are experimenting with black phosphorus as an alternative to nanotubes and graphene for the next generation of semiconductors. Black phosphorus has the advantage of having a bandgap and works well alongside silicon photonics devices. Molybdenum disulphide (MoS2) is also a contender for the next generation of semiconductors, due to its novel stacking properties.

Graphene has many fantastic properties and there have been new findings around it. I think it would be a good idea to follow developments around magnetized graphene: researchers have made graphene magnetic, clearing the way for faster everything. I don’t expect practical products in 2016, but maybe something in the next few years.

Optical communications is finally integrating deep into chips. There are many new contenders on the horizon for the true “next generation” of optical communications, with promising technologies in development in labs and research departments around the world. Silicon photonics is the study and application of photonic systems which use silicon as an optical medium. Silicon photonic devices can be made using existing semiconductor fabrication. We now start to have the technology to build optoelectronic microprocessors using existing chip manufacturing: engineers have demonstrated the first processor that uses light for ultrafast communications. Optical communication could also potentially reduce chips’ power consumption on inter-chip links and enable longer very fast links between ICs where needed. Two-dimensional (2D) transition metal dichalcogenides (TMDCs) may enable engineers to exceed the properties of silicon in terms of energy efficiency and speed, moving researchers toward 2D on-chip optoelectronics for high-performance applications in optical communications and computing. To build practical systems with those ICs, we need to figure out how to make fiber-to-chip coupling easy or how to manufacture practical optical printed circuit boards (O-PCB).

Look at developments in self-directed assembly. Researchers from the National Institute of Standards and Technology (NIST) and IBM have discovered a trenching capability that could be harnessed for building devices through self-directed assembly. The capability could potentially be used to integrate lasers, sensors, waveguides and other optical components into so-called “lab-on-a-chip” devices.

Smaller chip geometries come to the mainstream in 2016. Chip advancements and cost savings slowed down with the current 14-nanometer process, which Intel uses to make its latest PC, server and mobile chips. Other manufacturers are catching up to 14 nm and beyond. GlobalFoundries starts producing a central processing chip as well as a graphics processing chip using 14nm technology. After a lapse, Intel looks to catch up with Moore’s Law again with upcoming 10-nanometer and 7-nm processes. Samsung revealed that it will soon begin production of a 10nm FinFET node, and that the chip will be in full production by the end of 2016. This is expected to be at around the same time as rival TSMC, whose 10nm process will require triple patterning. For mass-market products it seems that the 10nm node is still at least a year away. Intel delayed plans for 10nm processors while TSMC is stepping on the gas, hoping to attract business from the likes of Apple. The first Intel 10-nm chips, code-named Cannonlake, will ship in 2017.

Looks like Moore’s Law has some life in it yet, though for IBM creating a 7nm chip required exotic techniques and materials. In 2015 IBM Research showed a 7nm chip that will hold 20 billion transistors, manufactured by perfecting EUV lithography and using silicon-germanium channels for its finned field-effect transistors (FinFETs). Intel has also revealed that the end of the road for silicon is nearing, as alternative materials will be required for the 7nm node and beyond. Scaling silicon transistors down has become increasingly difficult and expensive, and at around 7nm it will prove to be downright impossible. IBM development partner Samsung is in a race to catch up with Intel by 2018, when the first 7nm products are expected. Expect silicon alternatives coming by 2020. One very promising short-term silicon alternative is III-V semiconductors based on two compounds: indium gallium arsenide (InGaAs) and indium phosphide (InP). Intel’s future mobile chips may have some components based on gallium nitride (GaN), which is also an exotic III-V material.

Silicon and traditional technologies continue to be pushed forward successfully in 2016. It seems that the extension of 193nm immersion to 7nm and beyond is possible, yet it would require octuple patterning and other steps that would increase production costs. IBM Research earlier this year beat Intel to the 7nm node by perfecting EUV lithography and using silicon-germanium channels for its finned field-effect transistors (FinFETs). Taiwan Semiconductor Manufacturing Co. (TSMC), the world’s largest foundry, said it has started work on a 5nm process to push ahead its most advanced technology. TSMC’s initial development work at 5nm may be yet another indication that EUV has been set back as an eventual replacement for immersion lithography.

It seems that 2016 could be the year for mass adoption of 3D ICs and 3D memory. For over a decade, the terms 3D IC and 3D memory have been used to refer to various technologies. 2016 could see some real advances and traction in the field, as some truly 3D products are already shipping and more are promised to come soon. The most popular 3D category is 3D NAND flash memory: Samsung, Toshiba, SanDisk, Intel and Micron have all announced or started shipping flash that uses a 3D silicon structure (we are currently seeing 128Gb-384Gb parts). Micron’s Hybrid Memory Cube (HMC) uses stacked DRAM die and through-silicon vias (TSVs) to create a high-bandwidth RAM subsystem with an abstracted interface (think DRAM with PCIe). Intel and Micron have announced production of a 3D crosspoint architecture high-endurance (1,000× NAND flash) nonvolatile memory.

The success of Apple’s portable computers, smartphones and tablets means that the company will buy as much as 25 per cent of the world production of mobile DRAM in 2016. In 2015 Apple bought 16.5 per cent of mobile DRAM.

After the COP21 climate change summit reached a deal in Paris, environmental compliance will become a stronger business driver in 2016. Increasingly, electronics OEMs are realizing that environmental compliance goes beyond being a good corporate citizen. On the agenda for these businesses: climate change, water safety, waste management, and environmental compliance. Keep in mind environmental compliance requirements that include the Waste Electrical and Electronic Equipment (WEEE) directive, the Restriction of Hazardous Substances Directive 2002/95/EC (RoHS 1), and Registration, Evaluation, Authorization and Restriction of Chemicals (REACH). It’s a legal situation: if you do not comply with the regulatory aspects of business, you are out of business. Some companies are leading the parade toward environmental compliance, while others are learning as they go.

Connectivity is proliferating in everything from cars to homes, realigning diverse markets. It needs to be done easily for the user, reliably, efficiently and securely. It is being reported that communications technologies are responsible for about 2-4% of the carbon footprint generated by human activity. The need for communications and faster speeds keeps increasing in this ever more connected world: with the penetration of smart devices there was a tremendous increase in the amount of mobile data traffic from 2010 to 2014. Wi-Fi has become so ubiquitous in homes in so many parts of the world that you can now really start tapping into it with additional devices. IoT is forecast to reach 50 billion connections by 2020, and with current technologies this would increase power consumption considerably. The coming explosion of the Internet of Things (IoT) will also need more efficient data centers, which will be taxed to their limits.

The Internet of Things (IoT) is enabling increased automation on the factory floor and throughout the supply chain, 3D printing is changing how we think about making components, and the cloud and big data are enabling new applications that provide an end-to-end view from the factory floor to the retail store. With all of these technological options converging, it will be hard for CIOs, IT executives, and manufacturing leaders to keep up. IoT will also be hard for R&D. Internet of Things (IoT) designs mesh together several design domains in order to successfully develop a product. Individually, these design domains are challenging; bringing them all together to create an IoT product can place extreme pressure on design teams. It’s still pretty darn tedious to get all these things connected, and there are all these standards battles coming on. The rise of the Internet of Things and Web services is driving new design principles, as Web services from companies such as Amazon, Facebook and Uber are setting new standards for user experiences. Designers should think about building their products so they can learn more about their users and be flexible in creating new ways to satisfy them – but in such a way that the users don’t feel they are being spied on.

Subthreshold transistors and MCUs will be hot in 2016, because the Internet of Things will be hot in 2016 and it needs very low power chips. The technology is not new (cheap digital watches use FETs operating in the subthreshold region), but for decades digital designers have ignored this operating region, because FETs are hard to characterize there. Now subthreshold has invaded the embedded space thanks to Ambiq’s new Apollo MCU. PsiKick Inc. has designed a proof-of-concept wireless sensor node system-chip using conventional EDA tools and a 130nm mixed-signal CMOS process that operates at sub-threshold voltages, opening up the prospect of self-powering Internet of Things (IoT) systems. I expect other sub-threshold designs to emerge as well. ARM Holdings plc (Cambridge, England) is also working on sub- and near-threshold operation of ICs. TSMC has developed a series of processes characterized down to near-threshold voltages (the ULP family of ultra-low-power processes). Intel will focus on its IoT strategy and next-generation low voltage mobile processors.
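
Why subthreshold operation is both attractive and hard to design for can be seen from the textbook first-order model of the subthreshold drain current, which falls exponentially with gate voltage (roughly 60-100 mV per decade at room temperature). The sketch below just evaluates that expression with assumed parameters; it is not data for any real process.

    # First-order subthreshold drain-current model (illustrative parameters only).
    import math

    K_B, Q = 1.380649e-23, 1.602176634e-19
    VT = K_B * 300.0 / Q                    # thermal voltage at 300 K, ~25.9 mV
    N_SLOPE = 1.5                           # subthreshold slope factor (assumed)
    I0 = 1e-6                               # drain current at V_GS = V_th (assumed)
    V_TH = 0.45                             # threshold voltage (assumed)

    def i_d(v_gs):
        return I0 * math.exp((v_gs - V_TH) / (N_SLOPE * VT))

    for v_gs in (0.45, 0.35, 0.25, 0.15):
        print(f"V_GS = {v_gs:.2f} V -> I_D = {i_d(v_gs):.2e} A")
    # Each 0.1 V step cuts the current (and hence switching speed) by more than
    # an order of magnitude, which is why characterization and margining are hard.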

FPGAs in various forms are coming into wider use in 2016 in many applications. They are no longer limited to high-end aerospace, defense, and high-end industrial applications. There are different ways people use FPGAs. The barrier of entry to FPGA development has lowered so that even home makers can easily use FPGAs with cheap FPGA development boards, free tools and open IP cores. There was already lots of interest in 2015 in using FPGAs for accelerating computations as the next step after GPUs. Intel bought Altera in 2015 and plans to begin selling products with a Xeon chip and an Altera FPGA in a single package, possibly available in early 2016. Examples of applications that are well suited to ARM-based FPGAs include industrial robots, pumps for medical devices, electric motor controllers, imaging systems, and machine vision systems. Examples of ARM-based FPGAs are Xilinx’s Zynq-7000 and Altera’s Cyclone V families, which intertwine an ARM processor system with FPGA fabric. Some Internet of Things (IoT) applications could start to test ARM-based field programmable gate array (FPGA) technology, enabling the hardware to be adaptable to market and consumer demands – software updates on such systems become hardware updates. Other potential benefits would be design re-use, code portability, and security.

The trend towards module consolidation is applicable in many industries as the complexity of communication, data rates, data exchanges and networks increases. Consolidating ECUs in vehicles has already been a big trend for several years, but the concept is applicable to many markets including medical, industrial and aerospace.

It seems that AXIe nears the tipping point in 2016. AXIe is a modular instrument standard similar to PXI in many respects, but utilizing a larger board format that allows higher power instruments and greater rack density. It relies chiefly on the same PCI Express fabric for data communication as PXI. AXIe-1 is the uber-high-end modular standard and there is also a compatible AXIe-0 that aims at being a low cost alternative. The popular measurement standards AXIe, IVI, LXI, PXI, and VXI have two things in common: they each serve the test and measurement industry, and each of them is ruled by a private consortium. Why is this? Right or wrong, it comes down to speed of execution.

These days, a hardware emulator is a stylish, sleek box with fewer cables to manage. The “Big Three” EDA vendors offer hardware emulators in their product portfolios, each with a distinct architecture to give development teams more options. For some offerings emulation has become a datacenter resource through a transaction-based emulation mode or acceleration mode.

LED lighting is expected to become more intelligent, more beautiful and more affordable in 2016. Everyone agrees that the market for LED lighting will continue to enjoy dramatic year-on-year growth for at least the next few years. The LED lighting market is forecast to reach US$30.5 billion in 2016, and professional lighting markets are expected to see explosive growth. Some companies will win on this growth, but there will also be losers. Due to currency fluctuations and a price slide in 2015, end-market demand in different countries has been much lower than expected, so smaller LED companies are facing financial loss pressures. Look at the history of the solar industry to get a good sense of some of the challenges the LED industry will face; a next bankruptcy wave in the LED industry is possible. The LED incandescent replacement bulb market represents only a portion of a much larger market but, in many ways, it is the cutting edge of the industry, currently dealing with many of the challenges other market segments will have to face a few years from now. IoT features are coming to LED lighting, but it seems that one can only hope for interoperability.

Other electronics trends articles to look at:

Hot technologies: Looking ahead to 2016 (EDN)

CES Unveiled NY: What consumer electronics will 2016 bring?

Analysts Predict CES 2016 Trends

LEDinside: Top 10 LED Market Trends in 2016

961 Comments

  1. Tomi Engdahl says:

    Smart FPGA Debugging Tools Reduce Validation Times
    http://www.eetimes.com/author.asp?section_id=36&doc_id=1329879&

    Designers who focus more heavily on an FPGA’s debug capabilities before they select their next device can reduce development cycles and costs while significantly speeding time to market.

    FPGA and SoC designers face many challenges to get their product into production. Typically, they start by evaluating the appropriate device for their design; then they implement the hardware description language (HDL) design, fit the device, and — finally — debug the entire FPGA before it can be brought into production.

    The reality today is that, for many designs, particularly those in the industrial and embedded markets, any number of FPGAs will work. In most cases, the decision as to which FPGA vendor to choose comes down to experience with the development software.

    Debug basics — logic analyzers
    Every major FPGA vendor offers a logic analyzer as a debug tool. This is a technique that uses internal FPGA logic elements and embedded block memory to implement functions. A designer can specify which signals to monitor and set up a trigger to tell the logic analyzer when to start capturing data.

    It is important to note that, because these signals need to be sampled, they are not capturing the real-time performance of the data.

    Next-generation debug tools
    Because of the limitations of logic analyzers for debugging, new debug tools have been designed to speed up FPGA and board validation. Some EDA vendors are offering tools that integrate logic analyzer functionality into the synthesis tool to shorten the iteration of bug finding. When the logic analyzer is integrated with the synthesis tool, this allows technology views into the design and easier trigger setup. A designer can also make design changes and have them automatically mapped back to the register transfer level (RTL) code.

    FPGA designers typically spend about 30 percent or more of their time in debug. This percentage could go even higher depending on the size and state of the project. Debugging can be painful because it involves many iterative cycles with limited observability and controllability, frequent re-runs of place-and-route, timing closure, and re-programming. By leveraging smart debugging tools, engineers can validate their FPGA designs much faster than just using a traditional inserted logic analyzer.

    Enhanced debug capabilities are a game-changing development for FPGA designers.

    Reply
  2. Tomi Engdahl says:

    The distribution sector is seeing its first major acquisition in a long time: the Swiss Dätwyler is buying the British distributor Premier Farnell. The purchase price is 615 million pounds, or about 775 million euros.
    The boards of both companies have approved the deal.

    The transaction creates a technical components distributor with 4,900 employees that sells components, provides engineering services and stocks more than a million products.

    The Dätwyler Group wants to grow into a three-billion-euro business by 2020.

    In Finland, Dätwyler is primarily known as the owner of ELFA.

    Source: http://etn.fi/index.php?option=com_content&view=article&id=4591:raspberry-pi-korttien-valmistaja-myytiin&catid=13&Itemid=101

    Reply
  3. Tomi Engdahl says:

    Dutch NXP is selling its standard products business (11,000 employees) to a Chinese consortium in a deal totalling $2.75 billion.
    The buyers are JAC Capital (Beijing Jianguang Asset Management Company) and Wise Road Capital.
    NXP’s standard products include discrete components, simple logic circuits and power MOSFETs.

    With the transaction, NXP shrinks by about a fifth: the standard products business has had an annual turnover of approximately $1.2 billion.

    Source: http://etn.fi/index.php?option=com_content&view=article&id=4592:nxp-luopuu-standardipiireista&catid=13&Itemid=101

    Reply
  4. Tomi Engdahl says:

    FD SOI Benefits Rise at 14nm
    http://www.eetimes.com/author.asp?section_id=36&doc_id=1329887&

    Consultant Handel Jones makes the case that companies should move rapidly to 14nm fully depleted silicon-on-insulator (FD SOI) to take advantage of the benefits of this technology.

    The semiconductor and electronics industries are adapting effectively to the increase in gate costs with scaling below 28nm.

    The migration to 5nm will require the adoption of EUV lithography. While EUV will reduce the number of multiple patterning steps and yield losses from overlay problems, wafer processing costs will increase, which will likely result in increased gate cost. The semiconductor industry can either accept the existing technology roadmaps and try to increase systemic and parametric yields or it can evaluate other options.

    The foundry market at 180nm is still in high-volume production. The 300mm wafer volume at 28nm will be above 150K WPM for the next 10 to 15 years. Consequently, new process technology options can have a lifetime of 20 to 30 years.

    Another technology option other than FinFET is FD SOI. The analysis on the capabilities of FD SOI indicates that its performance and power consumption are comparable to or superior to those of FinFET. While FinFET structures provide benefits for digital designs, there are cost and technical disadvantages in using FinFET structure for high-frequency and analog-centric mixed-signal designs. Applications such as IoT and Wi-Fi combo chips can best be done with FD SOI compared to other technology options.

    The analysis shows the wafer cost of 14nm FD SOI is 7.3% lower than that of 16/14nm FinFETs. The most important factor in the lower cost is the smaller number of masking steps for FD SOI.

    The assumption is that these two technologies have comparable chip sizes, but 14nm FD SOI has 10% higher yield than 16/14nm FinFETs. The gate cost for 14nm FD SOI is 16.6% lower
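
    As a sanity check of those numbers (my own back-of-the-envelope arithmetic, not from the article): with comparable die sizes, cost per good die scales roughly as wafer cost divided by yield, which lands close to the quoted gate-cost advantage; the remaining gap presumably comes from factors the analysis does not break out here.

        # Rough cost-per-good-die ratio implied by the quoted figures.
        wafer_cost_ratio = 1.0 - 0.073      # FD SOI wafer cost vs. FinFET (7.3% lower)
        yield_ratio = 1.10                  # FD SOI yield vs. FinFET (10% higher)
        cost_ratio = wafer_cost_ratio / yield_ratio
        print(f"FD SOI cost per good die = {cost_ratio:.3f} of FinFET "
              f"({(1 - cost_ratio) * 100:.1f}% lower)")      # about 15.7% lower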

    Reply
  5. Tomi Engdahl says:

    Big Data Meets Chip Design
    As the volume of data grows, companies are looking at what else they can do with that data.
    http://semiengineering.com/big-data-meets-chip-design/

    The amount of data being handled in chip design is growing significantly at each new node, prompting chipmakers to begin using some of the same concepts, technologies and algorithms used in data centers at companies such as Google, Facebook and GE.

    While the total data sizes in chip design are still relatively small compared with cloud operations—terabytes per year versus petabytes and exabytes—it’s too much to sort through using existing equipment and approaches.

    “You can take many big data approaches to handle this, but there may be a business problem if you do,” said Leon Stok, vice president of EDA at IBM. He said EDA doesn’t have the kind of concentrated volume necessary to drive these kinds of techniques, and typically that problem is made worse because the data is often different between design and manufacturing.

    Reply
  6. Tomi Engdahl says:

    The Future Of Memory
    Experts at the table, part 1: DDR5 spec being defined; new SRAM under development.
    http://semiengineering.com/the-future-of-memory/

    SE: We’re seeing a number of new entrants in the memory market. What are the problems they’re trying to address, and is this good for chip design?

    Greenberg: The memory market is fracturing into High-Bandwidth Memory (HBM), HMC, and even flash on memory bus. DRAM has been around for many years. The others will be less predictable because they’re new.

    Minwell: The challenge is bandwidth. The existing memory interface technologies don’t give us the bandwidth that we need. Along with that, with additional power we’re having to go into stacking. That’s being driven by high-bandwidth memory. But there’s also a need to have embedded SRAM on chip in large enough quantities so there is low latency.

    Reply
  7. Tomi Engdahl says:

    CPU, GPU, or FPGA?
    http://semiengineering.com/cpu-gpu-or-fpga/

    Need a low-power device design? What type of processor should you choose?

    There are advantages to each type of compute engine. CPUs offer high capacity at low latency. GPUs have the highest per-pin bandwidth. And FPGAs are designed to be very general.

    But each also has its limitations. CPUs require more integration at advanced process nodes. GPUs are limited by the amount of memory that can be put on a chip.

    “FPGAs can attach to the same kind of memories as CPUs,” said Steven Woo, vice president of enterprise solutions technology and distinguished inventor at Rambus. “It’s a very flexible kind of chip. For a specific application or acceleration, they can provide improved performance and better [energy] efficiency.”

    Intel Corp.’s $16.7 billion acquisition of Altera, completed late last year, points to the flexible computing acceleration that FPGAs can offer. Microsoft employed FPGAs to improve the performance of its Bing search engine because of the balance between cost and power. But using FPGAs to design a low-power, high-performance device isn’t easy.

    “It’s harder and harder to get one-size-fits-all,” Woo said. “Some design teams start with an FPGA, then turn it into an ASIC to get a hardened version of the logic they put into an FPGA. They start with an FPGA to see if that market grows. That could justify the cost of developing an ASIC.”

    “It’s a cost-performance-power balance,” Woo said. “CPUs are really good mainstays, very flexible.” When it comes to the software programs running on them, “it doesn’t have to be vectorized code.”

    GPUs are much better graphical interfaces. They are more targeted than general-purpose CPUs. And FPGAs straddle multiple markets.

    Reprogrammable and reconfigurable FPGAs can be outfitted for a variety of algorithms, “without going through the pain of designing an ASIC.”

    Programmability, but not everywhere
    FPGAs fall into a middle area between CPUs and GPUs. That makes them suitable for industrial, medical, and military devices, where they have thrived. But even there the lines are beginning to blur.

    “The choice is between low volume, high value,” he notes. “Off-the-shelf silicon is more general purpose than you can want or afford.”

    Rowen adds, “For many of these applications, there are any number of application-specific products, this cellphone app processor or that cellphone app processor.”

    So should designers choose a CPU, GPU, or FPGA? “The right answer, in many cases, is none of the above – it’s an ASSP,” Rowen said. “You need a hybrid or an aggregate chip.”

    The industry is accustomed to integration at the board level, according to Rowen. “Board-level integration is certainly a necessity in some cases,” he said. The downside of that choice is “relatively high cost, high power [consumption].”

    So, what will it be: CPU, GPU, FPGA, ASSP, ASIC? The best answer remains: It depends.

    Reply
  8. Tomi Engdahl says:

    Near-Threshold Computing
    http://semiengineering.com/near-threshold-computing-2/

    A lot of changes had to come together to make near-threshold computing a technology that was accessible to the industry without taking on huge risk.

    The emergence of the Internet of Things (IoT) has brought a lot of attention to the need for extremely low-power design, and this in turn has increased the pressure for voltage reduction. In the past, each new process node shrunk the feature size and lowered the nominal operating voltage. This resulted in a drop in power consumption.

    However, the situation changed at about 90nm in two ways. First, nominal voltage scaling started to flatten, and thus switching power stopped scaling. Second, leakage current became a lot more significant, and for the smaller nodes even became dominant. Both of these made it difficult to continue any significant power reduction for a given amount of computation without incorporating increasing amounts of logic designed to manage and reduce power.

    The introduction of the finFET at 16nm has improved both operating voltage and leakage but there are significantly increased costs with this node, meaning it is not amenable to designs intended for a low-cost market such as the IoT. There do not appear to be any signs that foundries will implement finFETs on older process nodes, though, so other avenues have to be investigated in order to get the necessary reduction in power.

    Power Consumption is a quadratic function of voltage, normally stated as power is proportional to CV²f. As the voltage drops, you get a significant power savings at the expense of performance. Because many IoT designs have a long duty cycle, this is often an acceptable tradeoff, but there are further complications. Total power is a combination of static or leakage power and dynamic power

    the optimal combination of leakage and switching power has to be found
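
    A toy model (with made-up numbers, not from the article) makes that tradeoff concrete: dynamic energy per operation falls as CV², but operations slow down near threshold, so leakage is integrated over a longer time and total energy per operation bottoms out somewhere above V_t.

        # Toy energy-per-operation model: dynamic CV^2 term plus leakage
        # integrated over a delay that grows as V approaches threshold.
        C_SW = 1e-12            # switched capacitance per operation (assumed, farads)
        I_LEAK = 1e-5           # leakage current of the always-on logic (assumed, amps)
        V_T = 0.30              # threshold voltage (assumed, volts)
        K_DLY = 5.9e-10         # delay constant chosen so delay is ~1 ns at 1.0 V

        def energy_per_op(v):
            delay = K_DLY / (v - V_T) ** 1.5            # alpha-power-law style delay
            return C_SW * v * v + v * I_LEAK * delay    # dynamic + leakage energy

        for v in (1.0, 0.8, 0.6, 0.5, 0.45, 0.4, 0.35):
            print(f"V = {v:.2f} V -> {energy_per_op(v) * 1e15:7.0f} fJ/op")
        # The minimum falls around 0.4 V here, well below nominal but above V_T.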

    The second problem is that the process libraries, as released by the foundries, are designed to operate at some nominal voltage and they may not guarantee operation of those devices any lower than 20% below the nominal operating voltage. Beyond that, variation plays a larger role, thus making the design process a lot more difficult.

    Another problem has been memory. While logic can be scaled without too much difficulty, memories, and SRAM in particular, require higher voltages for read write operations to be reliable even though they can be dropped to lower voltages for retention. This has made near-threshold computing difficult.

    A final area is that EDA tools have not been optimized for this type of design

    There is no design today that could be done without a heavy dependence on EDA tools, and all of these tools rely on models. “Having an accurate transistor model is the key starting point for all tools,”

    All designs have to deal with a certain amount of manufacturing variability, but operating near the threshold voltage amplifies many of these effects.

    Hingarh says that because of this variability, “designers must think very carefully about variation-tolerance in their circuits.” The alternative is to add larger margins, but this removes some of the gain of going towards the threshold voltage as it unnecessarily increases the leakage component. “These timing errors become more frequent in smaller gate lengths such as below 40nm CMOS, where process variations are high, but 65nm or 130nm CMOS technology can be used more successfully. In addition, FD-SOI or CMOS technology with back bias capability can help because back biasing can be used to reduce Vt variation.”

    While it may be the IoT that is spurring development, there are other potential markets that could benefit from this technology. Vinod Viswanath, R&D director at Real Intent, pointed to neuromorphic computing. “The term ‘neuromorphic’ is used to describe mixed analog/digital VLSI systems that implement computational models of real neural systems. These systems directly exploit the physics of silicon and CMOS to implement the physical processes that underlie neural computation.”

    Reply
  9. Tomi Engdahl says:

    Reliability and MTBF: We think we know what we mean, but do we?
    http://www.edn.com/design/power-management/4442188/Reliability-and-MTBF—We-think-we-know-what-we-mean-but-do-we-?_mc=NL_EDN_EDT_EDN_today_20160616&cid=NL_EDN_EDT_EDN_today_20160616&elqTrackId=f4636356f4ce4443a3db38bd4cfd8928&elq=c5a535eafa744991bb6c5281e0276f65&elqaid=32702&elqat=1&elqCampaignId=28566

    Reliability is the probability that an individual unit of a product, operating under specified conditions, will work correctly for a specified period of time.

    This naturally leads us to thinking about when the product stops working, i.e., when it fails for whatever reason. Product failures can occur at any time but they are not totally random. This is why, if you measure the individual lifetime for a large enough sample of products, you will typically get the classic “bathtub” result when plotting failure rate against time. The reason for this is that products experience early life “infant mortality” and they also wear out as they age.

    Taking the inverse of the failure rate, 1/λ, we also get the mean time to failure (MTTF) or the slightly less correct but more commonly used term MTBF (mean time between failures).

    A product whose intrinsic failure rate is 1 in a million (i.e. 10⁻⁶) failures per hour has, by definition, an MTBF of one million hours. However the probability of it lasting 1 million hours (i.e. x=1 on the graph) is just 36.7%, which pretty much scotches any false assumption that MTBF equates to expected life. Indeed, further inspection of the graph shows that the probability of surviving more than 500,000 hours is only just over 60%, while a more respectable 90% reliability figure only equates to 100,000 hours.
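
    The quoted percentages follow directly from the constant-failure-rate model R(t) = exp(−λt); here is a quick sanity check (my own illustration, not from the EDN article):

        # Survival probability under a constant failure rate: R(t) = exp(-t / MTBF)
        import math

        MTBF_HOURS = 1e6                    # corresponds to 1e-6 failures per hour
        for hours in (1e6, 5e5, 1e5):
            r = math.exp(-hours / MTBF_HOURS)
            print(f"P(survive {hours:>9,.0f} h) = {r * 100:.1f}%")
        # -> about 36.8%, 60.7% and 90.5%, matching the figures above to rounding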

    All this serves to emphasize the importance of treating data such as MTBF with caution.

    Reply
  10. Tomi Engdahl says:

    Digital isolators lower component count
    http://www.edn.com/electronics-products/other/4442158/Digital-isolators-lower-component-count?_mc=NL_EDN_EDT_EDN_today_20160613&cid=NL_EDN_EDT_EDN_today_20160613&elqTrackId=23297872f46c42b7b26371707918088f&elq=521a53d5a260474eb12671d5746e90dc&elqaid=32645&elqat=1&elqCampaignId=28511

    Maxim’s MAX14933 and MAX14937 bidirectional isolators transfer digital signals between circuits with robust galvanic isolation between two power domains, while using fewer components and saving board space. The two-channel, open-drain devices provide a maximum bidirectional data rate of 3.4 Mbps and can replace optocouplers in many applications, offering the same isolation capabilities and consuming less power.

    Reply
  11. Tomi Engdahl says:

    Single-channel power supply monitor with remote temperature sense, Part 1
    http://www.edn.com/design/power-management/4442154/Single-channel-power-supply-monitor-with-remote-temperature-sense–Part-1?_mc=NL_EDN_EDT_EDN_today_20160613&cid=NL_EDN_EDT_EDN_today_20160613&elqTrackId=083c5f32848a4cec9a97ea2ab7869f03&elq=521a53d5a260474eb12671d5746e90dc&elqaid=32645&elqat=1&elqCampaignId=28511

    The LTC2970 is a programmable device, containing registers to control the IDAC current, and to report the ADC readings, but the external temperature calculation must be done in software
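
    For reference, this is roughly what that software calculation looks like for a remote diode sense: force two known currents through the external sense transistor, measure the two base-emitter voltages, and solve the diode equation for absolute temperature. This is a generic delta-V_BE sketch with an assumed ideality factor, not code lifted from the LTC2970 documentation.

        # Generic remote-diode temperature calculation from a delta-V_BE measurement.
        import math

        K_B, Q = 1.380649e-23, 1.602176634e-19
        N_IDEALITY = 1.004                  # diode ideality factor (assumed)

        def remote_temp_c(v_be_low, v_be_high, current_ratio):
            """Temperature from V_BE measured at two forced currents."""
            delta_vbe = v_be_high - v_be_low
            t_kelvin = Q * delta_vbe / (N_IDEALITY * K_B * math.log(current_ratio))
            return t_kelvin - 273.15

        # Example: a 10:1 current ratio and a 59.4 mV delta reads about 25 C.
        print(f"{remote_temp_c(0.5500, 0.6094, 10.0):.1f} C")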

    Reply
  12. Tomi Engdahl says:

    Graphene Optical Boom Emits Light with No Diode
    http://hackaday.com/2016/06/19/graphene-optical-boom-emits-light-with-no-diode/

    When a supersonic aircraft goes faster than the speed of sound, it produces a shockwave or sonic boom. MIT researchers have found a similar optical effect in graphene that causes an optical boom and could provide a new way to convert electricity into light.

    The light emission occurs due to two odd properties of graphene: first, light gets trapped on the surface of graphene, effectively slowing it down. In addition, electrons pass through at very high speeds.

    Graphene Provides a New Way to Turn Electricity Into Light
    http://scitechdaily.com/graphene-provides-a-new-way-to-turn-electricity-into-light/

    By slowing down light to a speed slower than flowing electrons, scientists have developed a new way to turn electricity into light.

    When an airplane begins to move faster than the speed of sound, it creates a shockwave that produces a well-known “boom” of sound. Now, researchers at MIT and elsewhere have discovered a similar process in a sheet of graphene, in which a flow of electric current can, under certain circumstances, exceed the speed of slowed-down light and produce a kind of optical “boom”: an intense, focused beam of light.

    This entirely new way of converting electricity into visible radiation is highly controllable, fast, and efficient, the researchers say, and could lead to a wide variety of new applications. The work is reported in the journal Nature Communications,

    Efficient plasmonic emission by the quantum Čerenkov effect from hot carriers in graphene
    http://www.nature.com/ncomms/2016/160613/ncomms11880/full/ncomms11880.html

    Reply
  13. Tomi Engdahl says:

    Printed Sensors to Steal Spotlight at Sensors Expo
    http://www.eetimes.com/author.asp?section_id=36&doc_id=1329937&
    Low-cost printed/flexible/stretchable sensors are the real stars of next week’s Sensor Expo.

    Printed sensors, paste-on transducers that read temperatures and moisture states from the surface of your skin, will get new scrutiny at this month’s Sensors Expo Conference (June 21-23 at the McEnery Convention Center San Jose, CA).

    Sensors are electrical transducers that interact with the environment to help us examine and evaluate everything from human cancer cells to jet engine pressures. Sensors must be considered part of the IoT network and are included in the architecture and power budget of microcontrollers, memory, and IoT module packaging.

    New-generation PFS (printed/flexible/stretchable) sensors offer new levels of sensitivity, mobility and manufacturability, says consultant Roger Grace (president of Roger Grace Associates), who has organized and will chair a day-long symposium on the subject, June 21, a day before Sensors Expo’s formal opening.

    “Flexible Electronics” refers to devices that can be bent, folded, stretched or conformed — without losing their functionality — says Grace. PFS sensors are estimated to be $6 billion of the $340 billion flexible electronics market by 2030.”

    Parameters that can be measured with PFS sensors include

    Chemical/gas composition
    Touch/force/pressure
    Temperature
    Humidity
    Bio-medical data
    Airflow
    Imaging
    Conductive electrodes

    But the special value of PFS sensors, insists Grace, is that they can be manufactured at very low cost. The challenge for PFS developers, he reminds, is “integration,” coupling this wearable technology with other sensors, with intelligence, and with burgeoning Internet of Things (IoT).

    Reply
  14. Tomi Engdahl says:

    Use VUnits for assertions & functional coverage
    http://www.edn.com/electronics-blogs/day-in-the-life-of-a-chip-designer/4442168/Use-VUnits-for-assertions—functional-coverage?_mc=NL_EDN_EDT_EDN_today_20160615&cid=NL_EDN_EDT_EDN_today_20160615&elqTrackId=91b19f60b6c1499d977d78e295b919c7&elq=3de2f41eeb354d4897926f7f757f0c1a&elqaid=32681&elqat=1&elqCampaignId=28550

    Assertions have been key contributors in increasing confidence in the accuracy of the design & quality of verification since coding effective coverage is fundamental in ensuring the completeness of verification. In a typical verification environment, we restrict our access mostly to the input/output ports of the modules for coding assertions. With increasing complexity in SoC designs, the hierarchies of modules have increased considerably. As a result, it is an increasingly tedious and error-prone task to write assertions and functional coverage on signals which are deep within the hierarchy, or a part of the module which is instantiated multiple times.

    This article discusses an alternative and effective way to code assertions and functional coverage and overcome the illustrated challenges.

    Reply
  15. Tomi Engdahl says:

    Fix poor capacitor, inductor, and DC/DC impedance measurements
    http://www.edn.com/electronics-blogs/impedance-measurement-rescues/4442173/Fix-poor-capacitor–inductor–and-DC-DC-impedance-measurements?_mc=NL_EDN_EDT_EDN_today_20160615&cid=NL_EDN_EDT_EDN_today_20160615&elqTrackId=2c2bfe56c60949b69f7745b0b9c8ca03&elq=3de2f41eeb354d4897926f7f757f0c1a&elqaid=32681&elqat=1&elqCampaignId=28550

    When designing or optimizing a VRM (voltage Regulation Module), we need its output impedance data and impedance data for the filter inductors and capacitors for us to have complete simulation models. Unfortunately, vendor data on these components is often incomplete, erroneous, or difficult to decipher in terms of the setup involved to make the measurement. Thus, we often need to collect the data ourselves.

    The measurements need to be performed over the entire frequency range of interest, typically from a few kiloHertz to about 1 GHz, depending on the application. Because of this very wide frequency range, we generally turn to S-parameter based measurements. High performance simulators can directly incorporate the S-parameter component measurements in AC, DC, transient, and harmonic balance simulations while including the finite element PCB models.

    While extremely useful, standard S-parameter measurements frequently aren’t sufficient. What’s really needed is an extended range, that is, a partial S2p measurement.

    Reply
  16. Tomi Engdahl says:

    Use VUnits for assertions & functional coverage
    http://www.edn.com/electronics-blogs/day-in-the-life-of-a-chip-designer/4442168/Use-VUnits-for-assertions—functional-coverage?_mc=NL_EDN_EDT_EDN_today_20160615&cid=NL_EDN_EDT_EDN_today_20160615&elqTrackId=91b19f60b6c1499d977d78e295b919c7&elq=3de2f41eeb354d4897926f7f757f0c1a&elqaid=32681&elqat=1&elqCampaignId=28550

    Assertions have been key contributors in increasing confidence in the accuracy of the design & quality of verification since coding effective coverage is fundamental in ensuring the completeness of verification. In a typical verification environment, we restrict our access mostly to the input/output ports of the modules for coding assertions. With increasing complexity in SoC designs, the hierarchies of modules have increased considerably. As a result, it is an increasingly tedious and error-prone task to write assertions and functional coverage on signals which are deep within the hierarchy, or a part of the module which is instantiated multiple times.

    This article discusses an alternative and effective way to code assertions and functional coverage and overcome the illustrated challenges. It shows the usage of VUnits as a replacement of the conventional way of coding module-based assertions and linking it with the design through ports.

    Reply
  17. Tomi Engdahl says:

    Distributed fiber-optic sensors market forecast to 2025
    http://www.cablinginstall.com/articles/pt/2016/06/distributed-fiber-optic-sensors-market-forecast-to-2025.html?cmpid=Enl_CIM_CablingNews_June202016&eid=289644432&bid=1437580

    As optical networks are used for transmitting of voice and data signals around the world, these networks require perpetual monitoring so as to ensure proper transmission of signal along the fibers. Fiber-optic sensors are quite immune to electromagnetic interference, and being a poor conductor of electricity they can be used in places where there is flammable material such as jet fuel or high voltage electricity.

    Fiber-optic sensors can be designed to withstand high temperatures as well. Most physical properties can be sensed optically with fiber-optic sensors. Temperature, light intensity, displacement, pressure, rotation, strain, sound, magnetic field, electric field, chemical analysis, radiation, flow, liquid level and vibration are just some of the phenomena that can be detected via these sensors.

    Reply
  18. Tomi Engdahl says:

    FCC to Investigate Raised RF Noise Floor
    http://hackaday.com/2016/06/21/fcc-to-investigate-raised-rf-noise-floor/

    If you stand outside on a clear night, can you see the Milky Way? If you live too close to a conurbation the chances are all you’ll see are a few of the brighter stars, the full picture is only seen by those who live in isolated places. The problem is light pollution, scattered light from street lighting and other sources hiding the stars.

    The view of the Milky Way is a good analogy for the state of the radio spectrum. If you turn on a radio receiver and tune to a spot between stations, you’ll find a huge amount more noise in areas of human habitation than you will if you do the same thing in the middle of the countryside. The RF noise emitted by a significant amount of cheaper modern electronics is blanketing the airwaves and is in danger of rendering some frequencies unusable.

    If you have ever designed a piece of electronics to comply with regulations for sale you might now point out that the requirements for RF interference imposed by codes from the FCC, CE mark etc. are very stringent, and therefore this should not be a significant problem. The unfortunate truth is though that a huge amount of equipment is finding its way into the hands of consumers which may bear an FCC logo or a CE mark but which has plainly had its bill-of-materials cost cut to the point at which its compliance with those rules is only notional. Next to the computer on which this is being written for example is a digital TV box from a well-known online retailer which has all the appropriate marks, but blankets tens of megahertz of spectrum with RF when it is in operation. It’s not faulty but badly designed, and if you pause to imagine hundreds or thousands of such devices across your city you may begin to see the scale of the problem.

    This situation has prompted the FCC Technological Advisory Council to investigate any changes to the radio noise floor to determine the scale of the problem.

    FCC Technological Advisory Council Initiates Noise Floor Inquiry
    http://www.arrl.org/news/fcc-technological-advisory-council-initiates-noise-floor-inquiry

    Reply
  19. Tomi Engdahl says:

    Researchers Develop Reliable Rechargable Zinc Battery
    Redesigned battery doesn’t short, targets grid
    http://www.eetimes.com/document.asp?doc_id=1329950&

    Researchers from Stanford University and Toyota have developed a novel battery design for grid-scale energy storage. The battery has electrodes made of zinc and nickel, inexpensive metals that are available commercially. Few zinc batteries are reliably rechargeable, however.

    Tiny fibers called dendrites that form on the zinc electrode during charging can eventually reach the nickel electrode, causing the battery to short circuit and fail. Researchers solved the dendrite problem by redesigning the battery.

    Reply
  20. Tomi Engdahl says:

    Rambus Prototypes 2x2mm Lens-Less Eye Tracker for Headmount Displays
    http://www.eetimes.com/document.asp?doc_id=1329956&

    At last week’s VLSI Symposia, Rambus presented a poster titled “Lensless Smart Sensors: Optical and Thermal Sensing for the Internet of Things” in which the company not only detailed the underlying technology but also demonstrated a working sensor prototype.

    The Lensless Smart Sensors (LSS) rely on a phase anti-symmetric diffraction grating (either tuned for optical or IR thermal sensing) mounted directly on top of a conventional imaging array and co-designed with computational algorithms that extract the relevant information from the scene to be imaged. The grating is very thin and boasts a wide field of view, up to 120º, and the resulting imaging sensor is almost flat (only a few hundred micrometres separate the grating from the image sensor).

    The raw sensed image is encoded by the grating structure, calling for dedicated reconstruction algorithms and image processing, but in some applications such as range-finding or eye-tracking, it may not even be necessary to reconstruct a full image. Instead, extracting distance measurements may suffice and the particular phase anti-symmetric diffraction structure makes it very simple, explains the poster.

    Reply
  21. Tomi Engdahl says:

    Software Defined Power
    http://www.cui.com/software-defined-power?utm_source=iContact&utm_medium=email&utm_campaign=*Customer%20Newsletter&utm_content=CUI+Power+Update%3A+June+2016

    Connectivity provides the catalyst for transforming power supplies from isolated islands into elements of a true power ecosystem at the heart of the Software Defined Power architecture. CUI’s PMBus™ compatible front-end ac-dc power supplies, advanced bus converters and digital POLs provide a critical enabler for designers building Software Defined Power architectures for datacom, storage, networking, and telecommunications applications.

    Reply
  22. Tomi Engdahl says:

    USB oscilloscopes add waveform generators
    http://www.edn.com/electronics-products/electronic-product-reviews/other/4442233/USB-oscilloscopes-add-waveform-generators?_mc=NL_EDN_EDT_EDN_productsandtools_20160620&cid=NL_EDN_EDT_EDN_productsandtools_20160620&elqTrackId=1be0a98e2655437eb13b580a89935709&elq=55b87f4312d646308ca4793840e26f7f&elqaid=32748&elqat=1&elqCampaignId=28604

    It’s not uncommon to see USB and bench oscilloscopes that include an AWG (arbitrary waveform generator). Generally, oscilloscopes will use either a traditional sampling architecture or a DDS (direct-digital synthesis) architecture. TiePie Engineering, a manufacturer of USB oscilloscopes, claims to have something slightly different.

    TiePie Engineering’s latest model, HS5-540, uses what the company calls CDS (Constant Data Size) architecture. It’s closer to a traditional sampling AWG than to a DDS architecture. But, it differs in that its clock can change frequency to change sampling rate up to 240 Msamples/s.

    “Our clock can be set at any frequency and can be changed with very small steps and has very little jitter, making it possible to generate very accurate and stable signals.” Jitter is specified as less than 50 ps rms.

    http://www.tiepie.com/en/products/Oscilloscopes/Handyscope_HS5
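
    To put that jitter figure in perspective, the standard aperture-jitter limit on SNR is -20*log10(2*pi*f_in*t_jitter); this quick calculation is my own, not something TiePie publishes:

        # Jitter-limited SNR for a few example input frequencies.
        import math

        T_JITTER = 50e-12                   # 50 ps rms, from the spec quoted above
        for f_in in (1e6, 10e6, 50e6):
            snr = -20 * math.log10(2 * math.pi * f_in * T_JITTER)
            print(f"f_in = {f_in/1e6:5.1f} MHz -> jitter-limited SNR = {snr:.1f} dB")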

    Reply
  23. Tomi Engdahl says:

    Wake up and listen: Vesper quiescent-sensing MEMS device innovation
    http://www.edn.com/electronics-products/electronic-product-reviews/other/4442252/-Wake-up-and-listen–Vesper-Quiescent-Sensing-MEMS-Device-innovation?_mc=NL_EDN_EDT_EDN_analog_20160623&cid=NL_EDN_EDT_EDN_analog_20160623&elqTrackId=d9d2a455ebe643cdb9a17e3a84e57f91&elq=631c228ccb254eca8cd79952b15c7565&elqaid=32798&elqat=1&elqCampaignId=28647

    I have a strong belief that the most natural and efficient way to communicate with devices, in the coming of the Internet of Things (IoT), is the human voice. The primary element for this effort to be successful is the microphone and the primary features needed in such a system are low power, small size, rugged construction, and excellent signal-to-noise capability.

    Designers, take heart! Your solution is here from Vesper, a privately held piezoelectric MEMS company

    Vesper has now demonstrated the first commercially available quiescent-sensing MEMS device, providing designers the possibility of acoustic event-detection devices at virtually zero power draw at just 3 µA of current while in listening mode. This piezoelectric MEMS microphone — VM1010 — will allow designers to advance voice and acoustic event monitoring in their systems.

    Matt Crowley, Vesper CEO told me that this quiescent-sensing MEMS microphone is the only device that uses sound energy itself to wake a system from full power-down. It is known that even when fully powered-off, batteries in smartphones and smart speakers naturally dissipate 40-80uA, which is far more current than this device needs.

    Even in sleep mode, this microphone preserves its very high signal-to-noise ratio (SNR)

    This microphone employs a rugged piezoelectric transducer that is immune to dust, water, oils, humidity, particles and other environmental contaminants, making it ideal for deployments outdoors or in kitchens and automobiles.

    http://vespermems.com/

    Vesper uses piezoelectric materials to create the most reliable and advanced MEMS microphones on the market. This is a major leap over the capacitive MEMS microphones that have dominated the market for over a decade.

    Reply
  24. Tomi Engdahl says:

    A checklist for designing RF-sampling receivers
    http://www.edn.com/design/analog/4442261/A-checklist-for-designing-RF-sampling-receivers?_mc=NL_EDN_EDT_EDN_analog_20160623&cid=NL_EDN_EDT_EDN_analog_20160623&elqTrackId=8bce205bad954d1a924d0f7c40062b31&elq=631c228ccb254eca8cd79952b15c7565&elqaid=32798&elqat=1&elqCampaignId=28647

    Thomas Neu - June 22, 2016

    The modern, advanced CMOS direct radio frequency (RF)-sampling data converter has been eagerly awaited by system design engineers for several major end-equipment manufacturers. This includes manufacturers of communications infrastructure, software-defined radios (SDRs), radar systems, or test and measurement products. Recently introduced data converters are delivering the high dynamic range comparable to high-performance intermediate frequency (IF)-sampling data converters. Additionally, these converters integrate on-chip digital filtering (DDC), which reduces the output data rate from 3-4 GSPS sampling rate to something more manageable similar to traditional IF-sampling data converters.

    Two major factors are driving the quick adoption of these ultra-high-speed data converters. The ever increasing demand for wider bandwidth naturally requires faster sampling rates, while higher density and integration is accomplished by removing one down conversion stage from the receiver, for example. Modern SDRs or cellular base stations need to be able to cover multiple frequency bands simultaneously, for example, to support carrier aggregation across multiple licensed Long-Term Evolution (LTE) bands to enable faster data traffic. Rather than expending one radio-per-band system, designers want to shrink the product form factor and build a multiband- capable radio. The RF sampling data converter removes the intermediate frequency (IF) stage saving printed circuit board (PCB) area and power consumption, while its wide Nyquist zone enables sampling multiple bands simultaneously.

    System designers who are considering switching from IF- to RF-sampling need to solve four primary challenges on their checklist:

    Receiver sensitivity
    Radio performance in presence of in-band interferer
    Filter requirements for out-of-band blocker
    Performance of the sampling clock source

    One basic performance metric of the receiver is its sensitivity: the weakest signal power that it can still successfully recover and process.
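
    As a quick refresher (my own sketch, not taken from the article), sensitivity is commonly estimated from the thermal noise floor, the channel bandwidth, the receiver noise figure and the SNR the demodulator needs:

    import math

    def sensitivity_dbm(bandwidth_hz, noise_figure_db, required_snr_db):
        """Estimate receiver sensitivity; -174 dBm/Hz is the room-temperature
        thermal noise power density."""
        noise_floor_dbm = -174.0 + 10.0 * math.log10(bandwidth_hz)
        return noise_floor_dbm + noise_figure_db + required_snr_db

    # Example: 20 MHz LTE channel, 5 dB receiver NF, 0 dB required SNR
    print(f"{sensitivity_dbm(20e6, 5.0, 0.0):.1f} dBm")   # about -96 dBm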

    In-band blocking performance

    Sometimes interferers manage to get within the front-end filter passband. The receiver in-band blocking performance is a measure of how well the receiver can demodulate weak signals in the presence of such an in-band interferer. The automatic gain control (AGC) of the receiver ensures that the interferer power level stays below the ADC input full scale to avoid saturation.

    Independent of architecture, the ADC input must be protected from large, out-of-band interferers, because they would either alias in-band and exceed the ADC full scale, saturating the receiver, or generate harmonics that overlap with a small, wanted in-band signal.

    Intermediate frequency-sampling systems have a relatively small Nyquist zone; therefore, the alias bands and mixing images are fairly close by.
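
    To see why the alias bands sit so close by when the Nyquist zone is small, the usual folding calculation can be sketched as follows (a generic illustration, not from the article; the example frequencies are my own assumptions):

    def nyquist_zone(f_in_hz, fs_hz):
        """1-based Nyquist zone that the input frequency falls into."""
        return int(f_in_hz // (fs_hz / 2.0)) + 1

    def alias_frequency(f_in_hz, fs_hz):
        """Fold an input frequency back into the first Nyquist zone (0..fs/2)."""
        f = f_in_hz % fs_hz
        return fs_hz - f if f > fs_hz / 2.0 else f

    # A 1842 MHz carrier on a 3 GSPS RF-sampling ADC: wide Nyquist zone
    print(nyquist_zone(1842e6, 3e9), alias_frequency(1842e6, 3e9) / 1e6)              # zone 2, 1158.0 MHz
    # A 184.2 MHz IF sampled at 245.76 MSPS: small Nyquist zone, alias nearby
    print(nyquist_zone(184.2e6, 245.76e6), alias_frequency(184.2e6, 245.76e6) / 1e6)  # zone 2, 61.56 MHz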

    Summary

    The availability of high-dynamic-range RF-sampling converters, such as the ADC32RF45, enables direct RF-sampling receiver implementations for a wide range of applications. When transitioning from a traditional heterodyne design to direct RF conversion, the designer should not have to compromise on radio performance. However, attention still needs to be given to the four major design challenges listed above.

    Reply
  25. Tomi Engdahl says:

    High Temperature SiC Power MOSFETs
    https://www.eeweb.com/news/high-temperature-sic-power-mosfets

    TT Electronics introduced a Silicon Carbide (SiC) power MOSFET designed for high-temperature, power-efficiency applications, with a maximum junction temperature of +225°C. As a result of this operating potential, the package has a higher ambient-temperature capability and can therefore be used in applications with greater environmental challenges, including distribution control systems located close to a combustion engine.

    Supplied in a high-power-dissipation, low-thermal-resistance, fully hermetic ceramic SMD1 package, the 25 A, 650 V rated SML25SCM650N2B also offers faster switching and lower switching losses than comparable Si MOSFETs. Consequently, the size of the passive components in the circuit can be reduced, bringing weight and space savings.

    Reply
  26. Tomi Engdahl says:

    Next Challenge: Contact Resistance
    http://semiengineering.com/next-device-challenge-contact-resistance/

    New materials and tools are needed to solve an issue no one worried about in the past.

    In chip scaling, there is no shortage of challenges. Scaling the finFET transistor and the interconnects are the biggest challenges for current and future devices. But now, there is another part of the device that’s becoming an issue—the contact.

    Typically, the contact doesn’t get that much attention, but the industry is beginning to worry about the resistance in the contacts, or contact resistance, in leading-edge chip designs.

    The contact is a tiny and distinct structure that connects the transistor with the first layer of copper interconnects in a device. A chip can have billions of contacts. For example, Apple’s latest A9 application processor consists of 2 billion transistors and 10 to 15 metal layers. Based on a 16nm/14nm finFET process, the A9 also has an astounding 6 billion tiny contacts to those transistors.

    The problem is that the contacts are becoming smaller at each node, which in turn is leading to unwanted contact resistance in devices. Resistance represents the difficulty of a current passing through a conductor.
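
    The scaling pressure follows directly from the simple relation R ≈ ρc / A, the specific contact resistivity divided by the contact area: halving the contact dimension roughly quadruples the resistance. A small sketch with illustrative numbers (my own assumptions, not figures from the article):

    def contact_resistance_ohms(rho_c_ohm_cm2, side_nm):
        """R = rho_c / A for a square contact with the given side length."""
        area_cm2 = (side_nm * 1e-7) ** 2      # 1 nm = 1e-7 cm
        return rho_c_ohm_cm2 / area_cm2

    RHO_C = 1e-8   # ohm*cm^2, an assumed specific contact resistivity
    for side_nm in (30, 20, 14):              # hypothetical contact sizes
        print(f"{side_nm} nm contact: {contact_resistance_ohms(RHO_C, side_nm):.0f} ohm")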

    Reply
  27. Tomi Engdahl says:

    Consolidation Shuffles Automotive IC Vendor Rankings
    http://www.eetimes.com/document.asp?doc_id=1329974&

    Growth in the automotive semiconductor market slowed in 2015 amid currency fluctuations and rampant consolidation that shuffled vendor rankings, according to market research firm IHS Technology Inc.

    The automotive semiconductor market grew 0.2% last year, reaching $29 billion, IHS (Englewood, Colo.) said.

    As reported, NXP Semiconductors NV catapulted to the top of automotive chip suppliers in terms of sales based on its $11.8 billion acquisition of Freescale Semiconductor Inc. NXP’s automotive semiconductor revenue grew by 124% to nearly $4.2 billion, good enough to garner nearly 14.4% of the market, according to IHS.

    Reply
  28. Tomi Engdahl says:

    Zero Power Listen-Mode Sensor Debuts
    Acoustic sensor uses sound energy to wake up a system
    http://www.eetimes.com/document.asp?doc_id=1329967&

    What if your always-on listening device could be activated simply by voice or noise, without even pressing a button? Better yet, what if the microphone inside the device that does the switching draws practically zero power? An acoustic “event-detection” device like this — with battery life that spans not months but years — could open up a myriad of new applications both for consumer electronics and a host of industries.

    The thing that makes this scenario possible is “an acoustic sensor that uses sound energy to wake a system from full power-down mode,” explained Matt Crowley, CEO, Vesper.

    So, how has Vesper outsmarted the experts?

    The secret is piezoelectric technology, said Crowley. “I think others weren’t necessarily looking at that.”

    Reply
  29. Tomi Engdahl says:

    System Scaling Key to Semiconductor Progress
    http://www.eetimes.com/author.asp?section_id=36&doc_id=1329977&

    System scaling technologies exist but can’t go mainstream until IC designers and manufacturing experts cooperate.

    Advanced semiconductor nodes below 10nm promise tens of billions of gates and memory bits, yet the current strategy of transistor scaling has reached its practical limits for most applications. Only the largest companies are likely to invest in SoCs at 7nm or 5nm where high-volume applications like today’s smartphones offer the potential payback to justify the risks and high development costs in the range of hundreds of millions of dollars.

    System scaling may be the answer. While it is not a new concept, it is taking center stage as the market looks for integration alternatives to traditional transistor scaling. It offers another path to pursuing Moore’s Law by moving the integration focus from the transistor to the system.

    System scaling — often referred to as multi-die IC — encompasses the concept of integrating complex systems at the functional/building block level as opposed to the transistor level.

    Reply
  30. Tomi Engdahl says:

    Near-field scanners let you see EMI
    http://www.edn.com/electronics-blogs/the-emc-blog/4442275/Near-field-scanners-let-you-see-EMI?_mc=NL_EDN_EDT_pcbdesigncenter_20160627&cid=NL_EDN_EDT_pcbdesigncenter_20160627&elqTrackId=fe884364228f495fb87f6ae0c94f763d&elq=0b90576f585040f89084413cbceb4ec0&elqaid=32830&elqat=1&elqCampaignId=28672

    Arturo Mediano - June 23, 2016

    I love near-field probes because they let me “see” magnetic and electric fields with an oscilloscope or with a spectrum analyzer. They let me locate sources of emissions in boards, cables, and systems. Near-field scanners also let you see emissions over an entire board, which is hard to do with a single probe.

    There are several EMI/EMC scanners on the market today, such as those from EMSCAN, DETECTUS, and API, among others. A scanner is essentially a series of near-field probes placed in a grid, so it can produce an image of a board’s emissions that is more consistent and repeatable than you can get by manually scanning a board with probes.

    The EMxpert scanner from EMSCAN is one such scanner.
    The scanner consists of thousands of loops spaced so that it provides resolution of less than 1 mm. Frequency range goes from 50 kHz to 8 GHz, depending on the model. The loop antennas are sensitive down to -135 dBm and a high-speed electronic switching system provides real-time analysis in less than 1 s.

    EMI scanners let you quickly analyze and compare design iterations and optimize hardware design. I use them for troubleshooting and for teaching.

    A spectral scan lets us identify signals from the board, which often come from oscillators and clocks. Signals may be parasitic oscillations or ringing, which are difficult to prevent. With the spectral scan, we can measure any signal from the board.

    With a spectral scan and spatial scan, you can identify the current path for that signal, critical information if you want to minimize EMI/EMC and SI problems.

    A typical question when reviewing a product is: “Did you use a decoupling capacitor?” Usually, the response is something like: “Of course, I have a 100 nF capacitor.” That’s not enough.

    Sometimes, you have a capacitor (or decoupling circuit) in your system but there is no effective decoupling because terminal impedances don’t match the topology you have chosen, the capacitor technology/value isn’t correct, or parasitic effects in the layout and package are dominant. With a near field scan, you can detect how your decoupling system is really working.
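
    One way to see why “I have a 100 nF capacitor” is not a sufficient answer is to include the parasitics. The sketch below (my own, with assumed ESR and mounted-ESL values) shows that above the self-resonant frequency the part stops behaving like a capacitor at all:

    import math

    def cap_impedance_ohm(f_hz, c_farad, esr_ohm, esl_henry):
        """Impedance magnitude of a real capacitor: ESR and ESL in series with C."""
        w = 2.0 * math.pi * f_hz
        return math.sqrt(esr_ohm**2 + (w * esl_henry - 1.0 / (w * c_farad))**2)

    C, ESR, ESL = 100e-9, 0.05, 1.5e-9   # 100 nF, assumed 50 mohm ESR, 1.5 nH mounted ESL
    for f in (1e6, 13e6, 100e6, 500e6):  # 13 MHz is roughly the self-resonant frequency here
        print(f"{f/1e6:6.0f} MHz: {cap_impedance_ohm(f, C, ESR, ESL):6.3f} ohm")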

    Reply
  31. Tomi Engdahl says:

    Modular AWGs: How they work and how to use them
    http://www.edn.com/design/test-and-measurement/4442224/Modular-AWGs–How-they-work-and-how-to-use-them?_mc=NL_EDN_EDT_EDN_today_20160623&cid=NL_EDN_EDT_EDN_today_20160623&elqTrackId=adb482fa7eb940d3add219e731872dbd&elq=4976bf49cec34e4d957919930a7ba365&elqaid=32805&elqat=1&elqCampaignId=28654

    The AWG (arbitrary waveform generator), with its near universal selection of waveforms, has become a popular signal source for test systems. Modular AWGs let you add standard or custom waveforms to PCs as part of an automated test station. With an AWG, you can create waveforms by using equations, by capturing waveforms from digitizers or digital oscilloscopes, or you can create your own waveform with manufacturer supplied or third-party software. Waveform sequencing lets you switch among predefined waveforms.

    Today’s modular AWGs offer extended bandwidth, higher sampling rates, and longer waveform memory than previous models. In addition, they offer advanced operating modes and the ability to stream large amounts of waveform data from a PC’s main memory. Before selecting a modular AWG, learn how they work and what they can do.
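
    As a generic example of “creating waveforms by using equations” (not tied to any particular vendor’s API; the sample rate and segment length are assumptions), the sample array for a pulsed sine burst can be computed like this and then handed to whatever upload routine the AWG driver provides:

    import numpy as np

    SAMPLE_RATE = 1.25e9        # assumed AWG sampling rate, 1.25 GS/s
    N_SAMPLES = 8192            # assumed waveform segment length

    t = np.arange(N_SAMPLES) / SAMPLE_RATE
    carrier = np.sin(2 * np.pi * 100e6 * t)                          # 100 MHz sine
    envelope = np.exp(-((t - t.mean()) ** 2) / (2 * (0.5e-6) ** 2))  # Gaussian burst
    waveform = carrier * envelope

    # Most modular AWGs take signed 16-bit samples scaled to full range;
    # the actual upload call depends on the vendor's driver.
    samples_i16 = np.int16(np.round(waveform / np.max(np.abs(waveform)) * 32767))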

    Reply
  32. Tomi Engdahl says:

    Single Molecule Detects Light
    http://hackaday.com/2016/06/23/single-molecule-detects-light/

    Everything is getting smaller all the time. Computers used to take rooms, then desks, and now they fit in your pocket or on your wrist. Researchers that investigate light sensors have known that individual diarylethene molecules can exist in two states: one where it conducts electricity and one where it doesn’t. A visible photon causes the molecule to be electrically open and ultraviolet causes it to close. But there’s a problem.

    Placing electrodes on the molecule interferes with the process. Depending on the kind of electrode, the switch will get stuck in the on or off position. Researchers at Peking University in Beijing determined that placing some buffering material between the molecule and the electrodes would reduce the interference enough to maintain correct operation. What’s more, the switches remain operable for a year, which is unusually long for this kind of construct.

    Single-molecule switch flipped on and off by light
    http://www.rsc.org/chemistryworld/2016/06/single-photosensitive-molecule-switch-controlled-light

    Researchers have produced a photoswitch comprising just one photosensitive molecule whose electrical conductivity can be turned on and off by light. The device may, with further development, have potential in solar energy harvesting and light-sensing applications. It may also be useful in biomedical electronics and optical logic, in which light replaces electrical signals to transmit information.

    Reply
  33. Tomi Engdahl says:

    Brexit Ripples Through Tech Community
    http://www.eetimes.com/document.asp?doc_id=1329996&

    Since the Brexit vote ended with 51.8% in favor of leaving the European Union and 48.2% wishing to remain, the UK will go back to becoming its own sovereign state in a process that the Washington Post notes may take years to complete.

    Britain’s exit from the European Union is expected to impact not only that country, but also the companies that do business there.

    Trip Chowdhry, an analyst with Global Equities Research, noted in a report that these outcomes will likely occur:

    Unrestricted Free Flow of goods, services, materials, and labor from the rest of Europe to Britain will cease.
    Labor costs will go up in Britain.
    There will be an oversupply of certain skills and shortage of others.
    Due to the above, the cost of goods and services will go up, and will make Britain noncompetitive in world markets.
    Britain’s economy will shrink.
    Increase in labor and material costs will reduce or altogether eliminate the profit margins of businesses — thereby reducing the overall value of the businesses based in Britain.
    The net worth of business owners will be reduced, including that of shareholders of public companies; this is one factor that could presage a stock market collapse.

    Reply
  34. Tomi Engdahl says:

    Carbon Nanotubes: Been There, Done That?
    http://www.eetimes.com/author.asp?section_id=36&doc_id=1329999&

    A second wave of CNT consolidation is likely because some key markets are expected to take off faster than others and the rewards are unlikely to be spread evenly.

    Carbon nanotubes (CNTs) were once the favorite “new kid on the block.” They appeared poised to become a revolutionary material that transformed countless industries and helped us create an elevator to the moon. But this technology subsequently fell off the radar for many as it struggled to make commercial headway. The situation was exacerbated when an even newer kid on the block, graphene, stole away all the attention and money.

    Yet despite all this, CNTs are now making a steady but quiet comeback. Multi-wall carbon nanotubes (MWCNTs) have seen their prices fall to as low as $50/kg for unpurified versions in some quarters, which is helping open up a range of markets.

    Carbon nanotubes are already a commercial reality. They are used in conductive plastics and batteries. In conductive plastics, a small loading of CNT additives imparts conductivity to otherwise insulating plastics. The low loading makes it cost effective even if CNTs are more expensive than high-end carbon black on a $/kg basis. Furthermore, the long and thin morphology of CNTs gives better electrical results without affecting the mechanical properties.

    Now, a new wave of applications is emerging. Good progress is being made on the use of carbon nanotubes in moulded 3D-shaped touch-sensitive surfaces.

    Reply
  35. Tomi Engdahl says:

    Now Intel swings axe at sales, marketing peeps
    Top buyers like Lenovo and Acer told to deal direct with Cali mothership
    http://www.theregister.co.uk/2016/06/27/intel_latest_layoffs/

    Intel has turned its axe on sales and marketing staff as part of its ongoing workforce decimation.

    In April, Chipzilla announced it will lay off 12,000 employees worldwide – roughly one in ten of its 107,000 staffers – over the coming months as it weans itself off the dwindling desktop computer market. People working on doomed product lines – such as its Atom mobile processors – were among the first to be let go.

    Reply
  36. Tomi Engdahl says:

    If You Could Have the Ideal Programmable Logic Device, What Would it Be?
    http://www.eetimes.com/author.asp?section_id=28&doc_id=1329998&

    An FPGA advert from 30 years ago really emphasizes the tremendous strides that have been made in programmable logic technology.

    The advert ran in the May 29, 1986 issue of EDN, announcing the availability of the XC-2064 Logic Cell Array. At that time, if you wanted custom logic in your design, you really had only two options — simple programmable logic devices (PLDs) like PROMs at one end of the spectrum and the Gate Array (GA) class of Application-Specific Integrated Circuits (ASICs) at the other. (The moniker “Logic Cell Array” was intended to contrast with “Gate Array” — over time this would transition into the Field-Programmable Gate Array (FPGA) appellation we know and love today.)

    The XC-2064 comprised an 8 × 8 array of 64 Logic Cells, each containing a 4-input lookup table (LUT) and a flip-flop, along with 38 GPIOs. That was it. There wasn’t even any RAM.
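
    To make the “lookup table plus flip-flop” idea concrete, here is a toy model of one such Logic Cell (my own illustration, not Xilinx documentation): a 4-input LUT is just 16 configuration bits indexed by the four inputs, with an optional registered output.

    class LogicCell:
        """Toy model of an early FPGA logic cell: 4-input LUT plus flip-flop."""

        def __init__(self, truth_table_bits):
            assert len(truth_table_bits) == 16   # one output bit per input combination
            self.lut = list(truth_table_bits)
            self.ff = 0                          # flip-flop state

        def combinational(self, a, b, c, d):
            return self.lut[(d << 3) | (c << 2) | (b << 1) | a]

        def clock(self, a, b, c, d):
            """Registered output: capture the LUT output on a clock edge."""
            self.ff = self.combinational(a, b, c, d)
            return self.ff

    # Configure the LUT as a 4-input AND gate: only input combination 0b1111 outputs 1.
    and4 = LogicCell([0] * 15 + [1])
    print(and4.combinational(1, 1, 1, 1), and4.combinational(1, 0, 1, 1))   # 1 0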

    On the bright side, there was sophisticated (for the time) development software available.

    The problem at that time was that PLDs were cheap and they were a known quantity. Also, their input-to-output timing was deterministic (it was specified in the datasheet); by comparison, the XC-2064's timing depended on how you partitioned your logic into the LUTs and how you routed the LUTs together.

    Who could have imagined that FPGAs would evolve from that cuddly little rascal to today’s behemoths with their multi-hundred-megahertz clocks, multi-hundred-megabit RAMs, hundreds of thousands of LUTs, thousands of DSP blocks, thousands of GPIOs, embedded processor cores, high-speed transceivers… the list goes on.

    Reply
  37. Tomi Engdahl says:

    Battery chargers to become more efficient by 2018
    http://www.edn.com/electronics-blogs/eye-on-efficiency/4442238/Battery-chargers-to-become-more-efficient-by-2018-?_mc=NL_EDN_EDT_EDN_today_20160628&cid=NL_EDN_EDT_EDN_today_20160628&elqTrackId=abf68518ba6b4a5caab14be75d4be921&elq=6aaeebdd0dd347a19c400c52d54c1e85&elqaid=32846&elqat=1&elqCampaignId=28687

    Early this month, the U.S. Department of Energy (DOE) published a final rulemaking describing the country’s first Energy Conservation Standards for Battery Chargers (BCs). It’s been a long time coming – the Department first proposed BC efficiency requirements in early 2012.

    The charger regulation is based on a single metric, Unit Energy Consumption (UEC). It limits the annual energy consumption for 7 different classes of BCs. Expressed as a function of battery energy (Ebatt), a charger’s UEC reflects the “non-useful” energy consumed in all modes of operation (i.e. the amount of energy consumed but not transferred to the battery as a result of charging).
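
    In other words, UEC is the energy a charger draws from the wall over a year minus the energy that actually ends up in the battery. A sketch of the idea (my own illustration of the metric as described above, not the DOE's exact per-class formula; all numbers are assumptions):

    def unit_energy_consumption_kwh(annual_input_kwh, charges_per_year, e_batt_wh):
        """Non-useful annual energy: everything drawn from the wall that is not
        stored in the battery (charging losses, maintenance and standby modes)."""
        useful_kwh = charges_per_year * e_batt_wh / 1000.0
        return annual_input_kwh - useful_kwh

    # Hypothetical phone charger: 12 kWh/year drawn, 300 charges of a 10 Wh battery
    print(f"UEC = {unit_energy_consumption_kwh(12.0, 300, 10.0):.1f} kWh/year")   # 9.0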

    The DOE decided to exclude “back-up” battery chargers and uninterruptible power supplies (UPSs) from this ruling because of specific product testing issues. The Department is currently working on a separate rulemaking for UPSs.

    2016-04-29 ENERGY CONSERVATION PROGRAM: TEST PROCEDURES FOR UNINTERRUPTIBLE POWER SUPPLY
    http://energy.gov/eere/buildings/downloads/2016-04-29-energy-conservation-program-test-procedures-uninterruptible

    Reply
  38. Tomi Engdahl says:

    The Road To 5nm
    http://semiengineering.com/the-road-to-5nm/

    Why the next couple nodes will redefine the semiconductor industry.

    There is a strong likelihood that enough companies will move to 7nm to warrant the investment. How many will move forward to 5nm is far less certain.

    Part of the reason for this uncertainty is big-company consolidation. There are simply fewer customers left who can afford to build chips at the most advanced nodes. Intel bought Altera. Avago bought Broadcom. NXP bought Freescale. GlobalFoundries acquired IBM’s microelectronics business unit. While these acquisitions provide synergies and capabilities that didn’t exist in-house for the acquiring companies, they don’t necessarily result in twice as much volume at the leading-edge.

    Moreover, the benefits of shrinking features are not as obvious as in the past. They used to be a guarantee of better performance and lower power at a lower cost. That’s no longer the case. Cost per transistor is increasing, and there is no indication that will change.

    Cisco, IBM, Apple and even Intel are all exploring different approaches to improve throughput to memory, including new communications schemes across multiple dies in a package and new memory types. Whether that happens at 28nm or 16/14nm is less important than how memories are accessed, how much flexibility is built into architectures, and whether physical effects can be adequately addressed.

    Reply
  39. Tomi Engdahl says:

    Never-never chip tech Memristor shuffles closer to death row
    Execution warrant close to being signed for Fink’s folly
    http://www.channelregister.co.uk/2016/06/28/memristor_moves_closer_to_death_row/

    The Memristor always was a rich company’s technology toy, but Meg Whitman wants HPE to be lean and mean, not fat and wasteful, with HPE Labs producing blue sky tech that rarely becomes a product success.

    Memristor was first reported by HPE Labs eight years ago, as a form of persistent memory. At the time HP Labs Fellow R. Stanley Williams compared it to flash: “It holds its memory longer. It’s simpler. It’s easier to make – which means it’s cheaper – and it can be switched a lot faster, with less energy.”

    Unfortunately it isn’t simpler to make and still isn’t here. NVMe SSDs have boosted flash’s data access speed, reducing the memory-storage gap, and Intel/Micron’s 3D XPoint SSDs will arrive later this year as the first viable productised technology to fill that gap.

    WDC’s SanDisk unit is working on ReRAM technology for its entry into storage-class memory hardware, and HPE has a partnership with SanDisk over its use.

    SanDisk foundry partner Toshiba has a ReRAM interest.

    WDC’s HGST unit has been involved with Phase Change Memory.

    IBM has demonstrated a 3bits/cell Phase Change Memory (PCM) technology.

    Samsung has no public storage-class memory initiative, although it has been involved in STT-RAM.

    The problem for HPE with Memristor is that it would need volume manufacturing to get the cost down.

    Reply
  40. Tomi Engdahl says:

    The EAGLE has landed: at Autodesk!
    http://hackaday.com/2016/06/29/the-eagle-has-landed-at-autodesk/

    The selloffs continue at Farnell! We’d previously reported that the UK distributor of electronics parts was being sold to a Swiss distributor of electronics parts. Now it looks like they’re getting rid of some of their non-core businesses, and in this case that means CadSoft EAGLE, a popular free-for-limited-use PCB layout suite.

    But that’s not the interesting part: they sold EAGLE to Autodesk!

    Autodesk has a great portfolio of professional 3D-modeling tools, and has free versions of a good number of their products. (Free as in beer. You don’t get to see the code or change it.)

    What does this mean for those of you out there still using EAGLE instead of open-source alternatives?

    Reply
  41. Tomi Engdahl says:

    Power Electronics Driven By Electric Vehicles
    Infineon strongest player in power electronics, analysts say
    http://www.eetimes.com/document.asp?doc_id=1330021&

    Vehicle hybridization will drive the power electronics market for the foreseeable future, in what French firm Yole Développement forecasts as “the golden era for electric vehicles.” If customers adopt the driving technology and governments keep on subsidizing the field, that is.

    The driving power of electric and hybrid vehicles (EHV) was outlined in Yole’s report Status of The Power Electronics Industry 2016, which also focused on market forecasts for wafers, inverters, MOSFETs and IGBTs through 2021. It also touches on industry consolidation.

    Last year was difficult for power electronics as the market contracted, with the global value of power ICs, power modules and discrete components decreasing 3% from $15.7 billion to $15.2 billion mainly due to a drop in the average selling price (ASP) of IGBT modules. EHV will represent a major part of the IGBT module market by 2021 while the power MOSFET market is expected to reach $1.2 billion for all applications in the same timeframe.

    Another trend identified by Yole’s analysts is wide band gap (WBG) technologies, whose adoption is expected to become widespread. Many companies already offer WBG-enabled devices and/or are involved in various R&D projects.

    “WBG devices are used in some applications thanks to the high energy density they offer and their efficiency,”

    Reply
  42. Tomi Engdahl says:

    BASF Sells Its OLED IP to Universal Display
    http://www.eetimes.com/document.asp?doc_id=1330010&

    US-based Universal Display Corporation announced it has acquired the OLED Intellectual Property (IP) assets of German chemical company BASF SE, through its wholly-owned subsidiary UDC Ireland Limited for approximately €87 million ($96 million USD).

    The assets include over 500 issued and pending patents around the world, in 86 patent families, largely consisting of phosphorescent materials and technologies that could directly benefit UDC’s UniversalPHOLED development efforts.

    Reply
  43. Tomi Engdahl says:

    PCIe 4.0 Heads to Fab, 5.0 to Lab
    Next-gen debate–25 or 32 Gbits/s?
    http://www.eetimes.com/document.asp?doc_id=1330006&

    A handful of chips using PCI Express 4.0 are heading to the fab even though the 16 GT/s specification won’t be final until early next year. Once it gets all the details sorted out, the PCI Special Interest Group (PCI SIG) aims to start work in earnest on a 5.0 follow-on running at either 25 or 32 GT/s.

    Cadence, PLDA and Synopsys demoed PCIe 4.0 physical-layer, controller, switch and other IP blocks at the PCI SIG’s annual developer’s conference here. They showed working chips, boards and backplanes that included a 100 Gbit/s Infiniband switch chip using PCIe 4.0.

    It’s been more than six years since the PCI SIG ratified its last major standard, the 8 GT/s PCIe 3.0.

    “We know we can take PCIe to the next generation, we just have to work out the details,”

    The questions about version 5.0 are many. They include determining if it will be backward compatible and still defined as a chip-to-chip link as all PCI standards have been to date.

    “We can’t play the encoding trick,” said Yanes, noting that version 3.0 adopted 128b/130b encoding, up from the 8b/10b used previously.
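
    The “encoding trick” refers to the jump from 8b/10b to 128b/130b line coding at PCIe 3.0, which recovered most of the coding overhead and so cannot be repeated. A quick per-lane throughput comparison (published transfer rates, my own arithmetic):

    def lane_throughput_gbps(gt_per_s, payload_bits, total_bits):
        """Effective per-lane data rate after line-code overhead."""
        return gt_per_s * payload_bits / total_bits

    generations = [
        ("PCIe 2.0", 5.0, 8, 10),      # 8b/10b: 20% overhead
        ("PCIe 3.0", 8.0, 128, 130),   # 128b/130b: ~1.5% overhead
        ("PCIe 4.0", 16.0, 128, 130),
    ]
    for name, rate, payload, total in generations:
        print(f"{name}: {lane_throughput_gbps(rate, payload, total):.2f} Gb/s per lane")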

    Demand will come from the usual suspects. Networking cards already hitting 100 Gbit/s rates will need faster chip links as will next-generation graphics processors and solid-state drives.

    Reply
  44. Tomi Engdahl says:

    Samsung’s 3D V-NAND 32L vs 48L–Just Vertical Expansion?
    http://www.eetimes.com/author.asp?section_id=36&doc_id=1329980&

    TechInsights discusses the structural differences between Samsung’s 32L and 48L 3D V-NAND.

    Reply
  45. Tomi Engdahl says:

    Atomic Layer Etch Heats Up
    http://semiengineering.com/atomic-layer-etch-heats-up/

    New etch technology required at 10nm and below for next-generation transistors and memory.

    The atomic layer etch (ALE) market is starting to heat up as chipmakers push to 10nm and beyond.

    ALE is a promising next-generation etch technology that has been in R&D for the last several years, but until now there has been little or no need to use it. Unlike conventional etch tools, which remove materials on a continuous basis, ALE promises to selectively and precisely remove targeted materials at the atomic scale.

    It now is moving from the lab to the fab. Applied Materials, for example, has officially entered the next-generation etch market by rolling out a new tool technology.

    Meanwhile, Hitachi High-Technologies, Lam Research and Tokyo Electron Ltd. (TEL) are also working on ALE tools.

    If it lives up to its promises, this technology could help enable the next wave of scaled memory and logic devices. For example, in a 16nm/14nm finFET, the trenches or gaps between the individual fins might be on the order of about 40 angstroms, or roughly 10 atoms, across. (An angstrom is 0.1nm.) As the industry migrates from 10nm to 7nm finFETs, the trenches or gaps between the fins will shrink to between 10 and 15 angstroms, or about 5 atoms, across.

    Typically, chipmakers use an etch tool to remove materials within these tiny trenches during the fabrication flow. But there are signs that conventional etch tools are struggling to do this job at these advanced nodes.

    ALE is just one of the technologies required for chip scaling. In fact, the industry will require a host of innovative techniques that can process structures at the atomic level.

    Atomic layer processing involves ALE and a related technology, atomic layer deposition (ALD). Generally, ALD is a process that deposits materials layer-by-layer at the atomic level, enabling thin and conformal films on devices. While ALE is just moving into the fab, ALD has been in production for several years. ALD is used in several applications, such as high-k/metal gate and multiple patterning.

    Reply
  46. Tomi Engdahl says:

    Researchers at Hokkaido University have developed a memory structure that utilizes both electrical and magnetic signals. The Japanese researchers believe the new concept could double the capacity of traditional storage devices such as USB flash memories.

    The idea is to increase the capacity of a memory cell by using strontium cobalt oxide (SrCoOx) in two different structural forms. The cell can be switched between an insulating, non-magnetic state and a metallic, magnetic state, and back again, by means of an electrochemical oxidation/reduction reaction. A memory working on this principle could store magnetic information (A/B) in addition to the usual electrical information (0/1).

    Source: http://etn.fi/index.php?option=com_content&view=article&id=4638:usb-muistin-kapasiteetti-kaksinkertaiseksi&catid=13&Itemid=101

    Reply
  47. Tomi Engdahl says:

    Lighting giant Osram was split into two companies as of July. In the separation, lamps, LEDs and intelligent lighting solutions were transferred to Ledvance, while Osram focuses on the development and sale of new lighting solutions.

    Source: http://www.uusiteknologia.fi/2016/06/30/osram-perusti-lediyrityksen/

    Reply
  48. Tomi Engdahl says:

    Nano-structured InGaN LED Yields White Light
    http://www.eetimes.com/document.asp?doc_id=1330024&

    The holy grail of LED lighting, achieving white light in the most efficient and cost-effective way, is a hot topic, both among established manufacturers and in academia.

    Traditional approaches include color down-conversion: combining high-energy LEDs emitting in the blue or near-ultraviolet band with a mix of phosphors that re-emit at different wavelengths. Generally, this approach emulates an incomplete white-light spectrum at a lower quantum efficiency than the original emitter (the LED covered in phosphor). The phosphors’ limited lifetime compared to that of the actual LED illuminating them can also negatively impact the overall longevity of the white light.

    Other solutions combine multiple LED dies emitting at different peak wavelengths, but here again, white is a short-lived illusion, missing out on the natural continuum of true white light.

    A team of researchers from the University of Hong Kong is confident broadband white light could be obtained from monolithic LED dies. In their recently published ACS Photonics paper “Monolithic Broadband InGaN Light-Emitting Diode”, the researchers disclose promising results using high indium content InGaN-GaN quantum well structures grown on a sapphire substrate.

    Reply
  49. Tomi Engdahl says:

    Will Open-Source Work For Chips?
    http://semiengineering.com/will-open-source-work-for-chips/

    So far nobody has been successful with open-source semiconductor design or EDA tools. Why not?

    Open source is getting a second look by the semiconductor industry, driven by the high cost of design at complex nodes along with fragmentation in end markets, which increasingly means that one size or approach no longer fits all.

    The open source movement, as we know it today, started in the 1980s with the launch of the GNU project, which was about the time the electronic design automation (EDA) industry was coming into existence. EDA software is used to take high-level logical descriptions of circuits and map them into silicon for manufacturing. EDA software pricing starts in the five digits, even for the simplest of tools, and you can tack on another two or three zeros for the suite of tools necessary to fully process a design. On top of this, manufacturing costs start at several million dollars.

    In addition, a modern-day chip, such as the one in your cell phone, contains hundreds of pieces of semiconductor intellectual property (IP cores or blocks), and each one of these has to be licensed from a supplier, often paying licensing fees and royalties for every chip manufactured. The best known are processor cores supplied by ARM, but there also are memories, peripherals, modems, radios, and a host of other functions.

    The industry would appear ripe for some open source efforts so that the cost of designing and producing chips could be lowered, or perhaps better designs could be envisioned by drawing on the creativity of a huge number of willing coders. But while some projects have existed in both EDA tools and IP, none have even dented the $5B industry.

    Momentum is building again for change. At the design automation conference (DAC) this year, a number of speakers, executives and researchers addressed open-source hardware, and some new business models are emerging that may get the ball rolling. Some believe that once it gets started, it will be a huge opportunity for the technology industry, but most within the industry are doubtful. The problem is that without the full support of the fabs, IP suppliers and the EDA industry, it is unlikely to happen.

    There are significant differences between hardware and software, even though the languages are similar. One language in particular, SystemC, was meant to finally close that divide by adding constructs necessary to describe hardware into the C++ language. However, even if SystemC was a perfect language, it would not enable software developers to create hardware.

    “We can often do a lot of interesting hardware developments at RTL, but when the rubber meets the road, there is a capital cost associated with going to the fab,” says John Leidel, a graduate student at the data-intensive scalable computing laboratory at Texas Tech University. “This includes doing the masks and all of the verification. That capital cost is often not well understood, especially in academia and national labs.”

    This is in stark contrast to open-source software. “In an open-source software environment you can develop software and it is just software,” Leidel notes. “If it doesn’t work, just rewrite it. There is only a human capital cost to do that. You don’t have to drop another million dollars on a mask set in order to fix your bugs.”

    There are also significant differences in business models. “Innovation starts out as being proprietary,”

    The timescales are also very different. “With silicon spins, it takes two years to get to market,” adds Teich. “No matter what you do, or how good the idea is, that is how long it will take to actually see functioning gates on the market.”

    Reply
