Martin Courtney, Computing, Monday 16 August 2010 at 16:21:00
Martin Courtney looks at the different approaches scientists are taking to harness the power of nanotechnology
Nothing is ever simple in IT, and nanotechnology is no different. For a start, the term nanotechnology can mean different things to different people. For purists, it refers to microscopic structures a nanometre (nm) or less in size – a nanometre being about a billionth of a metre. But many vendors and regulators (see How EC rules affect nanotechnology, below) believe the term can be applied to any structure between 1nm and 100nm in size, which means various nanoscale silicon components and microchips already inhabit many of the computers and other electrical and electronic devices we use today.
“The typical definition that is used for nanotechnology is the analysis and manipulation of matter on a scale of 100nm or less, and just about all the IT hardware currently on the market today has that,” says Dr Paul Seidler, co-ordinator of IBM’s nanoscale exploratory technology laboratory in Zurich. “You find nanoscale structures like that in everything from semiconductors to displays, storage, hard disk drives and memory devices.”
Jim Tully is vice president and chief of research at Gartner, where he specialises in semiconductors. He agrees that the “pure and original meaning” of the term nanotechnology has been altered in the past few years, mainly because of the way nanotechnology is talked about within the IT industry.
“Previously, myself and my colleagues viewed nanotechnology as the pure and original meaning of the term, which was to start with individual atoms or molecules and build them from the ground up into working circuits. Now we see it more in terms of semiconductors made by starting with a big piece of silicon and etching away until you have this very small chip.”
Evolutionary versus revolutionary
According to Seidler, nanotechnology R&D projects can be classed as either evolutionary or revolutionary. The nanotechnology research being conducted by the big IT vendors is taking the evolutionary approach, and is essentially a continuation of the work that has been going on for almost half a century towards one specific goal: how to shrink the size of existing microprocessors while simultaneously increasing their power and decreasing their energy consumption.
“People are looking to nanotechnology to carry on Moore’s Law, which allows chips to be smaller, faster and consume less energy,” says Tully.
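Tully's point can be made concrete with a little arithmetic. A minimal sketch, assuming (purely for illustration – neither figure is from the article) a 45nm starting node in 2010 and one process generation every two years, with linear feature sizes shrinking by roughly 1/√2 per generation:

```python
# Illustrative Moore's-Law scaling: each generation roughly doubles
# transistor density, shrinking linear feature size by ~1/sqrt(2).
# The 45nm starting node and two-year cadence are assumptions for
# illustration, not figures from the article.
feature_nm = 45.0  # assumed starting process node, in nanometres
year = 2010

while feature_nm > 10.0:
    year += 2                  # assume one generation every two years
    feature_nm /= 2 ** 0.5     # linear dimensions shrink by ~0.707x
    print(f"{year}: ~{feature_nm:.0f}nm")
```

On these assumed numbers the projection bottoms out around 8nm by 2020, which is the same territory as the limits researchers cite for top-down silicon.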
To this end, semiconductor giants such as Intel, IBM, HP and others are looking to build ever-smaller silicon circuits for use on processors and communications interfaces, paving the way for smaller, faster, more power-efficient computer systems and other devices.
Global revenue from all nanotechnologies, IT-related and otherwise, is predicted to reach $2.5tn (£1.6tn) by 2015, according to Lux Research, with as much as 50 per cent of all electronics and IT output expected to be nano-enabled by then.
“With nanoscale technology, there is the potential to be substantially cheaper, which will lead people to throw away the old stuff even though it is fully working. There is a whole engine for growth [in the IT industry] which nanotechnology can drive,” says Tully.
But while semiconductor companies have been extremely successful in building silicon chips based on sub-100nm structures, getting those down to 10nm or lower is much more difficult. As well as the difficulty of building manufacturing equipment able to control and manipulate structures on such a tiny scale, there are physical limits on what can be done with silicon as a material.
“When you make structures using photolithography, you use either etching or deposition, and the roughness of the edges you create depends on the physical and chemical processes you carry out,” says Seidler. “That doesn’t scale with the dimensions, however, and there is some sort of natural limit – where a line under 100nm wide has a roughness of 10nm, for example, and that 10 per cent is a big problem.”
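Seidler's roughness problem is easy to see with simple arithmetic: if the process leaves a roughly fixed edge roughness however narrow the line, the relative error grows as lines shrink. A minimal sketch, using the 10nm roughness figure from his example (the line widths are assumed for illustration):

```python
# If photolithography leaves a roughly fixed ~10nm edge roughness
# regardless of line width, the *relative* error grows as lines
# shrink. The 10nm figure is from Seidler's example; the line
# widths below are assumed for illustration.
ROUGHNESS_NM = 10.0

for width_nm in (1000, 100, 50, 20):
    relative = ROUGHNESS_NM / width_nm * 100
    print(f"{width_nm}nm line: roughness is {relative:.0f}% of the width")
```

A 1,000nm line barely notices a 10nm wobble; a 20nm line is half wobble.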
“The way that current systems have been made – where we etch and plate and build up and do a bit more etching – is a top-down type of technology which can get to very small sizes, but looks as if it will be fined down to about 8nm,” says Tully.
Enter the revolutionary
The barriers to the construction of nanoscale components could disappear when molecular methodologies become more mature, however, largely because the process becomes less reliant on etching silicon and more on chemical reactions that “grow” structures on a substrate material.
“The question is what to do beyond that, with the kind of pure [molecular] nanotechnology assembly that is a totally different process based on chemistry where you literally make things in a beaker,” says Tully.
This so-called revolutionary approach to nanotechnology is mostly still at the research stage, with one technology, carbon nanotubes, attracting particular scientific attention due to their unique electrical properties, which can be harnessed for semiconductors and computer displays.
“Carbon nanotubes have genuinely interesting electrical properties, which mean they can act as either conductors or insulators and have integrated circuits built around them,” says Tully.
“The revolutionary side is looking for an alternative to the transistor as the switching element within circuits,” says Seidler. “But the dilemma with molecular electronics is that nobody really knows what the device is yet – and if it is just another structure that uses electrons to move the charge around, it is not very different, and energy consumption is still a big problem.”
Cheaper by the beaker
There is still much work to be done with carbon nanotubes before their potential use in semiconductors can be realised. However, if nanotubes are used alongside new types of plastic materials, or polymers, the resulting components could be produced far more cheaply and easily than equivalent silicon structures – something that could have a huge impact on the IT industry as a whole.
“The thing about polymer-based electronics is that it has the capacity to go even lower [on cost], while the manufacturing facilities will not cost anywhere near as much,” says Tully.
“Right now, if somebody wants to build a new semiconductor wafer plant with state-of-the-art technology, that will cost $5bn to $6bn, and there is only room in the world for about two or three companies able to do that. But with polymer-based facilities, that manufacturing capability could be extended to thousands of businesses which could fabricate their own type of semiconductors rather than buy in silicon chips from elsewhere.”
A move towards the light
Clearly, the ultimate success of molecular nanotechnology hinges on finding alternatives to silicon for use in semiconductors – materials that can be sourced and worked into integrated circuits at lower cost.
To that end IBM, Intel, HP and others are also working on a technology that uses a completely different medium to transmit data signals within microprocessors themselves: light.
Photonics involves generating, transmitting and manipulating light, which travels faster than an electrical signal in a wire, suffers less signal loss and allows faster switching, all while using significantly less power.
IBM and others are also looking at on-chip solutions in which signals are routed around the processor over nanoscale optical links, squeezing as many as 2,000 switches into an area measuring one square millimetre.
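As a back-of-the-envelope check on that density figure (only the 2,000-switches-per-square-millimetre number is from the article; the per-switch footprint is derived arithmetic):

```python
# Average footprint implied by 2,000 optical switches in 1mm^2.
switches = 2000
area_um2 = 1000 * 1000            # 1mm^2 in square micrometres
per_switch = area_um2 / switches  # average area per switch
side = per_switch ** 0.5          # side of an equivalent square

print(f"~{per_switch:.0f} square micrometres per switch "
      f"(a ~{side:.1f} micrometre square)")
```

Each switch therefore occupies, on average, a patch a little over 22 micrometres across – enormous by transistor standards, but compact for an optical component.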
In March, the company produced a nanophotonic avalanche photodetector “a few tens of atoms” in size and designed for high-bandwidth optical communications. The photodetector converts faint optical signals into electrical signals, and can process data at speeds of 40Gbit/s using 1.5V. The device uses a mixture of silicon and germanium, meaning it can be built using existing silicon manufacturing techniques.
The EU has also spent €1.92m (£1.6m) funding research into on-chip photonic systems, with German scientists recently creating simple networks of organic nanowires for use in next-generation electronic and optoelectronic components.
This research, dubbed the PHODYE project, was started in 2006 with the aim of constructing new sensor devices that combine gas-sensitive dye films and photonic structures into components able to monitor poisonous substances in the air. But the research also led to the creation of a methodology for connecting organic nanowires that may eventually be used to make cheaper and more flexible transistors and diodes on the micro and nanoscale for use in electronic circuits.
The nanowires are grown on the surface using silver nanoparticles, by precisely controlling the substrate temperature, molecule flow and surface treatment, so that the new growth maintains electrical contact with the original wires.
Elsewhere, the Photonics Research Group at Ghent University used high-resolution optical lithography techniques to produce a world map. The structure was relatively large, with the smallest features measuring about 100nm. Its significance lay in the use of tiny strips of silicon, called waveguides or photonic wires, which exhibit low signal losses and couple well to external light sources such as optical fibre – making it much easier to integrate them with other components in electronic devices.
As well as exploring photonics for short-range, high-bandwidth networks, HP is investigating how silicon-based photonics can speed up chip-to-chip data transfers, particularly in datacentre servers, where the need for low-power, high-speed data transfer is especially acute.
What happens next
But where does all this research take the IT industry in terms of product realisation, and how long will it take to get there? Tully believes that while working systems will be demonstrated within the next few years, it will take a lot longer for computers and components based on molecular nanotechnology to become commercially viable.
“We will not all be out buying computers based on carbon nanotubes within five years’ time – that is not going to happen,” says Tully. “But demonstrations of reasonably complex subsystems based on nanotechnology can be done within that time frame.”
More likely, says Seidler, is that these new nanoscale technologies will be successfully integrated with existing silicon structures first.
“It is not like we are going to see entire CMOS and CPU architecture replaced with carbon nanotubes, more likely that bits and pieces will be replaced first to create customised integrated circuits,” he says.
How EC rules affect nanotechnology
With so many contradictory opinions on what should and should not be defined as nanotechnology, it is no wonder the regulators are taking an interest in ensuring that researchers and manufacturers are all working to the same idea.
In July, the European Commission’s Joint Research Centre published a report on factors that should be considered when defining nanomaterials with the aim of reducing ambiguity and confusion for regulators, industry and the general public.
The report, Considerations on a Definition of Nanomaterial for Regulatory Purposes, was prepared at the request of the European Parliament. It recommends that the specific term “particulate nanomaterial” should rely only on size (1nm to 100nm) as the defining property, and that it should be recognised and used in appropriate legislation to avoid inconsistencies.
What little regulatory attention has been applied to nanotechnology in Europe so far, however, has been primarily concerned with the use of materials and nanoparticles that could be potentially hazardous to human health and the environment. A proposed amendment to the directive on the restriction of hazardous substances (RoHS) was tabled in June 2010 to include nanosilver and carbon nanotubes, which may be used in the formation of molecular nanoscale semiconductors and integrated circuits destined for electrical and electronic equipment.
The EC has also introduced the registration, evaluation, authorisation and restriction of chemicals (Reach) code of practice, but it only covers manufacturers producing more than one tonne of an applicable substance per year – which amounts to a huge number of nanoparticles (by contrast, the Canadian government has set a threshold of 1kg).
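To get a sense of how many particles one tonne represents, a rough calculation helps. A minimal sketch, assuming (for illustration only) spherical 100nm-diameter silver particles – the particle size and material are assumptions, not details from the article:

```python
import math

# Rough scale of the Reach one-tonne threshold in particle counts.
# Assumes spherical 100nm-diameter silver particles (density
# 10,490 kg/m^3); these are illustrative assumptions. Only the
# one-tonne and 1kg thresholds come from the article.
DENSITY_KG_M3 = 10_490.0
radius_m = 50e-9  # 100nm diameter
particle_mass_kg = DENSITY_KG_M3 * (4 / 3) * math.pi * radius_m ** 3

per_tonne = 1000.0 / particle_mass_kg  # EU Reach threshold
per_kg = 1.0 / particle_mass_kg        # Canadian threshold

print(f"~{per_tonne:.1e} particles per tonne vs ~{per_kg:.1e} per kg")
```

On these assumptions a tonne corresponds to roughly 10^20 particles, which gives some idea of why critics consider the European threshold generous.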
The big problem with this code of conduct, say critics, is that it is voluntary and is unlikely to be adopted by European companies if they feel it restricts their ability to compete for customers and revenue.
A strategy published by the Department for Business, Innovation and Skills in March set out ways in which the UK can promote the responsible development of all nanotechnologies across all sectors, including electrical and electronic equipment. In the true spirit of bureaucracy, the department has set up a nanotechnology issues dialogue group to co-ordinate government activity and monitor progress, but specific regulation has so far failed to materialise.