People show their emotions in many specialized ways, some of which a computer can be programmed to detect. Using nanotechnology, a camera and image-analysis software, some computers can observe a user's body language and, with proper programming, accurately interpret posture, restlessness and facial expressions such as grimacing, smiling or scowling. Advances in nanotechnology also provide onboard sensors that can monitor heartbeat, breathing rate, fluctuations in blood pressure, and subtler changes such as skin temperature and voice inflection.
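The idea of combining several such sensor readings into one signal can be sketched in a few lines. The sensor names, baseline values and weighting scheme below are illustrative assumptions for this article, not the API of any real device:

```python
# Hypothetical sketch: combine biometric readings into a single "arousal"
# score by measuring each signal's relative deviation from the user's
# resting baseline. All names and numbers here are made up for illustration.

def arousal_score(readings, baselines, weights=None):
    """Weighted sum of each reading's relative deviation from baseline.

    A score near 0.0 means the user is close to their resting state;
    larger scores suggest greater physiological agitation.
    """
    if weights is None:
        # Default: weight every signal equally.
        weights = {name: 1.0 / len(readings) for name in readings}
    score = 0.0
    for signal, value in readings.items():
        baseline = baselines[signal]
        score += weights[signal] * abs(value - baseline) / baseline
    return score

# Example baselines and two snapshots (values are illustrative only).
baselines = {"heart_rate": 65, "breathing_rate": 14, "skin_temp": 33.5}
calm      = {"heart_rate": 66, "breathing_rate": 14, "skin_temp": 33.4}
agitated  = {"heart_rate": 95, "breathing_rate": 22, "skin_temp": 34.8}

print(arousal_score(calm, baselines) < arousal_score(agitated, baselines))  # True
```

Even this toy version shows why baselines matter: the same heart rate can be calm for one person and agitated for another, so the raw readings mean little without a per-user reference point.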
Because human skin can transmit electrical signals, and those signals can serve as a transmission medium, nanotechnology researchers have already developed computers fitted with nano-sensors that can, in effect, 'see' and 'hear' the people using them. It is likely only a matter of time before the technology exists to create a computer that can reliably tell whether its user is in high spirits or in a bad mood.
Scientists believe it is entirely possible to build a nanotechnology-equipped computer that interprets a user's mood from body language, tone of voice and facial expressions, and that adjusts itself accordingly, for example by displaying images designed to evoke comfort and serenity. Emotions, however, are ambiguous, transient and ultimately difficult to interpret, so even a highly advanced system would struggle to construe the full range of human mood variations on its own. To operate with any modicum of precision, it would need the user to supply the required data in advance.
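That pre-supplied data could take the form of labelled examples: the user records what their own signals look like when calm or agitated, and the computer classifies a new reading against those examples. The following is a minimal sketch of that calibration idea; the feature names, numbers and nearest-example rule are assumptions made for illustration, not a description of any actual system:

```python
# Hypothetical sketch of user pre-calibration: classify a new reading by
# finding the closest user-labelled example (Euclidean distance over the
# shared feature names). Feature names and values are illustrative only.

def classify_mood(sample, calibration):
    """Return the mood label of the calibration example nearest to `sample`."""
    def distance(a, b):
        return sum((a[k] - b[k]) ** 2 for k in a) ** 0.5
    nearest = min(calibration, key=lambda ex: distance(sample, ex["features"]))
    return nearest["mood"]

# Labelled examples the user supplied in advance.
calibration = [
    {"mood": "calm",     "features": {"voice_pitch": 110, "movement": 0.1}},
    {"mood": "agitated", "features": {"voice_pitch": 160, "movement": 0.8}},
]

print(classify_mood({"voice_pitch": 150, "movement": 0.7}, calibration))
# prints "agitated"
```

The sketch also makes the article's caveat concrete: a nearest-example rule can only ever return one of the moods the user thought to record, so ambiguous or mixed emotional states fall outside what it can express.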
Nanotechnology's sensor-based capabilities give programmers little trouble with 'intelligence'-based tasks such as diagnosing a medical condition or playing a game of chess, yet even with the major advances of recent years it remains a challenge to design computers that accurately simulate human vision, hearing, language interpretation or motor control.
Human vision and the other sensory perceptions evolved over hundreds of millions of years, and how and why they work is still difficult to understand, let alone simulate, whereas skills like mathematics are explicitly taught and are therefore far easier to express in a computer program.
Programmers are also attempting to apply nanotechnology advances in programs intended to determine a person's wishes regarding resuscitation should they fall ill and be unable to make that decision for themselves. Although this information could, in theory, be useful to medical teams, caution should be exercised whenever we allow a machine to decide matters of ethics. Regardless of the technology involved, machines are not equipped to distinguish what is intrinsically right from what is wrong.