Portfolio of Projects on Human Computer Interaction (HCI)


Modeling the Simple and Complex Cells in Visual Cortex Area 1 and 2 (V1, V2) for Form Detection


Guide: Dr. Atanendu Sekhar Mandal, Scientist - F, Central Electronics Engineering Research Institute, Pilani, INDIA. 

Simple cells in visual cortex area 1 (V1) are known for their orientation selectivity and orientation tuning. Complex cells, on the other hand, respond well to at least some complex stimuli and explicitly represent complex shape information, suggesting specific types of higher-order visual information that area 2 (V2) cells extract from visual scenes. In this work, mathematical models for the simple and complex cells are presented, and a framework for form detection is built on these models.
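The paper's exact formulations are not reproduced on this page. As a hedged illustration of the standard approach, a V1 simple cell can be modeled as an oriented Gabor filter and a complex cell with the quadrature-pair energy model; the Python sketch below uses illustrative filter sizes and parameters, not the values from the paper.

import numpy as np

def gabor(size, wavelength, theta, phase, sigma):
    # Gabor receptive field: the classic linear model of a V1 simple cell.
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)    # rotate into the preferred orientation
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr ** 2 + yr ** 2) / (2 * sigma ** 2))  # Gaussian envelope
    carrier = np.cos(2 * np.pi * xr / wavelength + phase)       # sinusoidal carrier
    return envelope * carrier

def simple_cell_response(patch, rf):
    # Half-rectified linear filtering: orientation-tuned and phase-sensitive.
    return max(float(np.sum(patch * rf)), 0.0)

def complex_cell_response(patch, wavelength, theta, sigma, size=21):
    # Energy model: the squared responses of a quadrature (0 and 90 degree
    # phase) pair of simple cells sum to a phase-invariant response.
    even = gabor(size, wavelength, theta, 0.0, sigma)
    odd = gabor(size, wavelength, theta, np.pi / 2, sigma)
    return float(np.sum(patch * even)) ** 2 + float(np.sum(patch * odd)) ** 2

Here patch is an image patch of the same size as the filter; pooling complex-cell responses across orientations and positions is the usual starting point for form detection.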


The flow chart of the algorithm and its explanation are provided in the poster below. 




Publications: 
1) Modeling the Simple and Complex Cells in Visual Cortex Area 1 and 2 (V1, V2) for Form Detection, in the International Symposium on Medical Imaging - Perspectives on Perception & Diagnostics, held in conjunction with the Indian Conference on Computer Vision, Graphics & Image Processing, 2010. 

Design & Development of an Adaptive Search Engine with Near-Real-Time Performance for Use in Assistive Applications



This project aims at helping the elderly in old-age homes, who often have very little family support, to connect to the world through simple voice-driven commands. 

To make this adaptive to every person, we devised a mechanism that divides every exchange into an action-reaction analogy: every sentence the user says (the input) is recorded, and over time an optimal match for that sentence is returned (the output). 

In the program, every input sentence is called an "Action" sentence. This Action sentence triggers the search engine, which looks for the output (referred to as the "Reaction" sentence). 

The most important challenge of the project was to selectively pick two sentences from a large set of sentences collected over time. The map created between the two sentences was then verified over several iterations before a one-to-one map could be successfully created. 

This process was carried out for all sets of sentences spoken by the elderly user. After several attempts, it was found that the algorithm is able to generate a one-to-one map for all the input sentences. 
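The matching rule itself is not spelled out on this page; the sketch below is a hypothetical, simplified stand-in that stores Action-Reaction pairs and answers a spoken sentence with the Reaction of the closest stored Action. The string-similarity measure and the 0.6 threshold are illustrative assumptions.

from difflib import SequenceMatcher

class ActionReactionStore:
    # Hypothetical stand-in for the Action -> Reaction search described above.
    def __init__(self):
        self.pairs = {}  # Action sentence -> Reaction sentence

    def learn(self, action, reaction):
        # Record one verified Action-Reaction pair of the one-to-one map.
        self.pairs[action.lower().strip()] = reaction

    def respond(self, spoken, threshold=0.6):
        # Return the Reaction of the closest stored Action, if close enough.
        spoken = spoken.lower().strip()
        best, best_score = None, 0.0
        for action, reaction in self.pairs.items():
            score = SequenceMatcher(None, spoken, action).ratio()
            if score > best_score:
                best, best_score = reaction, score
        return best if best_score >= threshold else None

store = ActionReactionStore()
store.learn("turn on the radio", "Radio switched on.")
print(store.respond("please turn on the radio"))  # -> "Radio switched on."

A linear scan like this does constant work per stored sentence, which is consistent with the O(n) behavior reported below.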

The idea behind the adaptive algorithm is to develop an application that can cater to elderly people and their ever-growing needs. The algorithm has modules that learn from human behavior and actions (which are logged and later analyzed).


Run-Time Complexity:

The algorithm was tested on large sets of randomly generated input data, and its run-time complexity was found to be O(n). Techniques from artificial intelligence, fuzzy logic, and neural networks were used in the project. 
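As an illustration of how linear scaling can be verified (the original test harness is not shown here; the scan function, keyword, and sizes below are stand-ins):

import random
import string
import time

def random_line(words=8):
    # Build one random log line of short lowercase words.
    return " ".join("".join(random.choices(string.ascii_lowercase, k=5))
                    for _ in range(words))

def count_matches(lines, keyword):
    # Stand-in O(n) pass over the log: constant work per line.
    return sum(keyword in line for line in lines)

# For an O(n) algorithm the time per line should stay roughly constant as
# the file grows, so time vs. size plots as a straight line.
for size in (10_000, 100_000, 1_000_000):
    lines = [random_line() for _ in range(size)]
    t0 = time.perf_counter()
    count_matches(lines, "hello")
    dt = time.perf_counter() - t0
    print(f"{size:>9} lines: {dt:.3f} s ({dt / size * 1e6:.2f} us/line)")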


Result: 

To measure the time complexity, a log file with a fairly large input (10^6 lines) was created, and the time taken by the algorithm to analyze the file was recorded. The O(n) time complexity was obtained from the graph plotted below. 




The figure above shows the number of matches between Action-Reaction pairs vs. the input file size. 

The flow chart below explains the entire working of the project. 



This approach will also be presented in the proceedings of HCII 2011 as a short paper. (To appear)

The entire project is part of a bigger project, Touch-Lives Initiatives.

Future Implementation: 
Presently, it is a mobile-based application. Future work aims to make the device invisible and to deliver suggestions as spoken commands instead of the present text display, and to make the system available to the elderly at a very low cost.


Publications: 

1) An Optimal Human Adaptive Algorithm to Find Action-Reaction Word Pairs. (Short Paper, In Press) Accepted for publication at the 14th International Conference on Human-Computer Interaction (HCII 2011). 

Feasibility Study of Using Driving Simulator to Investigate the Effect of Temporary Rumble Strips




In this project, we report a study on the feasibility of using a driving simulator to investigate the effect of temporary rumble strips on the vehicle and driver. This study is part of a project that investigates how to effectively use rumble strips to reduce vehicle speed and increase driver alertness when entering a short-term work zone. To reduce costs and avoid safety hazards, it is highly beneficial to use a driving simulator to study rumble strip design and placement instead of relying entirely on field tests. 


However, it is crucial to establish the validity and understand the restrictions of using the driving simulator. In this project, a sensor mote equipped with accelerometers is used to characterize the dynamics when a vehicle drives across rumble strips in field tests. Based on the acquired data, the driving simulator is instrumented to simulate the motions caused by the rumble strips.
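As a sketch of how such accelerometer traces can be characterized (the study's actual analysis pipeline is not detailed on this page; the sampling rate and strip spacing below are assumptions):

import numpy as np

def dominant_vibration(accel_z, fs):
    # Estimate the dominant vibration frequency (Hz) and the RMS amplitude
    # from a vertical-axis accelerometer trace sampled at fs Hz.
    accel_z = np.asarray(accel_z, dtype=float)
    accel_z -= accel_z.mean()                   # remove gravity / DC offset
    spectrum = np.abs(np.fft.rfft(accel_z))
    freqs = np.fft.rfftfreq(len(accel_z), d=1.0 / fs)
    peak = freqs[np.argmax(spectrum[1:]) + 1]   # skip the DC bin
    rms = np.sqrt(np.mean(accel_z ** 2))
    return peak, rms

# Sanity check: strips spaced about 0.3 m apart, crossed at roughly 11 m/s
# (about 25 mph), should produce a spectral peak near 11 / 0.3, i.e. ~37 Hz.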


Here I successfully simulated dynamics similar to those of a car in the field tests at relatively low speeds (25 mph). At higher speeds, however, the limited 2D movement of the driving simulator made the noise more prominent than the simulated rumble-strip signal. 


The results obtained for 10 mph are shown below - 




The methodology used is highlighted below - 



A picture of the rumble strip used - 



For more information please contact: Dr. Nigamanth Sridhar, Dr. Wenbing Zhao, Arpit Agarwal



Driver Drowsiness Detection System (D3S)


Self Initiated Project. 




Designed a real-time drowsiness detection system in which input parameters (physiological: eye blink rate, eye blink time, eye gaze; performance: steering wheel angle, accelerator and brake pedal handling) are fed into a new Adaptive Neuro-Fuzzy Inference System (ANFIS) algorithm, which decides the driver's drowsiness level from each parameter's deviation from normal behavior. 


Once the drowsiness level is predicted, a very specific alarm (referred to as the "brainwave") is generated, which increases the driver's alertness and brings his/her drowsiness level out of the dangerous range. 


A video below shows the performance of the eye blink setup. 

Eye Blink Video



A flow chart of the algorithm is shown below - 




This project was awarded the "Innovation Award"  by the General Electric Company. 


Also, it was awarded a "Gold Medal" in the Annual International Project Presentation Competition held at BITS, Pilani. 


The Eye Blink Rate was calculated using IR sensors: closed eyes reflect the incoming IR radiation, while open eyes absorb it. 


The Eye Blink Time was also calculated using the IR sensors, by measuring the time for which the eyes remained closed. 
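A minimal sketch of how both quantities can be derived from a single IR trace, assuming the arrangement above where a closed eye reflects more IR (the threshold and sampling rate are per-setup calibration values):

def blink_stats(ir_samples, fs, threshold):
    # Count blinks and measure blink durations from an IR reflectance trace
    # sampled at fs Hz; a sample above the threshold is read as "eye closed".
    blinks, durations = 0, []
    closed_since = None
    for i, v in enumerate(ir_samples):
        if v > threshold and closed_since is None:
            closed_since = i                           # eye just closed
        elif v <= threshold and closed_since is not None:
            blinks += 1
            durations.append((i - closed_since) / fs)  # blink time in seconds
            closed_since = None
    return blinks, durations

# Eye blink rate (blinks per minute) = blinks / (len(ir_samples) / fs) * 60.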


Gaze direction was calculated using image processing algorithms and was used to alert the driver in case of an abnormal gaze direction held for a prolonged time. Algorithms were also included to detect head drops, both vertical and sideways. 


Performance parameters such as the steering wheel angle and the pressure on the brake and accelerator pedals were used to characterize the driver's state. It was observed that under tension (reduced consciousness) the peak pressure on the brake pedal was higher than during normal conditions, while the pressure on the accelerator pedal showed lower amplitude. 

The pressure on the brake and accelerator pedals was measured using low-cost potentiometers and a spring system, in which pressure on the springs rotated the potentiometers. 

The steering wheel angle was also calculated using potentiometers and a mechanical setup. 



Emotional activity could also be used for predicting human behavior, since strong impulses are received when a person is under stress. The two parameters that can be used are heart rate and skin conductance, measured with ECG and skin-conductance (GSR) electrodes respectively; these could not be interfaced with the device due to a lack of resources. 


The fuzzy filter puts all these parameters on a scale of 0-100, where 100 denotes maximum drowsiness. These values form the training set (human-adaptive) and are then compared with the real-time input. The deviation from the training-set values denotes drowsiness, and a deviation above the threshold level is treated as dangerous. 
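A minimal sketch of this scaling-and-deviation stage, as a hypothetical stand-in for the ANFIS decision logic (the parameter names, weights, and the threshold of 60 are illustrative assumptions):

def normalize(value, lo, hi):
    # Clamp a raw sensor reading and map it onto the 0-100 scale,
    # where 100 denotes maximum drowsiness; lo and hi come from calibration.
    value = min(max(value, lo), hi)
    return 100.0 * (value - lo) / (hi - lo)

def drowsiness(readings, baseline, weights, threshold=60.0):
    # Weighted deviation of the current normalized readings from the
    # driver's learned (human-adaptive) baseline.
    score = sum(w * abs(readings[k] - baseline[k])
                for k, w in weights.items()) / sum(weights.values())
    return score, score > threshold

# Example, with all values already on the 0-100 scale:
baseline = {"blink_rate": 20.0, "blink_time": 15.0, "steering": 25.0}
weights = {"blink_rate": 2.0, "blink_time": 2.0, "steering": 1.0}
readings = {"blink_rate": 95.0, "blink_time": 85.0, "steering": 60.0}
print(drowsiness(readings, baseline, weights))  # -> (65.0, True): raise the alarm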


Thought-Processor – A non-invasive method for the paralyzed to communicate


Self Initiated Project. 

An image processing and emotional analysis based real-time communication medium for patients suffering from locked-in syndrome or complete/partial paralysis. This project assists such individuals in communicating their needs in a simple and accurate manner. The image processing algorithm obtains the patient's line of sight, while a skin conductance circuit constantly measures his/her emotional activity level. The project enables the patient to select from a list of basic needs displayed on the screen. He/she can also type words to communicate, at a rate of 30-40 letters per minute. 


The system was tested on a fairly large sample set (of healthy humans) and showed a promising accuracy level, exceeding 99%. 


The computer screen developed is divided into 9 regions: 



The 9 regions of the computer screen


The user selects a region using his/her eyes, which are tracked using image processing algorithms. As soon as the user moves his/her eyes to the target region, we get a sharp impulse. This impulse is captured and used to take actions. The impulse is shown below: 



Impulse
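As an illustration of the region-selection step (a hypothetical sketch; the project's actual gaze-estimation algorithm is not detailed here), an estimated gaze point can be mapped to one of the 9 regions of a 3 x 3 grid:

def region_from_gaze(x, y, screen_w, screen_h):
    # Map a gaze point (in pixels) to one of the 9 screen regions,
    # numbered 1..9 left-to-right, top-to-bottom.
    col = min(int(3 * x / screen_w), 2)
    row = min(int(3 * y / screen_h), 2)
    return 3 * row + col + 1

print(region_from_gaze(1600, 900, 1920, 1080))  # bottom-right -> region 9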

The skin conductance circuit shown below is used to measure emotional activity:

Skin Conductance Circuit

Thus, using all these modules, we make a decision about what the user wants and thinks. 

The USP of the device is that it is very cheap: only $18. 



Virtual Air Guitar


Self Initiated Project. 

This project was awarded the Silver Medal in Apogee'10 - The Annual International Technical Festival of BITS Pilani, India. 

Virtual Air Guitar is a system on a chip that replaces the guitar with "hand gloves" and an electronic circuit, where signals are sent to electro-acoustic transducers to produce sound. Movements of the fingers in the air produce the same sound as a guitar. 

The design of the Virtual Air Guitar (VAG) is shown below, as a schematic diagram. 

   
Right Hand (Top & Bottom)                               Left Hand (Top & Bottom)

The boxes denote the position of the circuit boards. The mechanical system which determines the fret positioning is shown below in the diagram - 

Mechanical system joining both hands (B & C w.r.t. the palm of the left and the right hand).

The device developed is just a prototype, a concept implementation. 

The movement of fingers on the "Left Hand" determines the choice of notes (or chords), whereas the movement of fingers on the "Right Hand" chooses the string to be plucked. At the same time, the choice of "Frets" is determined by the distance between the two hands (as on a real guitar). 

To determine the "Strumming Pattern", an accelerometer is interfaced on the Right Hand; its acceleration values are mapped to the amplitude of the sound waves produced by the transducers. The swiftness with which the Right Hand moves determines the frequency of the sound produced. (A sketch of this mapping follows the sensor list below.)
 
The sensors used for the experiment are: 
  • LDRs (Light Dependent Resistors) on the fingers, used for the choice of strings to be plucked.
  • Accelerometers, for determining the strumming pattern and the amplitude of the sound produced.
  • A wheel-pulley assembly, which gives the distance between the hands, used to determine the fret chosen for the chords.
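As a hedged sketch of how these readings could combine into a note (the distance-to-fret mapping, scaling constants, and thresholds below are illustrative assumptions; the equal-temperament relation itself is standard):

# Standard-tuning open-string frequencies in Hz, low E to high E.
OPEN_STRINGS = [82.41, 110.00, 146.83, 196.00, 246.94, 329.63]

def note_frequency(string_idx, fret):
    # Equal temperament: each fret raises the pitch by one semitone (x 2^(1/12)).
    return OPEN_STRINGS[string_idx] * 2 ** (fret / 12.0)

def fret_from_distance(hand_distance_cm, nut_distance_cm=60.0, step_cm=3.0):
    # Hypothetical mapping of the wheel-pulley reading to a fret number:
    # the closer the hands, the higher the fret.
    return max(0, round((nut_distance_cm - hand_distance_cm) / step_cm))

def amplitude_from_accel(accel_g, full_scale_g=3.0):
    # Map strumming acceleration to an output amplitude in [0, 1].
    return min(abs(accel_g) / full_scale_g, 1.0)

# String 2 (D) at the distance-derived fret, strummed at 1.8 g:
fret = fret_from_distance(51.0)                            # -> fret 3
print(note_frequency(2, fret), amplitude_from_accel(1.8))  # ~174.6 Hz, 0.6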
This same concept is easily transferable and can be applied to any musical instrument which can be played virtually and in the air, with mere hand movements. 

Preoccupo Consulo Lubricus (PCL)


Guide: Himanshu Dutt Sharma, Scientist, Government of India, Central Electronics Engineering Research Institute, Pilani.  

I developed an intelligent system to predict the possibility of slipping in a dynamically changing environment. An accelerometer was used to predict the possibility of skidding, with physical-mechanics equations running in the back end. The predicted decision was given to the driver, alerting him before skidding and thus giving him more time to react to such a situation.


Four kinds of situations were developed:

  • Skidding on Flat Roads
  • Skidding on Banked Roads
  • Toppling on Flat Roads, &
  • Toppling on Banked Roads
The final equations used are presented below: 


Equations to Prevent Skidding
 


Equations to Prevent Toppling
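The original equation images are not reproduced on this page. For reference, the standard circular-motion limits that such a system would plausibly evaluate are sketched below in LaTeX, with \mu the tire-road friction coefficient, g gravity, r the turn radius, \theta the bank angle, d the track width, and h the height of the centre of mass:

% Skidding limits (flat and banked turns):
v_{skid}^{flat}   \le \sqrt{\mu g r}
v_{skid}^{banked} \le \sqrt{ g r \, \frac{\tan\theta + \mu}{1 - \mu \tan\theta} }

% Toppling limits (flat and banked turns):
v_{top}^{flat}    \le \sqrt{ \frac{g r d}{2h} }
v_{top}^{banked}  \le \sqrt{ g r \, \frac{d/(2h) + \tan\theta}{1 - (d/(2h)) \tan\theta} }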

After these equations were developed, mathematical modeling was done to predict the deviation of values from normal values; a Student-t distribution was used to calculate the average deviation. 

Now, if 

t_Stop > t_SSRT, 

where t_Stop is the time to stop and t_SSRT is the time to respond to a stop signal (the stop-signal reaction time), 

then only an external source can stop the accident. Whereas within limits, i.e. t_Stop < t_SSRT, accidents can be prevented by alerting the driver himself. 


Note that t_SSRT is calculated using an experiment in which the user is shown a page with a signal: if the signal is red, the user has to press the left arrow key on the computer, and if it is green, the right arrow key. 
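A minimal sketch of one trial of this experiment (console input() stands in for real arrow-key capture, which is platform-specific):

import random
import time

def ssrt_trial():
    # Show red or green, record which "arrow" was chosen and how fast.
    signal = random.choice(["red", "green"])
    expected = "left" if signal == "red" else "right"
    t0 = time.perf_counter()
    answer = input(f"Signal: {signal}. Type left/right: ").strip().lower()
    reaction_time = time.perf_counter() - t0
    return reaction_time, answer == expected

# t_SSRT can then be estimated from the reaction times over many trials.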


The response time is measured and plotted against the Eye Blink Rate (EBR). (Please contact for more details.) 


The EBR was calculated using the same technique as described in the Driver Drowsiness Detection System (D3S) project above. 

Device Controlling System


Self Initiated Project. 


Designed a system on a chip that can be used to control devices, in industry or at home, that are inaccessible to humans. It can also be used by elderly people to control all home appliances from one single device, from wherever they are sitting. Data communication is done using RF modules.
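The RF protocol itself is not documented on this page; as a hypothetical sketch, commands could be framed as fixed-width packets with a checksum before being handed to the RF module:

def encode_command(device_id, action):
    # Hypothetical 4-byte frame for the RF link: start byte, device id,
    # action code, and a simple additive checksum.
    frame = bytes([0xAA, device_id & 0xFF, action & 0xFF])
    return frame + bytes([sum(frame) & 0xFF])

def decode_command(frame):
    # Validate and unpack a received frame; returns None on a bad frame.
    if len(frame) != 4 or frame[0] != 0xAA:
        return None
    if sum(frame[:3]) & 0xFF != frame[3]:
        return None
    return frame[1], frame[2]  # (device id, action code)

print(decode_command(encode_command(device_id=5, action=1)))  # -> (5, 1)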


The USP of the device is its low cost: only $8. 


The device works reliably within a range of 50 m, and was awarded the Bronze Medal, for its ease of use and low cost, in the Communication & Networks category at APOGEE'10 - The Annual International Technical Festival of BITS, Pilani. 
