I've been using Eclipse for all development for a long time now, and until some time ago I was also using SourceJammer as my versioning system of choice. So, I wrote a SourceJammer plugin for Eclipse: for lack of a better name, I call it KSJEclipse. Now I don't need to leave the Eclipse IDE to perform most versioning actions: I can check out and check in files, and see the version status of all files in my projects. It makes my life really easy -- one less program to leave open, and with file status instantly visible, development is faster.
The binaries, documentation and source are available at
I have been using the Eclipse IDE for all development for a long time now: Java, C++, as well as UIs. Well, what can I say, I find it much better than NetBeans. First of all, the IDE is very appealing. Secondly, it's extremely powerful and fast. So now, Linux or Windows, Eclipse it is :-)
On Linux, I like to use XEmacs and Vi. I used to carry out C/C++ development and LaTeX typesetting in XEmacs, and all Java development in NetBeans. On Windows, I also like to use EditPlus2, which is an amazing editor, customizable for nearly every language. I use Windows for all my work in Adobe Photoshop, Macromedia Dreamweaver, Macromedia Flash, Visual Basic development, and some other Windows-only tools. Other useful tools are PuTTY for SSH and PSFTP for secure FTP.
The first program I wrote was in GWBASIC. Yeah, my relationship with programming goes back a really long way. I began learning how to program when I was 12 years old, and had built a considerable reputation for myself by the time I was 15.
When I was 13, I enrolled in a summer workshop to learn dBase III+ (the hottest database at the time). I was the odd one out, as the rest of the class was composed of students 25 and older. But it was very interesting, and my first experience with databases was a good one.
My final project in Computers for the ICSE board examination was written entirely in GWBASIC and was 1.5 MB in size, which was unheard of for school projects in those days. I had written DOS Tutor -- a program to teach DOS to the user. The project had some heavy-duty pixel and DOS window color manipulation to create highly animated graphics, used a primitive database (which I wrote, entirely in GWBASIC) for a glossary, and included several menus, lots of exercises for the user, and even stored the user's data on the file system (my first experience with sessions). That glossary, I feel, is one of the coolest pieces of code I have written: entirely browsable using the arrow keys. Select a letter and hit Enter, and the available words appear in a window on the right-hand side of the screen; hit Tab to move to that window, hit Enter on a word, and you can see its definition! This kind of interface is very common today, but it was not at that time. I received full marks for the project and graduated top of the class, securing 95% in the nation-wide examination.
After GWBASIC, I worked briefly with QBasic but didn't get enough time to learn it. Then I stepped into the world of C/C++, and it was a new world! Object-oriented programming, an awesome graphics library, control functions for the keyboard, and so much more... I was in heaven! I had access to Borland's Turbo C++ at the time and wrote some amazing programs: applications with nifty graphics, database access, some simple games, system manipulation, file management and much more. I found C and C++ to be so powerful that I almost never felt the need to learn another OOP language. But then I discovered the sweet world of Java...
I would always refuse to learn Java, because I considered the coding to be bloated and the programs to be slow. True, it was supposed to be platform-independent, but as a developer I did not think about how the programs I wrote would be used; I was only concerned with enjoying the development experience. And I considered myself an expert in C/C++, so why learn another OOP language? Well, I had to learn it: I was assigned a project which was to be written in Java and include an interface written in Swing. So I worked with Java, and I liked it. In the real world, portability is a major issue, and Java solves it. Moreover, the sheer extent of the libraries available in the Java framework makes a lot of tasks very easy. Now, with skinnable look-and-feel decorations for Swing, with both Eclipse and NetBeans providing rich client application platforms, and with Eclipse's SWT, Java applications can look good too.
But C/C++ and Java each have their own niche. While C code is definitely amazingly fast, Java code is portable. It's a cinch to write embedded software in C, not so in Java. But right now, I enjoy both equally :-)
I have always had an interest in AI, and so I have given my share of attention to Prolog. The language didn't really appeal to me at first, because I couldn't see how it would be applicable in the real world when it couldn't even provide a decent interface to the user. But I did get to be pretty good at programming in it. Then I didn't really work with it for some time. However, I recently found GNU Prolog, a Prolog compiler that allows socket connections to Java and C/C++ programs. So I can actually use the AI power of Prolog and combine it with the power of object-oriented languages. Having studied Prolog twice (undergrad and master's), I am rapidly picking it up once again, and am enjoying the experience that this powerful combination provides.
During my sophomore year (2000), I developed an interest in Visual Basic -- the ease of designing powerful applications with full-fledged interfaces appealed to me. I developed several applications, the most significant of which are listed below:
1. Sleep Program -- I wanted to be able to sleep to music. This program automatically shut down my computer after a specified interval.
2. Library Management System -- I submitted this application as my final project in junior year. The name explains it all. It uses an Oracle database.
3. Buddy -- This is one of my masterpieces. I studied Microsoft's Speech API and MS Agent. Buddy displays an animated character on your screen and responds to voice commands to control your computer. When it starts up, it scans your computer for all installed programs. It employs Windows API calls to control the computer so that you can open any program just by saying its name. You can minimize/maximize/close windows, shut down or log off your computer, control the screensaver, and so much more.
4. createPLS -- I run an MP3 server so that I can listen to my music no matter where I am. The server has a PHP/HTML web interface, so to play a song I just have to drag its link into Winamp on Windows or XMMS on Linux. This process of dragging each and every song one at a time was getting to be a real pain. To solve the problem, I developed this application, which scans a specified directory tree for MP3 files and generates a playlist file in every directory, named after the directory it is in. It can also automatically delete all PLS files in any tree. Now all I do is drag and drop a PLS file into Winamp/XMMS, and all the songs automatically land in my playlist :-) This program can also be executed
SleepProgram, LMS and Buddy were written for Windows 98 and have not been tested on other versions of Windows. createPLS was developed on Windows XP but should work on all versions of Windows, because it does not make any platform-specific API calls.
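The playlist files createPLS writes follow the standard PLS format. As a rough illustration of the formatting step (a C sketch of my own, not the original Visual Basic code -- the function name `write_pls` and the buffer-based interface are invented for the example):

```c
#include <stdio.h>
#include <stddef.h>
#include <string.h>

/* Format a Winamp/XMMS-compatible .pls playlist into `out` for a list of
   track paths. Returns the number of characters written. Illustrative
   sketch only; assumes `out` is large enough to hold the playlist. */
int write_pls(char *out, size_t cap, const char *tracks[], int n) {
    int len = snprintf(out, cap, "[playlist]\n");
    for (int i = 0; i < n; i++)
        len += snprintf(out + len, cap - len, "File%d=%s\n", i + 1, tracks[i]);
    len += snprintf(out + len, cap - len, "NumberOfEntries=%d\nVersion=2\n", n);
    return len;
}
```

A directory walker would call this once per directory, naming the output file after the directory itself.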
In 2002, I enrolled at Texas Tech University for a Master's degree in Computer Science, majoring in AI Robotics. Dr. Larry Pyeatt agreed to be my advisor, and I got a small office in the AI Robotics Lab. As an RA in a robotics lab, I had to work on, you guessed it, robots!
There were several projects going on at the same time:
1. Dead Reckoning & Time-Based Navigation (Adjudged top of the class)
2. Line Follower (Adjudged top of the class)
3. Mapping & Obstacle Avoidance Wandering (A Grade)
A Water Management System was to be developed as a joint project between the Departments of Civil Engineering and Computer Science. As an RA, I inherited the remainder of this project. The control system was to take data from pH, pressure and ORP sensors and insert the readings into an Oracle database. The system was also to provide a portable application and a web-accessible interface to monitor and control it. The system was developed in C++, with the user interfaces written in Java/Swing; the software interfaces with the physical sensors using CORBA objects. I developed the Java/Swing interface, making good use of JFreeChart and JFreeReport. After a couple of prototypes I was able to really improve the interface, displaying a lot of data in minimal screen space. I also developed the PHP/Oracle web application so that users can view the data from anywhere in the world, requiring only an Internet connection.
I have studied this course twice: once as a sophomore, and once during my Master's. During my Bachelor's the course was all about theory, but in graduate school it was all about class presentations and projects -- the theoretical concepts we learned actually had to be implemented.
The course involved writing different OS modules. The operating system was to be based on the Linux OS and to be tested on Simics. The different modules implemented during the coursework were:
Kernel: The core of the OS was a simplified kernel capable of interpreting a few internal commands and loading external ELF-compiled binaries. A few basic system calls were incorporated into this kernel.
File System: The file system was built from scratch to implement the ext3 file system. This involved writing block device drivers to handle access to the file system. Inode/zone manipulation, as well as directory management, was incorporated into this module.
Memory Management: A basic memory management system was also developed, incorporating chunk allocation, implementing malloc() and free() functions, and dividing memory into slabs.
Terminal Emulation: This module provided a character console for the OS to input commands, as well as an output console. It included a parser and a character device driver.
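The chunk-allocation idea from the memory-management module can be sketched as a fixed-size pool allocator threaded through a free list. This is a generic illustration under my own assumptions (a static arena, pointer-sized chunks), not the actual course code:

```c
#include <stddef.h>

#define CHUNK_WORDS 8     /* each chunk holds 8 pointers' worth of memory */
#define NUM_CHUNKS  128

/* Static arena divided into fixed-size chunks; free chunks are linked
   together through their own first word. */
static void *arena[NUM_CHUNKS][CHUNK_WORDS];
static void *free_list = NULL;
static int initialized = 0;

static void pool_init(void) {
    for (int i = 0; i < NUM_CHUNKS; i++) {
        arena[i][0] = free_list;      /* first word of a free chunk = next pointer */
        free_list = arena[i];
    }
    initialized = 1;
}

void *chunk_alloc(void) {
    if (!initialized) pool_init();
    if (free_list == NULL) return NULL;   /* pool exhausted */
    void *chunk = free_list;
    free_list = *(void **)chunk;          /* pop the head of the free list */
    return chunk;
}

void chunk_free(void *chunk) {
    *(void **)chunk = free_list;          /* push back onto the free list */
    free_list = chunk;
}
```

A slab allocator keeps one such pool per object size; malloc() and free() can then dispatch to the pool whose chunk size fits the request.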
There were three class projects in this course at Texas Tech University. A very interesting subject, where I actually applied and implemented what I had learned about distributed computing: threads, marshalling, RPC, UDP sockets, etc. were all used in the class exercises and projects.
1. Client Server Communication -- being the first project, this was a relatively basic construction of servers and clients communicating via sockets. There were three exercises, each at an increased level of difficulty.
2. RPC Communication -- I developed a math client/server architecture which accepts messages containing an operator and its arguments. The server processes the instruction and returns the result message to the client. This project employs Sun RPC: arithmetic expressions typed at the client side generate RPCs to the corresponding procedures on the server.
3. Multiple Servers -- Clients send a math expression message to a dispatcher server, which parses the message and sends the operands to the corresponding add, subtract, multiply, or divide server. The expression is processed on the intended server, and the result is sent back to the dispatcher and then back to the client. This project demonstrated communication between multiple servers and multiple clients through one central server.
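Marshalling in these projects boils down to packing an operation code and its operands into a byte buffer in network byte order, then routing the request on the server side. The wire format below (a 1-byte opcode followed by two 32-bit operands) is purely hypothetical, chosen only to illustrate the idea:

```c
#include <stdint.h>
#include <string.h>
#include <stddef.h>
#include <arpa/inet.h>   /* htonl/ntohl for network byte order */

/* Pack an opcode and two signed 32-bit operands into a 9-byte message. */
size_t pack_request(unsigned char *buf, uint8_t op, int32_t a, int32_t b) {
    uint32_t na = htonl((uint32_t)a), nb = htonl((uint32_t)b);
    buf[0] = op;
    memcpy(buf + 1, &na, 4);
    memcpy(buf + 5, &nb, 4);
    return 9;
}

/* Unpack the message on the server side. */
void unpack_request(const unsigned char *buf, uint8_t *op,
                    int32_t *a, int32_t *b) {
    uint32_t na, nb;
    *op = buf[0];
    memcpy(&na, buf + 1, 4);
    memcpy(&nb, buf + 5, 4);
    *a = (int32_t)ntohl(na);
    *b = (int32_t)ntohl(nb);
}

/* The dispatcher routes the request to the right operation server. */
int32_t dispatch(uint8_t op, int32_t a, int32_t b) {
    switch (op) {
    case 0: return a + b;
    case 1: return a - b;
    case 2: return a * b;
    case 3: return b != 0 ? a / b : 0;
    default: return 0;
    }
}
```

In the actual projects the packed buffer would travel over a UDP socket, or be handled by Sun RPC's XDR layer, which automates exactly this packing step.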
Reinforcement Learning is that branch of Artificial Intelligence where the agent learns with experience, and with experience only. No plans are given; there are no explicitly defined constraints or facts. It is a "computational approach to learning from interaction". The key feature of reinforcement learning is that an active decision-making agent works towards achieving some reward, which will be available only upon reaching the goal. The agent is not told which actions it should take to reach the goal; instead, it discovers the best actions to take at different states by learning from its mistakes.

The agent monitors its environment at all times, because actions taken by the agent may change the environment and hence affect the actions available to it. The agent learns by assigning values to states and to the actions associated with every state. When the agent reaches a state that it has already learnt about, it can exploit its knowledge of the state space to take the best action. At times the agent takes random actions: this is called exploration. While exploring, the agent learns about those regions of the state space that it would otherwise ignore if it only followed the best actions.

By keeping a good balance between exploitation and exploration, the agent is able to learn the optimal policy for reaching the goal. In all reinforcement learning problems, the agent uses its experience to improve its performance over time.
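The exploitation/exploration balance is easiest to see in tabular Q-learning. The sketch below learns a tiny 6-state corridor where only the rightmost state yields a reward; the environment, the state count, and the learning parameters are all invented for the illustration:

```c
#include <stdlib.h>

#define N_STATES  6    /* corridor 0..5; state 5 is the goal */
#define N_ACTIONS 2    /* 0 = left, 1 = right */

static double Q[N_STATES][N_ACTIONS];

/* One environment step: reward 1 only on reaching the goal state. */
static int step(int s, int a, double *reward, int *done) {
    int s2 = a ? s + 1 : s - 1;
    if (s2 < 0) s2 = 0;                 /* left wall */
    *done = (s2 == N_STATES - 1);
    *reward = *done ? 1.0 : 0.0;
    return s2;
}

void q_learn(int episodes, double alpha, double gamma, double epsilon) {
    for (int ep = 0; ep < episodes; ep++) {
        int s = 0, done = 0;
        while (!done) {
            /* epsilon-greedy: explore with probability epsilon, else exploit */
            int a = ((double)rand() / RAND_MAX < epsilon)
                        ? rand() % N_ACTIONS
                        : (Q[s][1] > Q[s][0]);
            double r;
            int s2 = step(s, a, &r, &done);
            double best = Q[s2][0] > Q[s2][1] ? Q[s2][0] : Q[s2][1];
            /* value update: move Q(s,a) toward r + gamma * max_a' Q(s',a') */
            Q[s][a] += alpha * (r + gamma * best - Q[s][a]);
            s = s2;
        }
    }
}
```

After enough episodes, the greedy policy from the start state points right, toward the goal, even though the agent was never told where the goal is.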
This was an amazing class! The projects required implementing the theoretical concepts we learned in real-world applications. Some of the concepts I learned in this course were data compression and coding, image and video indexing and retrieval, and multimedia information systems. A brief description of the projects is given below:
1. Voice over IP -- Interface with the audio device driver and achieve real-time communication by sending audio packets over the Internet.
3. Indexing of Scenes of a Compressed Video Sequence -- The objective of this project is to detect scene changes in a compressed movie and index them, with the output presented as a mosaic of images. The compressed movie sequence is an MPEG-1 file. The program scans the MPEG-1 video sequence and extracts frame data, identifying scene boundaries automatically without decompressing the video data. For each scene, the program selects a picture as its representative image for use in the index. The program then decompresses the chosen pictures and reduces their size to place them in the mosaic. The project returns output in the form of YUV files of the final mosaic image, which is viewed by converting the separate files to a PPM image. The programs were written in C++ and compiled using g++ version 3.2.2 on a Red Hat 9 machine. The project was successfully ported to Solaris.
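The scene-boundary step can be illustrated independently of the MPEG details: compute a per-frame signature in the compressed domain (for instance from DC coefficients or macroblock counts) and flag a boundary wherever consecutive signatures differ by more than a threshold. The signature values and threshold below are invented for the example:

```c
/* Mark a scene boundary at every index i where the absolute difference
   between successive frame signatures exceeds `threshold`.
   Returns the number of boundaries written into `boundaries`. */
int detect_scenes(const int *sig, int n, int threshold,
                  int *boundaries, int max_b) {
    int count = 0;
    for (int i = 1; i < n && count < max_b; i++) {
        int d = sig[i] - sig[i - 1];
        if (d < 0) d = -d;
        if (d > threshold)
            boundaries[count++] = i;   /* frame i starts a new scene */
    }
    return count;
}
```

Each detected boundary then yields one representative frame to decompress and shrink into the mosaic.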
An Artificial Neural Network (ANN) is an information processing paradigm that is inspired by the way biological nervous systems, such as the brain, process information. The information processing system is composed of a large number of highly interconnected processing elements (neurons) working together to solve a specific problem. ANNs, like people, learn by example. An ANN is configured for a specific application, such as pattern recognition or data classification, through a learning process. Neural networks, with their remarkable ability to derive meaning from complicated or imprecise data, can be used to extract patterns and detect trends that are too complex to be noticed by either humans or other computer techniques. A trained neural network can be thought of as an "expert" in the category of information it has been given to analyze.
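Learning by example can be shown with the smallest possible network: a single perceptron. The sketch below trains it on the AND function using the classic perceptron rule; the struct layout, epoch count, and learning rate are my own choices for illustration:

```c
/* A single neuron with two inputs, a bias, and a step activation. */
typedef struct { double w[2], b; } Perceptron;

int predict(const Perceptron *p, const double x[2]) {
    return p->w[0] * x[0] + p->w[1] * x[1] + p->b > 0.0;
}

/* Perceptron learning rule: nudge the weights by the prediction error. */
void train(Perceptron *p, double X[][2], const int y[],
           int n, int epochs, double lr) {
    for (int e = 0; e < epochs; e++)
        for (int i = 0; i < n; i++) {
            int err = y[i] - predict(p, X[i]);
            p->w[0] += lr * err * X[i][0];
            p->w[1] += lr * err * X[i][1];
            p->b    += lr * err;
        }
}
```

For linearly separable data such as AND, this rule provably converges; real ANNs stack many such units into layers and train them with backpropagation instead.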
When I started working with TinyOS, there were about 40 people using it; now there are hundreds. TinyOS programs are written in nesC, a C-style language with its own compiler. TinyOS was initially developed by the UC Berkeley EECS Department as an event-based operating system for embedded networked sensors. TinyOS runs on motes, which are available from Crossbow Technology Inc. I have written TinyOS programs for the MICA, MICA2, MICA2DOT, and the newer Telos motes.
TinyOS is one of the most interesting technologies that I have had a chance to work with. It gives the power of a computer to a tiny mote. Networking, Sensing, File System, etc., are available on these motes via TinyOS. Although I worked on several projects involving sensing, data collection, networking and serial communication, the two most significant projects that I developed were:
1. TCP -- reliable communication between two motes. As in any packet-transfer network, packets can get lost. This layer allows for reliable communication between two motes, virtually eliminating the chance of data loss.
2. Routing -- The motes can be programmed to multihop packets. Say there are three motes aligned like this: A <---> B <---> C. I want to send a packet from mote A to C, but I cannot send it directly because motes A and C are not within each other's radio range. So A simply broadcasts the message; B receives it, detects that the message is not for itself, and rebroadcasts it. Mote C receives all packets broadcast from anywhere, detects that one of them is addressed to itself, sent by mote A, and uses it instead of rebroadcasting it. However, the problem becomes much more complicated when many motes are involved and you want the data packets to reach the relevant mote in the shortest possible time, without losing data to collisions. I developed a routing protocol which runs on each mote and figures out the best route for each packet, to achieve this goal.
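The per-mote forwarding decision described above can be sketched as a small handler. The header fields (a TTL and a sequence number) and the single-sequence duplicate suppression are simplifying assumptions of mine, not the actual protocol I wrote:

```c
#include <stdint.h>

typedef struct {
    uint16_t src, dest;   /* mote IDs */
    uint8_t  ttl;         /* remaining hop budget */
    uint8_t  seq;         /* sequence number, for duplicate suppression */
} PacketHeader;

enum Action { CONSUME, REBROADCAST, DROP };

/* Decide what a mote should do with a packet it just heard on the radio. */
enum Action handle_packet(uint16_t my_id, PacketHeader *h,
                          uint8_t *last_seq_seen) {
    if (h->dest == my_id)
        return CONSUME;            /* the packet is addressed to this mote */
    if (h->seq == *last_seq_seen)
        return DROP;               /* already forwarded: suppress the duplicate */
    if (h->ttl == 0)
        return DROP;               /* hop budget exhausted */
    h->ttl--;
    *last_seq_seen = h->seq;
    return REBROADCAST;            /* flood onward toward the destination */
}
```

In the A <---> B <---> C example, B rebroadcasts A's packet exactly once and C consumes it; a real routing protocol replaces blind flooding with learned routes and per-neighbor sequence tracking.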
This was my first experience with embedded software development, and it felt good. The feeling that you get when you see that some piece of code that you have written is actually making something happen (in a physical sense), is different from a regular computer program. With embedded systems, YOU are the operating system, telling the tiny pieces of hardware what to do with what parts of which data! Now that's control!
After my experience with TinyOS, I am really into hardware programming, and I was delighted when I was assigned another embedded systems project. I was to write code directly for Texas Instruments' MSP430 microcontrollers. So, no computer in between, no OS -- just code this pin, code that pin, input these bits and output those bits, light this LED and sound that horn! It was amazing! I used TI's MSP430F449 board, which has 60 KB of program flash, 256 bytes of data flash, 2 KB of RAM, a JTAG connector, an LCD, one LED, 4 buttons, a buzzer and some more stuff. Now, the hard part was understanding the processor manual. That took some time. There are no guidelines, just: instruction A does this, pin B is for that, apply a High here and a Low there -- the manual is not for beginners! But I got the hang of it, and the first productive program I wrote for the processor was the LCD library. The programming is C-style but uses native MSP430 mappings.
1. LCD Library
2. Clock Library
3. Buttons Library
To compile and install the code I had written I had to install the mspgcc toolchain for TI's MSP430 MCUs. I also had to install the JTAG library so that I could communicate with the processor for application management and debugging.
Well, I had two MSP430F449 boards, so the next question was: how do I make them talk to each other comfortably so that they can have a lasting relationship? I didn't have pins to spare; they were all being used for other things (sensors, LEDs, etc.). So how do I make the two MCUs communicate reliably and efficiently, using the minimal number of pins? Philips had the answer: the I2C bus.
In the early 1980s, Philips created the I2C bus, a control bus for communication between the various integrated circuits in a system. It is a two-wire bus (one line for data, one for the clock), and the protocol provides three data transfer rates: up to 100 kbps (standard mode), 400 kbps (fast mode) and 3.4 Mbps (high-speed mode). Two simple lines can connect all the ICs in a system, because any I2C device can be connected to a common I2C bus; a Master device on the bus can then communicate with any Slave device. The I2C bus protocol was ideal for this project. However, the MSP430s did NOT have an I2C controller. So now what? Here's what: I got down to writing a complete software implementation of the I2C bus. A longer description of the protocol is given here.
The I2C protocol defines a master-slave relationship between components: the master controls the clock and asks for data from the slave or sends data to the slave. The slave simply responds to the master's requests, and only the master can initiate communication. There are usually many slaves on an I2C bus and only one master. But this project wanted more! Either of the two MSP430s should be able to become the master, taking control of the bus and making the other the slave. It took me two weeks and 3 prototypes, but I did it! The entire application is completely interrupt-driven. Initially both processors are in the slave state. Then one of them reaches a condition where it needs to send or receive data to/from the other processor. This processor becomes the master by dropping the data line while keeping the clock line high; the slave interprets this signal as the start condition of the I2C protocol. Then regular I2C transfers take place: the master sends some bits, raising and lowering the data and clock lines as required, and the slave responds, raising and lowering the data line to send or receive data (according to the protocol). At the end of the transfers, the master raises the data line while the clock is high, which the slave understands as the stop condition. The master processor then reverts to the slave state. Once again both processors are slaves, and either of them can become the master! A finite-state machine in the program maintains the state changes of the bus and the processors. The program is written entirely in C for the MSP430s and can only be compiled using the mspgcc toolchain.
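The bit-level choreography is easier to see in a simulation than on real pins. The sketch below models the two open-drain lines as variables and lets a "slave" sample SDA on each rising clock edge; it shows the start condition, MSB-first data clocking, and stop condition, but omits acknowledge bits, arbitration, and the interrupt-driven state machine of the real implementation:

```c
#include <stdint.h>

/* Simulated bus lines (1 = released/high, 0 = pulled low). */
static int sda = 1, scl = 1;

/* Slave-side capture of the bits it samples. */
static uint8_t captured;
static int nbits;

static void slave_sample(void) {        /* slave reads SDA while SCL is high */
    captured = (uint8_t)((captured << 1) | (uint8_t)sda);
    nbits++;
}

void i2c_start(void) {                  /* start: SDA falls while SCL is high */
    sda = 0;
    scl = 0;
    captured = 0;
    nbits = 0;
}

void i2c_write_byte(uint8_t byte) {     /* MSB first; SDA changes only while SCL is low */
    for (int i = 7; i >= 0; i--) {
        sda = (byte >> i) & 1;
        scl = 1;                        /* clock high: bit is valid */
        slave_sample();
        scl = 0;                        /* clock low: safe to change SDA */
    }
}

void i2c_stop(void) {                   /* stop: SDA rises while SCL is high */
    sda = 0;
    scl = 1;
    sda = 1;
}
```

On the MSP430 the same sequence is produced by toggling two GPIO pins, with the slave's sampling driven by a pin-change interrupt rather than a function call.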
It was amazing to see one of the processors display a message on its LCD and then, when you hit a button, the same message appear on the other processor's LCD -- with the two processors connected by only two wires. Since I wrote the clock library to be completely configurable, I can really slow down the I2C communication and achieve data transfers between processors even when the distance between them is large, and even when there are many devices on the bus. Since my application conforms to the exact I2C standard defined by Philips, it will work beautifully even if a hardware I2C device is connected to the bus.
Over the past few years, I have attained a certain degree of expertise in writing papers in LaTeX; all the papers I have written have been typeset with it. LaTeX is a document preparation system for high-quality typesetting. Since it is not a word processor, the writer does not have to worry about the layout of the document: the writer's job is to get the content right and place it in the proper place in the document, and LaTeX takes care of the rest. Global customization allows the user to apply changes to the entire document just by changing a few lines. While it does have a bit of a learning curve, I have found LaTeX to be a much better program for document preparation (especially if the document has mathematical equations) than any word processor I have used.
JBoss Application Server
Now, you can either write line after line of complex SQL statements in your Java code and access your database via JDBC, or you can map your object classes to database fields in Hibernate association XML files and let Hibernate do the job for you. That's the motivation behind any object-relational mapping system. Hibernate lets you use your class structure to access the database: you define the methods available to the Data Access Objects to pull data from, and push data to, the database. I have used Hibernate extensively; all the web applications I have written use it. The basic structure of the Hibernate part of a web application can be downloaded here. It shows the mapping of a sample "users" table, the DAO methods, and the hibernate.xml file.
Other technologies that I have worked on are Maven, Pluto Portal, uPortal, Struts Portals Bridge, PHP and more...
My interest in interfaces led me to take the initiative and develop the official website for the school where I was getting my Bachelor's degree in Computer Science (it's in India, and a lot of institutes did not have websites at that time). Highly praised for that job, I was even more motivated to improve my skills. I came to the US and got a job developing websites for the Department of Petroleum Engineering at Texas Tech. I also designed the logo for the Rate-A-Raider website.
Concurrent Versioning System (CVS)
I have been using Windows since it was DOS! OK, that's not entirely true, but the first version of Windows was akin to a GUI pasted on top of DOS. I was quite an expert with DOS; with the number of viruses hitting systems in those days, you had to be an expert to survive! Then came Windows, and I have been working with it ever since. Real Windows administration is all about hacking your machine: getting into the registry, finding and changing cryptic settings, locating tools on your Windows machine that Microsoft does not want you to know about unless you are an expert, and so on. I have done my share of the above and am at ease in any Windows environment. While Windows provides simplicity by obscurity, it is one of the simplest OSs to use if you don't want to do much.
I was the Network Assistant at the Institute of Technology & Management from 5/2001 to 5/2002 managing networked computers in the Advanced Computer Lab.
I have been administering Linux systems for a very long time now. I cannot consider myself an expert in managing Linux systems because with Linux, you will never know everything -- there is something new added everyday! But if there's something that needs to be done, I know I can probably take care of it.
Well, over time, I have set up and regularly maintain several servers:
Other technologies that I have worked with include:
I also run VMware on my Windows machine so that I have easy access to Linux even when I am not at a Linux box.
The past few years have seen tremendous growth in the research areas of mobile robotics. While growth has been fast and several problems have been splendidly solved, most mobile roboticists are faced with two primary challenges: how will the robot gather information about its environment, and how will it know where it is? These two problems are referred to as mapping and localization.
Localization has not yet been attempted using Dynamically Expanding Occupancy Grids and a Centralized Storage System. This research was geared towards implementing Monte Carlo Localization methods (Fox, Burgard, Dellaert, and Thrun 1999; Dellaert, Fox, Burgard, and Thrun; Thrun, Fox, Burgard, and Dellaert 2001; Fox, Thrun, Burgard, and Dellaert 2001) for robots using Dynamically Expanding Occupancy Grids. In doing so, it aimed to provide a complete mapping and localization implementation for robots using dynamically expanding occupancy grids and a centralized storage system.