Friday, December 30, 2011

Compressed Sensing: An Overview #2 (Group Testing)

Here we will consider a situation where N soldiers have to be tested for a particular disease using a sensitive but expensive blood test. If blood samples from different soldiers are pooled together, the test is still sensitive enough to return a positive if any soldier in the pool is infected. If the number of infected soldiers K is not known to be small, then M = N individual tests must be done. But if K = 1 is known a priori, then M = log2(N) pooled tests suffice. Here a set of global measurements is taken, followed by a reasoning algorithm. This was proposed by economists during the Second World War for the screening of soldiers infected with syphilis. The proposal was to pool the blood samples: since few people were likely to be infected with syphilis, this saves tests on average. Though the scheme was never put into practice, discussion of the subject continues to this day!! A similar principle is applied in compressive sensing.
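The K = 1 case above can be sketched as a binary search over pooled tests. This is a minimal toy model, assuming only that the pooled test reports whether any infected sample is present in the pool:

```python
def find_infected(samples, is_positive):
    """Locate the single positive sample by repeatedly testing pooled halves.

    `is_positive(pool)` models the blood test: it returns True if any
    sample in the pool is infected. With exactly one infected sample,
    about log2(N) pooled tests suffice instead of N individual tests.
    """
    lo, hi = 0, len(samples)
    tests = 0
    while hi - lo > 1:
        mid = (lo + hi) // 2
        tests += 1
        if is_positive(samples[lo:mid]):   # infection is in the lower half
            hi = mid
        else:                              # otherwise it must be in the upper half
            lo = mid
    return lo, tests

# Toy model: soldier 42 out of N = 128 is infected.
infected = 42
soldiers = list(range(128))
index, tests = find_infected(soldiers, lambda pool: infected in pool)
print(index, tests)  # 42 found in log2(128) = 7 pooled tests
```

Seven tests instead of 128: exactly the M = log2(N) saving described above.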

The signal is projected onto random signals of equal length, and each projection constitutes one measurement. Viewed through the Fourier series, each random measurement signal is a linear combination of several frequency components, so a single projection extracts information about several frequencies of the original signal at once. With an adequate number of measurements, the signal can be reconstructed using an L1-norm optimization algorithm. The accuracy obtained this way is remarkable, but it hinges on one strict condition: the signal must be sparse in terms of information content. Mathematically, the number of coefficients needed to represent the signal must be small. Luckily most signals in nature satisfy this requirement, which is exactly why signals are compressible.
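As a rough sketch of that recovery step (not the L1-Magic toolbox itself, just the same basis-pursuit idea posed as a linear program via SciPy's generic `linprog` solver, with all problem sizes chosen arbitrarily for illustration):

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)

# Sparse signal: length 50, only 3 nonzero coefficients.
n, m, k = 50, 20, 3
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.normal(size=k)

# Each of the m measurements is a projection onto a random signal (a row of A).
A = rng.normal(size=(m, n))
b = A @ x_true

# Basis pursuit: minimize ||x||_1 subject to A x = b, rewritten as a
# linear program with the split x = u - v, u >= 0, v >= 0.
c = np.ones(2 * n)
A_eq = np.hstack([A, -A])
res = linprog(c, A_eq=A_eq, b_eq=b, bounds=[(0, None)] * (2 * n))
x_rec = res.x[:n] - res.x[n:]

print(np.allclose(x_rec, x_true, atol=1e-4))
```

Only 20 random projections of a length-50 signal, yet the L1 program recovers all 50 coefficients, because only 3 of them are nonzero. That is the sparsity prior at work.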

Thursday, December 29, 2011

Compressed Sensing : An Overview #1

We shall try to build intuition for the problem of compressed sensing through a few analogous examples.

There are N buckets of gold coins. Every coin weighs 2 grams, except in one defective bucket, whose coins weigh only 1.8 grams each. The problem is to identify that particular bucket.

The Nyquist-style approach would be to weigh a coin from each bucket, also called point-wise sampling. Alternatively, we can number the buckets and take coins according to each bucket's number: 1 coin from bucket 1, 2 coins from bucket 2, and so on up to n coins from bucket n.

Thus 10 buckets give us a total of 55 coins. If all the coins weighed 2 grams, the total would be 110 grams. If x is the actual measured weight, the defective bucket number is (110 - x)/(2 - 1.8). Just one measurement, and the bucket number is retrieved. There is a critical assumption behind this solution: that exactly one bucket is defective. This is precisely the 'sparsity prior'. A single linear measurement therefore serves the purpose: one piece of information to retrieve, so one linear combination suffices. More information would call for additional, different linear combinations. This, in fact, is what is called group testing.
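The single weighing can be checked in a few lines (bucket 7 is chosen arbitrarily as the defective one):

```python
# Ten buckets; genuine coins weigh 2 g, coins in the one defective bucket weigh 1.8 g.
# Take k coins from bucket k (1 + 2 + ... + 10 = 55 coins) and weigh them all at once.
defective = 7
buckets = list(range(1, 11))
actual = sum(k * (1.8 if k == defective else 2.0) for k in buckets)
expected = 2.0 * sum(buckets)                 # 110 g if no bucket were defective
found = round((expected - actual) / (2.0 - 1.8))
print(found)  # 7
```

One linear combination of the buckets, one scalar measurement, and the 0.2 g deficit per coin pinpoints the bucket.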

Monday, December 26, 2011

Nuit Blanche: Bilateral Random Projections

The blog below is very helpful for keeping yourself up to date with sparse processing in signal-processing applications.
Nuit Blanche: Bilateral Random Projections: Reminded of it thanks to Sergey, here is the simple but powerful Bilateral Random Projections by Tianyi Zhou and Dacheng Tao.

Sparse Processing: Time to Unlearn Shannon-Nyquist Theorem?

 Signal processing, an ever-dynamic domain, has kept changing its course day after day since its inception. With an increasing number of functions being pushed into sophisticated software algorithms, there is very little room left for circuit-level processing. This demands a high degree of visualization and mathematical thought from budding engineers.

 The technique one inevitably reaches for when sampling natural signals into the discrete domain is the famous Shannon-Nyquist criterion. It assumes the ideal condition of a band-limited signal, which no real-world signal ever satisfies: expanding the frequency domain forces a compression of the time domain and vice versa, and this is the source of the errors that arise when we approximate signals as band-limited. Moreover, although we sample and transform the entire spectrum of a signal, the required information is concentrated in certain hot-spots of the transformed signal. This is the feature exploited by conventional compression techniques.

  "Compressive sampling" has emerged. By using nonlinear recovery algorithms, super-resolved signals and images can be reconstructed from what appears to be highly incomplete data. Compressive sampling shows us how data compression can be implicitly incorporated into the data acquisition process, a gives us a new vantage point for a diverse set of applications including accelerated tomographic imaging, analog-to-digital conversion, and digital photography. 

 The recently discussed sparse technology treats every signal in the universe as sparsely distributed and aims to acquire those sparse elements directly from the signal, thus saving energy, memory, and computational complexity. Interestingly, the main mathematical tool needed to become well versed in sparse processing is solving linear systems of equations. With a mathematical mind and a proper understanding of L1-Magic, the whole range of compressive-sensing problems can be computed. All it requires is a team of capable mathematicians and engineers with a proper eye for nature.

 Optimization through this method has proved much more efficient and faster than conventional approaches. Mathematical foundations for compressive sensing were provided by, among others, the Fields Medalist Terence Tao.

 This arena offers wide scope for research, as various techniques such as inpainting and denoising, which are otherwise complex, can be tackled with something as familiar as the heat diffusion equation.

We shall discuss in much more detail in the upcoming days.

Links that might help you.

Sunday, December 18, 2011

Future of the IT Industry???

 It seems that traditional outsourcing is almost on its death bed. The Indian story of IT outsourcing has nearly lost its sheen. Earlier, global IT service providers helped clients realize cost savings through various outsourcing channels: applications, infrastructure, business process, and offshore product development. The Indian IT industry created a niche for itself in the outsourcing marketplace by pioneering the global delivery model and developing strong differentiators such as process maturity and cost arbitrage. The benefits of outsourcing are obvious, but client dissatisfaction levels appear to be growing along several dimensions, be it quality of service, innovation, speed of response to business changes, or cost savings. In a dynamic world like ours, the only constant is change. The Indian IT industry owes its inflection point to the dot-com bubble burst, the 2008 financial meltdown, cloud-computing innovations, mobility, and the "booming" Indian economy.
 We often tend to look at life in a linear fashion, with the assumption that the past is the best indicator of the future and that the trend under observation will continue to grow (or fall). However, most trends in reality follow an S-Curve – with a slow and steady start, rapid growth, a slowdown, followed by a decline.
 The IT industry of India can be classified into three phases.
  1. Wave 1, which we can trace to the 80s and 90s, clearly established that Indian IT professionals were competent and could be trusted to deliver world-class work. This was the staff augmentation era of the industry, largely serviced through onsite services.
  2. Wave 2, starting off in the mid-90s and currently at its peak, established India as an offshore programming destination. With labor arbitrage as the basic value proposition, Indian companies established large offshore development centers that had competent technical staff, mature CMM processes and world-class infrastructure. While the trigger for Wave 2 was the offshore initiatives by companies like GE, Motorola, Nortel etc., the Y2K bug gave it the necessary momentum. Although things slowed a bit after the dotcom bust, the shrinking IT budgets actually gave an impetus to large Fortune 500 companies to use offshore centers as a mainstream sourcing option.
  3. With rising salaries, the appreciating rupee and recessionary pressures in the US, it is difficult to see the Indian industry continuing to sustain a 40-50% growth rate in the labor arbitrage mode. Hence there is a genuine question whether Indian outsourcing is on the decline.  This is the classic S-Curve in operation.  This calls for the third wave or what is called the "Outsourcing 3.0" according to Kaushik Chatterjee from Wipro Technologies.

 Outsourcing 3.0 or Wave 3.0– which is strategic, value-added and non-linear. Capitalizing on the large pool of technical talent available in India and the free availability of domain experts in the western world, Indian companies need to start making substantial investments in building intellectual property – not necessarily as packaged software, but also as frameworks, components, web services and the like. We need to move up to create solutions that have strategic impact and C-level visibility within client organizations. We need to own significant parts of the transformation initiative budgets and be equipped to convert our CMM advantages to predictable deliveries.
 The next wave could take Indian software services exports beyond the $100 billion mark! Far from being a death knell for the Indian industry, the decline in Wave 2 work is a necessary precondition for the emergence of the third wave. Industry giants like Infosys, Wipro etc. have already taken the leap to emerging cloud solutions and related technologies. The only question is whether the pace of that leap is enough to sustain Indian leadership over the prevailing threats.

  1. "Outsourcing 3.0: Road Ahead for the Indian IT Industry", Kaushik Chatterjee, ITSM 2o11, IPF 

Sunday, November 6, 2011

Towards Automatic Speech Recognition

Automatic speech recognition (ASR) and text-to-speech (TTS) conversion are promising areas of dynamic research. The idea of systems mimicking the vocal tract and recognizing spoken language has been in development for a long time, but the area still seems incomplete and calls for much more refinement. There are many proprietary as well as open-source ventures in these areas, such as Nuance, AT&T Labs, VoxForge, and CMU Sphinx, just a few to name.

One principal knowledge source that we can draw to benefit machine speech recognition for long-term research is in the area of human speech perception, understanding, and cognition. This rich knowledge source has its basis in both psychological and physiological processes in humans. Physiological aspects of human speech perception of most interest include cortical processing in the auditory area as well as in the associated motor area of the brain. One important principle of auditory perception is its modular organization, and recent advances in functional neuro-imaging technologies provide a driving force motivating new studies towards developing integrated knowledge of the modularly organized auditory process in an end-to-end manner. Psychological aspects of human speech perception embody the essential psychoacoustic properties that underlie auditory masking and attention. Such key properties equip human listeners with the remarkable capability of coping with cocktail party effects that no current automatic speech recognition techniques can successfully handle. Intensive studies are needed in order for speech recognition and understanding applications to reach a new level, delivering performance comparable to humans.

Specific issues to be resolved in the study of how the human brain processes spoken language are the way human listeners adapt to non-native accents and the time course over which human listeners re-acquaint themselves with a language known to them. Humans have amazing capabilities to adapt to non-native accents. Current ASR systems are extremely poor in this respect, and improvement can be expected only after we have a sufficient understanding of human speech processing mechanisms.

One specific issue related to human speech perception, which is linked to human speech production, is the temporal span over which speech signals are represented and modeled. One prominent weakness of current HMMs is their handicap in representing long-span temporal dependency in the acoustic feature sequence of speech, which is nevertheless an essential property of speech dynamics in both perception and production. The main cause of this handicap is the conditional independence assumptions inherent in the HMM formalism. The HMM framework also assumes that speech can be described as a sequence of discrete units, usually phonemes. In this symbolic, invariant approach, the focus is on the linguistic/phonetic information, and the incoming speech signal is normalized during pre-processing in order to remove most of the paralinguistic information. However, human speech perception experiments have shown that paralinguistic information plays a crucial role in human speech perception.

Sunday, May 15, 2011

Why not above 4 GHz??????? TERAHERTZ still A DrEaM???

Over the years scientists have been pushing hard on semiconductors (microprocessors and memory) to increase performance and speed simultaneously. The number of transistors on a single chip has gone up to 731 million in recent Core i7 processors, all the way from 2,300 in Intel's 4004. This has been achieved only through shrinking the transistors, which in turn multiplies the problems the scientists face.

When one transistor works in conjunction with a million others, the chip can deal with huge amounts of data at high speed. This is possible due to the properties of silicon, which of course fail at very small sizes, owing to issues of power consumption, heating, etc. George Hanson states in his book "Fundamentals of Nanoelectronics": "With the current technologies, if the rate of increasing device density were to continue, microprocessors would soon be producing more heat per square centimeter than the surface of the sun". Thus scientists turn to solutions other than silicon, moving away entirely from CMOS technology. Graphene throws some light here due to its peculiar chemistry and higher electron mobility. Though graphene transistors can be produced commercially, digital signals cannot yet be processed with them. This is due to graphene's zero bandgap, which prevents a transistor from being switched fully off.

Processor clock frequency seems to have hit a ceiling even though we have hyper-threading and multiple cores. A possible and evident reason is that transistors cannot get any faster. We know that the stronger the electric field, the faster a transistor switches. But the gate isolator underneath the gate on a 45 nm process is now just about 0.9 nm thick, little more than a single silicon dioxide molecule. As CPU architect Jonathan Kang puts it: "The simpler matter is that the easy scaling we've had in the past, where every shrink in transistor size would lead to faster transistors, is over". This calls for a shift towards newer materials.

Another hurdle to increasing speed is the surrounding ecosystem, an inevitable bottleneck, whether within the component itself or in the surrounding components of the processing chain; even the bus width can be a constraint. Research in these areas is therefore essential. It is high time: the technology we aim for today is the exascale supercomputer, which can perform a MILLION TRILLION calculations a second. Though the figure sounds as fictional as a GAZILLION, let us remember it is still very, very far from the unmeasurable, which is INFINITY!!

Thus Hardware developers really have a future ahead!! Why don't you be one among them?  

Wednesday, April 27, 2011

Reinforcement learning

Reinforcement learning is learning what to do--how to map situations to actions--so as to maximize a numerical reward signal. The learner is not told which actions to take, as in most forms of machine learning, but instead must discover which actions yield the most reward by trying them. In the most interesting and challenging cases, actions may affect not only the immediate reward but also the next situation and, through that, all subsequent rewards. These two characteristics--trial-and-error search and delayed reward--are the two most important distinguishing features of reinforcement learning.
Reinforcement learning is defined not by characterizing learning methods, but by characterizing a learning problem. Any method that is well suited to solving that problem, we consider to be a reinforcement learning method. The basic idea is simply to capture the most important aspects of the real problem facing a learning agent interacting with its environment to achieve a goal. Clearly, such an agent must be able to sense the state of the environment to some extent and must be able to take actions that affect the state. The agent also must have a goal or goals relating to the state of the environment. The formulation is intended to include just these three aspects--sensation, action, and goal--in their simplest possible forms without trivializing any of them.
Reinforcement learning is different from supervised learning, the kind of learning studied in most current research in machine learning, statistical pattern recognition, and artificial neural networks. Supervised learning is learning from examples provided by a knowledgable external supervisor. This is an important kind of learning, but alone it is not adequate for learning from interaction. In interactive problems it is often impractical to obtain examples of desired behavior that are both correct and representative of all the situations in which the agent has to act. In uncharted territory--where one would expect learning to be most beneficial--an agent must be able to learn from its own experience.
One of the challenges that arise in reinforcement learning and not in other kinds of learning is the trade-off between exploration and exploitation. To obtain a lot of reward, a reinforcement learning agent must prefer actions that it has tried in the past and found to be effective in producing reward. But to discover such actions, it has to try actions that it has not selected before. The agent has to exploit what it already knows in order to obtain reward, but it also has to explore in order to make better action selections in the future. The dilemma is that neither exploration nor exploitation can be pursued exclusively without failing at the task. The agent must try a variety of actions and progressively favor those that appear to be best. On a stochastic task, each action must be tried many times to gain a reliable estimate of its expected reward. The exploration-exploitation dilemma has been intensively studied by mathematicians for many decades. The entire issue of balancing exploration and exploitation does not even arise in supervised learning as it is usually defined.
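The classic toy setting for this dilemma is the multi-armed bandit. Here is a minimal epsilon-greedy sketch, with arm means and parameters chosen arbitrarily for illustration:

```python
import random

def epsilon_greedy_bandit(true_means, steps=5000, epsilon=0.1, seed=0):
    """Balance exploration and exploitation on a multi-armed bandit.

    With probability epsilon, pull a random arm (explore); otherwise pull
    the arm with the best reward estimate so far (exploit). Estimates are
    incremental sample averages, so each arm's value estimate becomes more
    reliable the more often it is tried.
    """
    rng = random.Random(seed)
    n_arms = len(true_means)
    counts = [0] * n_arms
    estimates = [0.0] * n_arms
    for _ in range(steps):
        if rng.random() < epsilon:
            arm = rng.randrange(n_arms)             # explore
        else:
            arm = estimates.index(max(estimates))   # exploit
        reward = rng.gauss(true_means[arm], 1.0)    # noisy, stochastic reward
        counts[arm] += 1
        estimates[arm] += (reward - estimates[arm]) / counts[arm]
    return estimates.index(max(estimates))

best = epsilon_greedy_bandit([0.2, 0.5, 0.9, 0.1])
print(best)  # the agent settles on arm 2, the highest-mean arm
```

Pure exploitation could lock onto a mediocre arm forever; pure exploration would never cash in on what has been learned. The small epsilon keeps both in play.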
Another key feature of reinforcement learning is that it explicitly considers the whole problem of a goal-directed agent interacting with an uncertain environment. This is in contrast with many approaches that consider subproblems without addressing how they might fit into a larger picture. For example, we have mentioned that much of machine learning research is concerned with supervised learning without explicitly specifying how such an ability would finally be useful. Other researchers have developed theories of planning with general goals, but without considering planning's role in real-time decision-making, or the question of where the predictive models necessary for planning would come from. Although these approaches have yielded many useful results, their focus on isolated subproblems is a significant limitation.
Reinforcement learning takes the opposite tack, starting with a complete, interactive, goal-seeking agent. All reinforcement learning agents have explicit goals, can sense aspects of their environments, and can choose actions to influence their environments. Moreover, it is usually assumed from the beginning that the agent has to operate despite significant uncertainty about the environment it faces. When reinforcement learning involves planning, it has to address the interplay between planning and real-time action selection, as well as the question of how environmental models are acquired and improved. When reinforcement learning involves supervised learning, it does so for specific reasons that determine which capabilities are critical and which are not. For learning research to make progress, important subproblems have to be isolated and studied, but they should be subproblems that play clear roles in complete, interactive, goal-seeking agents, even if all the details of the complete agent cannot yet be filled in.
One of the larger trends of which reinforcement learning is a part is that toward greater contact between artificial intelligence and other engineering disciplines. Not all that long ago, artificial intelligence was viewed as almost entirely separate from control theory and statistics. It had to do with logic and symbols, not numbers. Artificial intelligence was large LISP programs, not linear algebra, differential equations, or statistics. Over the last decades this view has gradually eroded. Modern artificial intelligence researchers accept statistical and control algorithms, for example, as relevant competing methods or simply as tools of their trade. The previously ignored areas lying between artificial intelligence and conventional engineering are now among the most active, including new fields such as neural networks, intelligent control, and our topic, reinforcement learning. In reinforcement learning we extend ideas from optimal control theory and stochastic approximation to address the broader and more ambitious goals of artificial intelligence.

Feed Forward Control

 We are all familiar with the feedback control used in a closed-loop controller. This one is something new, which I came across today in a lecture at MESCE by Prof. C. Chandrsekhar, IITM. Combined feedforward-plus-feedback control can significantly improve performance over simple feedback control whenever there is a major disturbance that can be measured before it affects the process output. In the most ideal situation, feedforward control can entirely eliminate the effect of the measured disturbance on the process output. Even when there are modeling errors, feedforward control can often reduce the effect of the measured disturbance on the output better than feedback control alone. However, the decision whether or not to use feedforward control depends on whether the degree of improvement in the response to the measured disturbance justifies the added costs of implementation and maintenance. The economic benefits of feedforward control can come from lower operating costs and/or increased salability of the product due to its more consistent quality. Feedforward control is always used along with feedback control, because a feedback control system is still required to track setpoint changes and to suppress the unmeasured disturbances that are always present in any real process. For example, a continuously stirred tank reactor may be under feedback temperature control while feedforward control rapidly suppresses feed flow-rate disturbances.
[Figure: Control Systems — a) open-loop, b) feed-forward, c) closed-loop]

Feedforward control addresses disturbances that can be measured but not controlled. The disturbance is measured and fed forward to an earlier part of the control loop, so that corrective action can be initiated before the disturbance has an adverse effect on the system response.
Feedback control, by contrast, is typically used to regulate a variable (or variables) in a control-system design subject to time-varying disturbances and/or operating parameters. It is also used when the accuracy afforded by feedforward control alone is not adequate to meet the application's performance specifications.
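A toy simulation makes the benefit concrete. It assumes a simple first-order discrete process with a proportional feedback controller, all gains chosen arbitrarily, and a perfect disturbance model for the feedforward term:

```python
def simulate(use_feedforward, steps=200):
    """First-order process y[k+1] = 0.9*y[k] + 0.1*(u[k] + d[k]).

    A step disturbance d = 1 hits at k = 50. Feedback is a simple
    proportional controller on the setpoint error (setpoint 0); the
    feedforward term u_ff = -d cancels the measured disturbance at the
    plant input before it can move the output at all.
    """
    a, b, kp = 0.9, 0.1, 2.0
    y, worst = 0.0, 0.0
    for k in range(steps):
        d = 1.0 if k >= 50 else 0.0
        u = -kp * y                    # feedback acts only after y deviates
        if use_feedforward:
            u += -d                    # measured-disturbance cancellation
        y = a * y + b * (u + d)
        worst = max(worst, abs(y))
    return worst

fb_only = simulate(False)
fb_plus_ff = simulate(True)
print(fb_only > fb_plus_ff)  # True: adding feedforward shrinks the deviation
```

With a perfect model, the feedforward term removes the disturbance before the output ever moves, while feedback alone must first let an error develop and then react to it.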

Sunday, April 24, 2011

Wireless Energy Transmission

The team at MESCE is about to start work on the long-awaited dream of WIRELESS ENERGY TRANSMISSION. The idea was put forward by Mr. Nithin Kamal. It would be the technology of the century once we can use it viably: it saves a great deal of energy and copper, and prevents electrocution from malfunctioning cables. We look forward to bidding farewell to cables.

  Do you think this is impossible? The wonderful properties of electromagnetic induction and ionization of air give us a small hope. If we can transmit radio waves through modulation, can we dream of transmitting a 60 Hz, 220 V wave over a viable distance?

The electrodynamic induction wireless transmission technique is near field over distances up to about one-sixth of the wavelength used. Near field energy itself is non-radiative but some radiative losses do occur. In addition there are usually resistive losses. With electrodynamic induction, electric current flowing through a primary coil creates a magnetic field that acts on a secondary coil producing a current within it. Coupling must be tight in order to achieve high efficiency. As the distance from the primary is increased, more and more of the magnetic field misses the secondary. Even over a relatively short range the inductive coupling is grossly inefficient, wasting much of the transmitted energy.
This action of an electrical transformer is the simplest form of wireless power transmission. The primary and secondary circuits of a transformer are not directly connected. Energy transfer takes place through a process known as mutual induction. Principal functions are stepping the primary voltage either up or down and electrical isolation. Mobile phone and electric toothbrush battery chargers, and electrical power distribution transformers are examples of how this principle is used. Induction cookers use this method. The main drawback to this basic form of wireless transmission is short range. The receiver must be directly adjacent to the transmitter or induction unit in order to efficiently couple with it.
The application of resonance improves the situation somewhat. When resonant coupling is used the transmitter and receiver inductors are tuned to a mutual frequency and the drive current is modified from a sinusoidal to a nonsinusoidal transient waveform. Pulse power transfer occurs over multiple cycles. In this way significant power may be transmitted over a distance of up to a few times the size of the primary coil. Transmitting and receiving coils are usually single layer solenoids or flat spirals with series capacitors, which, in combination, allow the receiving element to be tuned to the transmitter frequency.
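One convenient back-of-the-envelope check: a figure of merit commonly quoted in the resonant wireless-power-transfer literature is U = k·sqrt(Q1·Q2), which bounds the achievable link efficiency. The sketch below uses that textbook expression; the coil quality factors and coupling values are illustrative assumptions, not measurements:

```python
import math

def max_link_efficiency(k, q1, q2):
    """Upper bound on resonant inductive link efficiency.

    Uses the figure of merit U = k * sqrt(Q1 * Q2): efficiency rises with
    tighter coupling (k) and higher coil quality factors, and collapses
    as the coils move apart and k falls.
    """
    u = k * math.sqrt(q1 * q2)
    return u * u / (1.0 + math.sqrt(1.0 + u * u)) ** 2

# The coupling coefficient k drops rapidly with distance; assume Q ~ 100 coils.
for k in (0.5, 0.1, 0.01, 0.001):
    print(f"k = {k:5.3f}  ->  efficiency <= {max_link_efficiency(k, 100, 100):.3f}")
```

Even with resonance and good coils, the numbers show why the charging pad works at millimeters, a few coil-diameters is plausible, and a room-scale gap is brutally inefficient.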
Common uses of resonance-enhanced electrodynamic induction are charging the batteries of portable devices such as laptop computers and cell phones, medical implants and electric vehicles. A localized charging technique selects the appropriate transmitting coil in a multilayer winding array structure. Resonance is used in both the wireless charging pad (the transmitter circuit) and the receiver module (embedded in the load) to maximize energy transfer efficiency. This approach is suitable for universal wireless charging pads for portable electronics such as mobile phones. It has been adopted as part of the Qi wireless charging standard.
It is also used for powering devices having no batteries, such as RFID patches and contactless smartcards, and to couple electrical energy from the primary inductor to the helical resonator of Tesla coil wireless power transmitters.

 If this works over such a small distance, can we try for maybe a meter, then 10, 100, 1000 meters...?

Kindly give your opinion.......


Thursday, April 7, 2011

BELBIC - Brain Emotional Learning based Intelligent Controller

BELBIC is a controller that adopts the network model of Moren and Balkenius. Certain parts of the brain are held responsible for our emotions.
The limbic system comprises four main structures: the amygdala, the hippocampus, regions of the limbic cortex, and the septal area. These structures form connections between the limbic system and the hypothalamus, thalamus, and cerebral cortex. The hippocampus is important in memory and learning, while the limbic system itself is central in the control of emotional responses.

  • Amygdala - An almond-shaped mass of nuclei. Its primary roles are in the formation and storage of memories associated with emotional events.
  • Orbitofrontal Cortex - A prefrontal-cortex region in the frontal lobes involved in the cognitive processing of decision-making. The human OFC is among the least-understood regions of the human brain, but it has been proposed that the OFC is involved in sensory integration, in representing the affective value of reinforcers, and in decision-making and expectation.
  • Thalamus - Situated above the midbrain in the diencephalon, it is responsible for processing and relaying movement and sensory information.
 Emotional Learning

Traditionally, the study of learning in biological systems was conducted at the expense of overlooking its lesser-known counterparts: motivation and emotion. Motivation is the drive that causes any system to do anything – without it, there is no reason to act. Emotions indicate how successful a course of action has been and whether another set of actions should have been taken instead – they are constant feedback to the learning system. Learning, on the other hand, guarantees that the motivational and emotional subsystems are able to adapt to constantly changing conditions. Every creature has innate abilities that accommodate its survival in the world. It can identify food, shelter, partners, and danger. But these "simple mappings between stimuli and reactions will not be enough to keep the organisms from encountering problems."

Saturday, April 2, 2011

Time Dilation

 Time dilation is an observed difference in elapsed time between two observers who are moving relative to each other, or who are differently situated relative to nearby gravitational masses. Each observer will see the other observer's clock ticking at a slower rate than his or her own. This effect arises not from technical aspects of the clocks or from the time any signal needs to propagate, but from the nature of space-time described by the theory of relativity.
 There are two types:
    1. Relative velocity time dilation
When two observers are in relative uniform motion and uninfluenced by any gravitational mass, the point of view of each will be that the other's clock is ticking at a slower rate than the local clock. The faster the relative velocity, the greater the magnitude of time dilation. This case is sometimes called special relativistic time dilation. It is often interpreted as time "slowing down" for the other clock. But that is only true from the physical point of view of the local observer, and of others at relative rest. The point of view of the other observer will be that again the local clock is correct and it is the distant moving one that is slow. From a local perspective, time registered by clocks that are at rest with respect to the local frame of reference always appears to pass at the same rate.
    2. Gravitational time dilation
There is another case of time dilation, where both observers are differently situated in their distance from a significant gravitational mass, such as the Earth or the Sun. One may suppose for simplicity that the observers are at relative rest. In the simplified case, the general theory of relativity describes how, for both observers, the clock that is closer to the gravitational mass, i.e. deeper in its "gravity well", appears to go slower than the clock that is more distant from the mass. They agree at least that the clock nearer the mass is slower in rate and on the ratio of the difference.
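Both effects reduce to one-line formulas — the Lorentz factor for relative velocity and, for gravity, the Schwarzschild approximation — which we can sketch numerically:

```python
import math

C = 299_792_458.0          # speed of light, m/s
G = 6.674e-11              # gravitational constant, m^3 kg^-1 s^-2
M_EARTH, R_EARTH = 5.972e24, 6.371e6

def velocity_gamma(v):
    """Special-relativistic factor: a moving clock ticks slower by 1/gamma."""
    return 1.0 / math.sqrt(1.0 - (v / C) ** 2)

def gravitational_rate(r, mass=M_EARTH):
    """Clock rate at radius r from a mass, relative to a far-away clock
    (Schwarzschild approximation): deeper in the gravity well means slower."""
    return math.sqrt(1.0 - 2.0 * G * mass / (r * C * C))

print(velocity_gamma(0.1 * C))   # ~1.005: barely noticeable at a tenth of c
print(velocity_gamma(0.99 * C))  # ~7.089: dramatic dilation near c
print(gravitational_rate(R_EARTH) < gravitational_rate(R_EARTH + 20_000_000))
```

The last line confirms the gravitational case: a clock on the Earth's surface runs slower than one at GPS-like altitude, exactly the correction satellite navigation must apply.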

Wednesday, March 23, 2011

What is TIME?

 "Time… is what keeps everything from happening at once"- Ray Cummings
Time is a part of the measuring system used to sequence events, to compare the durations of events and the intervals between them, and to quantify rates of change such as the motions of objects. Time has been a major subject of religion, philosophy, and science, but defining it in a non-controversial manner applicable to all fields of study has consistently eluded the greatest scholars.
Time is part of the fundamental structure of the universe, a dimension in which events occur in sequence. Sir Isaac Newton subscribed to this realist view, and hence it is sometimes referred to as Newtonian time. Time travel, in this view, becomes a possibility as other "times" persist like frames of a film strip, spread out across the time line. The opposing view is that time does not refer to any kind of "container" that events and objects "move through", nor to any entity that "flows", but that it is instead part of a fundamental intellectual structure (together with space and number) within which humans sequence and compare events.


In 5th-century BC Greece, Antiphon the Sophist, in a fragment preserved from his chief work On Truth, held that: "Time is not a reality (hypostasis), but a concept (noēma) or a measure (metron)." Parmenides went further, maintaining that time, motion, and change were illusions, leading to the paradoxes of his follower Zeno. Time as illusion is also a common theme in Buddhist thought, and some modern philosophers have carried on with this theme. J. M. E. McTaggart's 1908 The Unreality of Time, for example, argues that time is unreal.
However, these arguments often center on what it means for something to be "real". Modern physicists generally consider time to be as "real" as space, though others such as Julian Barbour, in his book The End of Time, argue that the quantum equations of the universe take their true form when expressed in the timeless configuration space, a realm containing every possible "Now" or momentary configuration of the universe, which he terms 'Platonia'.

Classical mechanics

In classical mechanics, Newton's concept of "relative, apparent, and common time" can be used in the formulation of a prescription for the synchronization of clocks. Events seen by two different observers in motion relative to each other produce a mathematical concept of time that works well enough for describing the everyday phenomena of most people's experience.

Time has historically been closely related with space, the two together comprising spacetime in Einstein's special relativity and general relativity. According to these theories, the concept of time depends on the spatial reference frame of the observer, and the human perception as well as the measurement by instruments such as clocks are different for observers in relative motion. The past is the set of events that can send light signals to the observer; the future is the set of events to which the observer can send light signals.
In physics, spacetime (or space–time) is any mathematical model that combines space and time into a single continuum. Spacetime is usually interpreted with space as being three-dimensional and time playing the role of a fourth dimension that is of a different sort from the spatial dimensions. In this view, the universe has three dimensions of space and one dimension of time. By combining space and time into a single manifold, physicists have significantly simplified a large number of physical theories, as well as described in a more uniform way the workings of the universe at both the supergalactic and subatomic levels.
In classical mechanics, the use of Euclidean space instead of spacetime is appropriate, as time is treated as universal and constant, being independent of the state of motion of an observer. In relativistic contexts, however, time cannot be separated from the three dimensions of space, because the observed rate at which time passes for an object depends on the object's velocity relative to the observer and also on the strength of gravitational fields, which can slow the passage of time.
The concept of spacetime combines space and time into a single abstract "space", for which a unified coordinate system is chosen. Typically three spatial dimensions (length, width, height) and one temporal dimension (time) are required. Dimensions are independent components of a coordinate grid needed to locate a point in a certain defined "space". For example, on the globe, latitude and longitude are two independent coordinates which together uniquely determine a location. In spacetime, a coordinate grid that spans the 3+1 dimensions locates events (rather than just points in space), i.e. time is added as another dimension to the coordinate grid. This way the coordinates specify where and when events occur. However, the unified nature of spacetime and the freedom of coordinate choice it allows imply that to express the temporal coordinate in one coordinate system requires both temporal and spatial coordinates in another coordinate system. Unlike normal spatial coordinates, there are still restrictions on how measurements can be made spatially and temporally (see Spacetime intervals). These restrictions correspond roughly to a particular mathematical model which differs from Euclidean space in its manifest symmetry.
Until the beginning of the 20th century, time was believed to be independent of motion, progressing at a fixed rate in all reference frames; later experiments, however, revealed that time slows down at higher speeds of one reference frame relative to another (such slowing is called "time dilation" and is explained by the theory of special relativity). Many experiments have confirmed time dilation, such as atomic clocks aboard a Space Shuttle running slower than synchronized Earth-bound inertial clocks and the relativistic decay of muons from cosmic ray showers. The duration of time can therefore vary for different events and different reference frames. When dimensions are understood as mere components of the grid system, rather than physical attributes of space, it is easier to understand the alternate dimensional views as being simply the result of coordinate transformations.
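The muon case can be made concrete with a back-of-the-envelope sketch (the speed chosen here is an assumption for illustration; the mean lifetime of about 2.2 microseconds is the muon's rest-frame value). Without dilation, a cosmic-ray muon could travel only a few hundred meters in one mean lifetime; with dilation, an order of magnitude farther, which is why so many reach the ground:

```python
import math

C = 299_792_458.0        # speed of light, m/s
MUON_LIFETIME = 2.2e-6   # mean muon lifetime in its rest frame, seconds

def muon_range(v: float) -> tuple:
    """Distance a muon covers in one mean lifetime, without and with
    time dilation (lifetime stretched by the Lorentz factor in the lab frame)."""
    gamma = 1.0 / math.sqrt(1.0 - (v / C) ** 2)
    naive = v * MUON_LIFETIME           # ignoring relativity
    dilated = v * MUON_LIFETIME * gamma # lab-frame lifetime is gamma times longer
    return naive, dilated

naive, dilated = muon_range(0.995 * C)
print(round(naive), round(dilated))  # prints 656 6571
```

At 0.995c the Lorentz factor is about 10, turning a ~656 m range into roughly 6.6 km.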
The term spacetime has taken on a generalized meaning beyond treating spacetime events with the normal 3+1 dimensions. It is really the combination of space and time. Other proposed spacetime theories include additional dimensions, normally spatial, but there exist some speculative theories that include additional temporal dimensions and even some that include dimensions that are neither temporal nor spatial. How many dimensions are needed to describe the universe is still an open question. Speculative theories such as string theory predict 10 or 26 dimensions (with M-theory predicting 11 dimensions: 10 spatial and 1 temporal), but the existence of more than four dimensions would only appear to make a difference at the subatomic level.
The general definition of time in physics usually refers to spacetime, in which distances can be measured in light years.
We shall discuss time dilation next.

Thursday, March 17, 2011

The Risks of Nuclear Radiation (IEEE Spectrum)

On Monday, 14 March, radiation levels at the Fukushima nuclear plants reportedly rose to 400 millisieverts but then quickly declined. In less than 24 hours, the levels were below 1 mSv. What exactly does that mean? How much is safe, and for how long? What levels should the residents of the town of Okuma and the rest of the Futaba District be worried about? What about the radiation risks to those downwind? Do they extend all the way to Hawaii and the North American west coast?
Watch the video from IEEE Spectrum: Click Here to Watch

IPv4 Internet Addresses Run Out

As of February 3, 2011, the central pool of IPv4 addresses has been officially exhausted, after the Internet Assigned Numbers Authority (IANA) assigned the last blocks of address space.
APNIC, which provides internet addressing services to the Asia-Pacific region, received two /8s (about 33 million addresses) on Tuesday, which triggered the handover of the last five /8s to the Regional Internet Registries. ISPs and businesses are quickly using up all the IPv4 addresses APNIC is able to give out, so companies in the region are expected to be the first affected by IPv4 addresses running out.
The company that tracks the allocation of IPv4 addresses named 1 February as X-Day, or exhaustion day, which brought to mind pictures of Mad Max-style fights over the remaining IPv4 addresses.
Actually, this was predicted a long time ago, but it had remained a distant prospect until recently thanks to Network Address Translation (NAT) technology, which lets banks of business PCs sit behind small ranges of IP addresses. Much of the internet's address space is still sparingly used, with only about 14 per cent in active use, according to a study by the University of Southern California published on Tuesday.
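The block sizes quoted above are easy to check: an IPv4 /n prefix contains 2^(32 - n) addresses, so a /8 holds about 16.7 million. A quick sketch of the arithmetic (the helper function name is mine):

```python
def block_size(prefix_len: int) -> int:
    """Number of IPv4 addresses in a /prefix_len block: 2**(32 - prefix_len)."""
    return 2 ** (32 - prefix_len)

print(block_size(8))       # 16777216 addresses in a single /8
print(2 * block_size(8))   # 33554432: the "33 million" APNIC received
print(5 * block_size(8))   # 83886080 in the final five /8s given to the RIRs
```

Python's standard `ipaddress` module gives the same numbers via `ip_network("10.0.0.0/8").num_addresses`.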
John Heidemann, leader of a team at the USC Viterbi School of Engineering Information Sciences Institute that carried out the study, said that as soon as the rest of the addresses are gone, there will be pressure to improve their use, and eventually underused ranges will be traded.
Doing that won't be an easy task, however. In the meantime, many companies will get by through layering NAT devices, but in the long term the solution is to move to IPv6, the next-generation Internet Protocol.
Better usage, trading, and other strategies can help recover "twice or four times current usage" of IPv4, says Heidemann, who admitted that trading their way out of trouble will only help for a short period. He added that demand for addresses doubles every year, so trading will only help for about two years.
IPv6 will give quite a bit more address space, but even though it has been available for ten years, many networks still don't support it. If that doesn't change soon, the internet risks becoming fragmented: the 21st-century equivalent of a transport system split between canals and railways.

Sunday, March 13, 2011

IEEE India Industry Day 2011

A grand conference was held at the Chancery Pavilion on the 10th and 11th of March, where students got a chance to meet the industries of India. The event was a great source of exposure for the students. Eminent personalities like Dr. Howard Michel, Vice President, IEEE; Dr. Byeong Gi Lee, IEEE ComSoc President; and Prof. Jhing-Fa Wang, IEEE R10 Industry Liaison, and companies like Texas Instruments, IBM, TCS, Encore Technologies, Huawei, etc. participated in the conference.

The discussion was basically on "Technology Challenges and Opportunities for Enhancing Quality of Life in India"

"India is a great country with great problems; every problem is an opportunity to the engineers" - Dr. Faqir Chand Kohli

The Hotel Chancery Pavilion

At the conference

Delegates from MES College of Engineering

In the Smart Energy session, Prof. A K Sinha, IITK, emphasized the need for a smart power grid in the country and also explained the Phasor Measurement Unit used along with conventional SCADA, thus allowing easier and more accurate measurement of consumption.

In the consumer electronics session, Vinay Deshpande (Encore Software) emphasized the need for engineers to come forward to reduce the price of electronic goods without compromising on quality.
Prof. H S Jamgandi, CEDT, IISc, called for students to come forth, take risks, and start up innovations during their careers. If a person in the West could create a Facebook out of a failure in life, why not Indians?

In the banquet session Mr. Latif Ladid held a session on IPv6 protocol in Indian scenario and stated that the shift has become an urgent necessity.

In the Wireless and Broadband session, Mr. Asit Kadyan, DoT, suggested that engineers should look to satisfying people's basic amenities before introducing the latest technologies.
MESCE with Howard Michel

Tuesday, March 8, 2011

An initiative in BELBIC system.

The Brain Emotional Learning Based Intelligent Controller (BELBIC), a system proposed by the renowned Iranian scientist Caro Lucas, has been selected as the topic for the mini projects under the guidance of Prof. Edet Bijoy. The work is in its beginning stages. Updates will come up soon.

Our Work in Digital Audio Effects: Analysis of Digital Audio Effects Using Simulink and C6713 DSK

The work was presented at NCOIET'11 at Kongu Engineering College , Erode.
The entire work was done in the DSP lab of MESCE under the complete guidance of Prof. Edet Bijoy, Dept. of ECE.
Abstract: This paper shows the implementation of the echo and reverberation effects using TI's C6713 DSK. The effects are simulated using Simulink, and those simulation models are used to generate the DSP code for the real-time implementation. Echo and reverberation are two of the simplest applications of digital signal processing. Artificial reverberation is one of the most interesting DSP applications in music preparation. If the individual channels are simply added together, the resulting piece sounds frail and diluted, much as if the musicians were playing outdoors. This is because listeners are greatly influenced by the echo or reverberation content of the music, which is usually minimized in the sound studio. DSP allows artificial echoes and reverberation to be added during mix-down to simulate various ideal listening environments. Echoes with delays of a few hundred milliseconds give the impression of cathedral-like locations, while echoes with delays of 10-20 milliseconds give the perception of more modest-sized listening rooms.
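The core echo effect described in the abstract is just a delayed, attenuated copy of the signal mixed back in: y[n] = x[n] + g·x[n-d]. Below is a minimal NumPy sketch of that idea; the paper's actual implementation uses Simulink-generated code running on the C6713 DSK, so this standalone version (function name and defaults mine) is for illustration only:

```python
import numpy as np

def add_echo(x: np.ndarray, fs: int, delay_s: float = 0.3, gain: float = 0.5) -> np.ndarray:
    """Single-tap echo: y[n] = x[n] + gain * x[n - d], with d = delay_s * fs samples.
    Delays of a few hundred ms sound cathedral-like; 10-20 ms suggests a small room."""
    d = int(delay_s * fs)
    y = x.astype(float).copy()
    y[d:] += gain * x[:len(x) - d]   # mix in the delayed, attenuated copy
    return y

# A unit impulse comes back once, attenuated, after the delay (3 samples at fs=10):
fs = 10
x = np.zeros(8)
x[0] = 1.0
y = add_echo(x, fs)   # y[0] == 1.0, y[3] == 0.5, all other samples zero
```

Reverberation can then be approximated by feeding the output back through such a delay line repeatedly (a feedback comb filter), rather than applying a single tap.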

Keywords- echo, reverberation, simulink, C6713 DSK.

Monday, March 7, 2011

Cynthia Breazeal: The rise of personal robots | Video on


What is Free Software?


Welcome one and all

An initiative to share whatever little I've come across in the world of technology.

I would like to give a brief account of a few of the subjects that will be discussed here. I believe technology and knowledge are to be shared for the welfare of humanity, and that is how they advance. There will be three types of posts.

  1.  News from MESCE - Information regarding the technical activities in the college
  2.  My interests - Purely on signal processing and soft computing and, of course, on free-software activities, i.e. FREEDOM.
  3.  Informative - Based on articles in IEEE Spectrum and research news from MIT

    All your suggestions and comments are always welcome.