
Director's Message

It is my great pleasure to announce that the initiative that started two years ago will reach yet another milestone with the third issue of Anveshan ready for release. Anveshan has provided a means for sharing technical knowledge and know-how, and has done an admirable job of it. Technology today is increasingly becoming a multidisciplinary affair. Boundaries are fast disappearing as technologies merge into each other. In the United States, a number of doctors are taking up technology for postgraduate studies. India is fast catching on. I strongly encourage our students not only to master their own discipline but also to look beyond and investigate other areas where their learning may find application. Our beloved NIT Durgapur remains as young and vibrant as it has ever been. NIT Durgapur has taken initiatives of its own to better itself in various fields. Sizeable investments made to improve the infrastructure and the living conditions in the hostels have started to take shape and are set to change the face of the campus when completed. On the academic front, a significant number of students graduating this year from the institute have opted for higher studies, and this is very heartening, even though placements have picked up. Many have been admitted to reputed universities abroad. Students have also put up a stellar show in competitive examinations like GATE, CAT and GRE. I am confident that the institute will keep making progress toward further consolidation of its position as a premier technical institute, and I urge one and all to strive toward achieving this end.

What's Inside
COVER STORY
14 Cryptography Unlocked
What you see is bogus, but what it can reveal is precious!

Anveshan

13 Destination Detroit
Advanced is the mind that does not stop at excellence

UPDATE
2 4G is Coming - Are We Ready?
And you thought 3G was new!

INVENTIONS & DISCOVERIES


11 The Lost City Found
Have we fully explored the Earth?

5 Peak 65
The speed barrier must increase and so must the mileage

24 Weird In-vain-shuns
Inventions of Eccentric geniuses

20 WiMax
Wi-Fi on steroids

25 Give Felisa A Job (GFAJ-1)
A totally different origin of life is speculated

26 Quantum Computing
Science takes a quantum leap into the future

CHALLENGE/STUDY
4 A Matho-Technical Problem
It has remained unsolved and unchallenged for the past 4 years

OPINION
8 Online Karma
It's difficult to think of something which you can't do on a smartphone

9 Just Google It !
We want to make Google the 3rd half of your brain - Sergey Brin

6 The Completeness of Gödel's Incompleteness Theorem


Truth beyond all truths or truth beyond human reckoning?

23 Incredible India

From Team MNTC


From the earliest of days, when someone dared to assign a value to nothing, to just a few decades back, when someone coined an idea as 'ludicrous' as that of space and time warped together, thus revolutionising all previously held notions, science has always been about challenging previously unquestioned norms, about choosing the paths less travelled. This spirit of logic and reason, the same scientific temper, is what connects something as old as the Konark temple to something as futuristic as WiMax or 4G communication. We, as engineering students, often get so used to science that we forget to appreciate its charms. Anveshan is an attempt to help us reconnect with the fervour of scientific temper and be overwhelmed by its grandeur. In this edition of Anveshan, we venture across space and time; we hop between the domains of science and religion; we tread the fine line between the exactness of reason and the formlessness of imagination... The enthusiastic response from students, professors and alumni which followed the inaugural and subsequent editions of Anveshan has encouraged the Maths 'N' Tech Club to work harder on the present edition. We are deeply grateful to Professor T. K. Sinha for his patronage. We also owe a lot to our faculty editorial board, without whose help this magazine would never have been completed. We encourage readers to submit original research notes, opinions or other general ideas that are within the scope of the magazine for publication in subsequent editions. Any suggestions for the improvement of the magazine will be wholeheartedly welcomed at mathsntech.nitdgp@gmail.com. This world, after all our science and sciences, is still a miracle; wonderful, inscrutable, magical and more, to whosoever will think of it. So, let's think... Let's read!

Certain expressions tend to get magnified and others reduced

28 Mind-venture
Creative geniuses are methodically uneven

REVIEW
18 Dragon Age II
A prologue to the finale of the trilogy, or more?

19 UDRS
Culprit in many on-field debacles?

22 2G Scam
I'm sorry, coalition dharma stops me

TECH-MUSE-MENT
10 The Science Behind Music
You will never hear a robot perform in a concert

12 Technology in Cinema
No more black & white, that's only for newspapers

Faculty Editorial Board


Dr. Kajla Basu Dr. Seema Sarkar (Mondal) Dr. P. P. Sengupta Dr. Aniruddha Chandra

17 Bad Science in Sci-Fi Films


Don't break those laws of science which are known to exist

21 Stream Live
FMS-A rich media delivery platform

NIT Durgapur
Are We Ready then? Let us glance over some interesting facts. By the end of 2011, 89% of Norway's population will have access to 4G. Finland is the first country in the world to make high-speed Internet access a legal right, obliging operators to provide connections of at least 1 Mbps to every citizen. How do people boast in South Korea? 'You don't realize how much the Web has to offer until you get into Korea.' Sure they can, as two-thirds of households have at least an 8 Mbps connection. Not only that, Korea's network is the fastest and cheapest in the world. According to the World Bank, a 10% increase in broadband penetration increases the GDP of a developing country by 1.38%. We can go on appending to the list forever, to show how badly we need to be 'connected', and how some countries have already realized that. In some advanced countries like the US, however, there has already been a huge investment (trillions of US$) in 3G, and it would take some time to change over. This also explains why the less developed states in the US expect an early 4G rollout. There is yet another aspect: newly developed countries favour new media, but Westerners spend at least three times as much on traditional media (newspapers, magazines, books, movies, television, video games, and CDs) as on Internet access. In non-Western countries, a ratio of less than 2:1 is not uncommon. In Vietnam and Pakistan, consumer spending on Internet access actually exceeds purchases of traditional media. All that said, the reality still is that developing Asian countries are not much willing to spend to be entertained and informed. In 2008, Norwegians spent US $1522 per capita on media and Internet access. On the other hand, the average Chinese person spent about as much that year ($38) as a Norwegian did in nine days. (To be sure, China's 2008 per capita GDP was just about one-tenth of Norway's; the gap in disposable-income spending is presumably narrower.)

Now, the most important question: what are the plans for India? Are we doing anything to bring the new technology in, or are we just busy with the 2G scams, or happy with the newly launched mobile number portability? Till now, the main emphasis has been on cable/DSL technologies. Since spectrum for 3G and broadband wireless access (BWA) services has now been auctioned, the mobile broadband sector will also gain momentum. A national broadband plan is now in action and is expected to be executed by the year 2013. The Telecom Regulatory Authority of India (TRAI) invited open proposals for 4G in February 2010, more than a year back. According to press reports, TRAI is going to come up with 4G recommendations by the middle of this year. On the other hand, the Department of Telecommunications (DoT) is working on a new frequency allocation plan (NFAP), on the basis of which a decision will be taken on the type of wireless technology to be used in a particular spectrum frequency band. The new NFAP is expected to be completed by April this year. After finalising the NFAP, DoT will submit its report to the International Telecommunication Union (ITU), an international body which sets the standards for technology to be deployed in a particular spectrum frequency band. Impressed enough? Well, I'm keeping my fingers crossed.
Effect on Body and Mind
Living a 4G life means more time spent with our favourite gadgets. A natural question, therefore, is whether our body and mind are ready for this revolution. More specifically, are 4G devices safe? Over the last two decades, epidemiologists (who study patterns of diseases and health risks in population groups) have tried hard to settle the question of whether or not cell phones cause health problems, without coming up with a definitive answer.
On the other hand, the cell phone industry has maintained that it's unlikely that the phones are a health risk, because the only effect on brain tissue is local heating, and cell phone standards make sure that heating stays below any danger level. Meanwhile, some recent experiments reveal that holding a cell phone to the ear not only increases the temperature but also increases the metabolic activity of nearby brain tissue. What this means for long-term health is unclear, but the news will certainly encourage those panic-stricken parents who repeatedly tell their children to limit their talk time. Apart from the health hazards, there are already many social concerns regarding cell phone usage, from annoying others to being run over by trains. With many added services in 4G, the addiction will reach new heights. Last month, a Chinese man died after a three-day online gaming session in which he did not sleep and barely ate. Prior to that, in 2005, a 28-year-old man died in South Korea after playing online games for 50 hours without a break. Would you dismiss these as 'discrete events', or are all of us heading towards a crazy society? Spare me please, I'm just an engineer.

Dr. Aniruddha Chandra


Assistant Professor, ECE Department, NIT Durgapur

The wireless industry has been experiencing an unprecedented and sustained growth rate since its inception. As of now, there are more than 5 billion cell phones (less than 1 billion ten years ago) across the world, with China being the leader (853 million), while India (771 million) is catching up fast (growing at 19 million/month), making the US a distant third. One of the major reasons behind this boost is the Internet experience; users are shifting their landline Internet habits into the mobile world. People on the go want access to the same content they access on their tethered computers, while people in many developing countries are accessing the Internet for the first time via their mobile devices.
What is 4G?
To the newbies, the letter 'G' refers to generation and signifies a major change in the fundamental nature of the wireless service. For example, 2G represented the switch from analog phones to digital ones. 3G brought multimedia support (MMS, online browsing). The 4G technology will provide voice, data, and streamed multimedia to users on an anytime, anywhere basis, and at speeds about ten times faster than the 3G services running currently. As per the 4G standards, the cellular system must have a target peak data rate of 100 Mbps for outdoor environments (high mobility) and up to 1 Gbps for indoor access (low mobility).
Opportunities
The 4G technology opens many new opportunities for wireless users. In short, with 4G we can access the Internet as we do today, without having to be connected to a cable or a Wi-Fi zone with limited range. 4G is so fast that an attachment of 500 MB can be downloaded in a few minutes, and sending image files doesn't involve any glitch. One may watch YouTube at the highest quality, stream live HD movies, or play online games without any delay.
It is envisioned that 4G will drive applications beyond basic consumer broadband access, including e-readers, machine-to-machine communications, telematics, and mobile IPTV. What's more, the new super-network will help to change our mobile and Internet habits. The existing 4G users in some developed countries have already experienced dramatic changes in their media consumption habits. They watch more online TV and listen more to web radio stations. There would be a change in the consumption of Internet services too; for example, a recent survey shows that more than 46 percent of people now surf the web more frequently when away from home and 12 percent play more online games.
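The "500 MB in a few minutes" claim above can be sanity-checked with a back-of-the-envelope calculation at the quoted 4G outdoor target rate. This is an idealized sketch: real throughput depends on signal conditions and network load, and the 10 Mbps 3G rate used for comparison is an assumed typical value, not a figure from the article.

```python
def transfer_time_seconds(size_mb: float, rate_mbps: float) -> float:
    """Ideal transfer time: file size in megabytes, link rate in megabits per second.
    Multiply by 8 to convert megabytes to megabits."""
    return size_mb * 8 / rate_mbps

# A 500 MB attachment at the 4G outdoor target of 100 Mbps:
print(transfer_time_seconds(500, 100))  # 40.0 seconds

# The same file at an assumed typical 3G rate of 10 Mbps takes ten times longer:
print(transfer_time_seconds(500, 10))   # 400.0 seconds
```

At the 1 Gbps indoor target the same attachment would, in the ideal case, arrive in about four seconds.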

4G ensures that the service is constantly provided to the user, irrespective of the type of network (cellular, Wi-Fi etc.) available in his/her vicinity. This is possible as 4G is IP based (every user has an IP address) and it integrates the infrastructure of all current networks. Another key feature of 4G networks is a high level of user-level customization, i.e. each user can choose the preferred level of quality of service, radio environment and so on. Accessing 4G networks will be possible using virtually any wireless device, such as PDAs, cell phones, and laptops.
World's First 4G
On December 14, 2009, TeliaSonera became the world's first operator to commercially launch 4G networks, in Stockholm, Sweden and Oslo, Norway. This was followed by deployments in Finland (Helsinki and Turku), Denmark (Copenhagen, Aarhus, Odense, and Aalborg), and Estonia (Tallinn, Tartu, and Kohila). The company also carried out market pilots in Latvia and Lithuania. In the notebook market, we already have the Samsung X430 on sale in Sweden, available for roughly 50K. The X430 is the world's first laptop with a 4G modem, allowing users to surf the Internet at up to 100 Mbps. The modem is also backward compatible, which means that in case of unavailability of 4G, a user can still connect to 3G and GSM/EDGE networks.
4G Standards
There are two viable standards, worldwide interoperability for microwave access (WiMax), also known as IEEE 802.16e, and long term evolution (LTE), competing fiercely to be adopted in the initial rollout of 4G networks. WiMax had a head start and will remain on top for 2011 with 14.9 million global subscribers, up from 6.8 million last year. But LTE deployments will continue to jump, reaching 10.4 million users this year, a notable increase from just 0.7 million in 2010 and virtually zero in 2009.
About 10 service providers around the world have already kicked off their LTE networks so far, with more than 30 new operators expected to launch theirs this year. Although the WiMax group has a new standard, IEEE 802.16m, ready, by 2014 the number of LTE subscribers will hit 303.1 million versus only 33.4 million for WiMax. Thus the battle between LTE and WiMax for 4G dominance may soon be over, with LTE declared the champ.

SoundHound

Ever caught yourself wondering which tune is playing in a shop? SoundHound, a mobile app, will listen to any tune, human or speaker-created.

No 2 CO2
With an installed capacity of 8,696 MW (as of July 2008), India has the fourth largest installed wind power capacity in the world.



Madhyama Thakur
3rd Year, ME, NIT Durgapur
Here is a mathematical and a technical problem rolled into one. I played with this 'live' problem for quite some time and challenged many to solve it, verbally of course. Fair to say, it has remained unsolved and unchallenged for the past 4 years. This is the first time I am writing it out.

Would you love to challenge your brains on this nagging problem? Here it goes: A certain factory producing goods made out of rubber decides to raise its productivity. The rubber products are made in moulds that are placed in a hydraulic press and kept there under specified conditions of temperature, pressure and time for proper curing and formation to take place. The hydraulically operated presses have one mould cavity, to place one mould at a time. The owner comes up with a simple plan to improve productivity. He decides to increase the number of mould cavities in the hydraulic press, allowing him to process multiple products at the same temperature, pressure and time. So, he goes for 'two mould cavity' presses and quickly replaces the old presses with these two mould cavity ones. He then calculates the possible output. Let us say that he would get 100 products by changing from a single mould cavity press to a double mould cavity press. But he is dismayed when he finds out the actual output. It is less than 50% (acceptable products) of what he expected to get. How is that? He thinks that something must be wrong. So he decides to increase the number of presses accommodating double mould cavities. What is the result? Again less than 50% (acceptable products) of what he thought he must get. Infuriated, he goes for '3 mould cavity' presses. He then increases the number of presses to 10. He also increases the number of operators and workmen to run the operation, and backs it up by increasing the number of supervisors to look after the operation. He also increases the number of overhead cranes from one to two. How did that turn out? Again less than 50% of what he calculated would be the output. Baffled, he then thinks to improve the system and institute a culture of quality. He also thinks of training the workers and the supervisors to do their jobs better and pay close attention to the performance of the machines and moulds and the way rubber is injected into the mould cavities, trying to lessen the time and the apparent wastages in the system. What happens? The output refuses to move even a percentage point above 50%. Can you crack this stubborn, nagging and chronic problem for the factory owner? He would be indebted. What would be the right thing for the owner to do? Send me your answers at dde@reliabilityconsultant.net or choose to strike up a conversation on Twitter @Sparkinginsight

In this age of rapidly changing technologies and fast-paced life, all the barriers are to be crossed and all limits explored further. The task for engineers is indeed becoming tougher and tougher, with more and more demands to be met, along with rapidly depleting fuel resources and stricter environment and emission norms. The speed barrier must be pushed up and so must the mileage. The torque and power must not be compromised upon, nor should the CO2 emissions cross a certain barrier defined by the latest Euro IV (or their Indian equivalent, Bharat IV) norms. The quest for a better, more efficient engine becomes even more relevant in today's world because of the threat posed to the earth by ever-increasing global warming. A research study published in the October 2007 issue of the ASME magazine illuminates the efforts of Robert Socolow, a Mechanical Engineering Professor from Princeton University, and Stephen Pacala, the co-director of Princeton's Carbon Mitigation Initiative. Their wedge analysis listed 15 of the many possible ways to cut 25 billion tons of carbon. To defeat global warming, they said, we need to enact seven of them. One amongst these seven is to increase the mileage of present day automobiles from the average present day mark of around 35 mpg to 65 mpg (miles per gallon). Concentrated efforts toward this goal would indeed serve a great purpose, and presently many researchers are working on increasing the fuel efficiency of vehicles. The conclusions from a research project I did at the Indian Institute of Technology, Delhi, in May-June 2010, are presented here. The figure shows the relationship between mileage and emissions. The data is restricted to K series SI engines, but it holds true for other engines as well.

As can be clearly seen, the higher the mileage, the lower the emissions. The conclusions from the figure once again emphasise the dependence of emissions on mileage. Keeping in mind the Socolow-Pacala study and the conservation of the earth, it is imperative to find ways of increasing the mileage of cars up to 65 mpg. Against this background, it is indeed inspiring to know that the Shell Eco-marathon winner 'PAC-Car II' has a staggering mileage of 12,666 mpg! The race began in 1939 with a bet between employees of Shell Oil's research laboratory in Wood River, Illinois. There are now three races per year, in France, the UK, and the US. Each year college and high school students compete to see who can make a gallon of gas go the farthest. The rules state that the entries must maintain at least a 15 mile per hour average speed for 10 miles. PAC-Car II is a joint research project between ETH Zurich and partners from academia and the automotive industry. Their goal was to build a vehicle powered by a hydrogen fuel cell system that uses as little fuel as possible. PAC-Car II set a new world record in fuel-efficient driving during the Shell Eco-marathon in Ladoux (France) on June 26, 2005. Details: Weighing just about 66 lbs, PAC-Car II is equipped with 3 wheels; the single rear wheel is powered and steered, and the front wheels have a camber angle of -8 degrees. It does not have a chassis, as its body is self-supporting. FEM structure computations have allowed the designers to minimize the amount of material and to choose the best direction for the carbon fibre layers which form the shell of the car. The transmission uses a gear pair. Its speed ratio depends on the track and on the target average speed. It can be opened to reduce the drag torque, for instance when the vehicle is gliding downhill. The power supply comes from a hydrogen fuel cell that provides electricity to two electric motors that drive the rear wheel. Prof. Lino Guzzella, the Project Director for PAC-Car II, is convinced that some of the ideas generated in this project will eventually show up on the road and contribute to saving fuel and reducing the harmful pollutants of passenger cars. Let us hope that his optimism and belief do indeed come true. So all you budding automobile researchers, your Peak 65 is waiting for you to conquer it and save the day! Start exploring.
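To see what the 35 to 65 mpg jump discussed above means in fuel terms, here is a quick illustrative calculation. The 12,000 miles per year figure is an assumed typical annual driving distance, not a number from the article.

```python
def gallons_per_year(miles: float, mpg: float) -> float:
    """Annual fuel consumption for a given yearly distance and mileage."""
    return miles / mpg

annual_miles = 12_000  # assumed typical annual driving distance

# Fuel saved per car per year by moving from 35 mpg to 65 mpg:
saved = gallons_per_year(annual_miles, 35) - gallons_per_year(annual_miles, 65)
print(round(saved, 1))  # about 158.2 gallons saved per car per year
```

Scaled across millions of vehicles, savings of this order are what make the mileage wedge one of the seven measures Socolow and Pacala call for.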

WhatsApp Messenger
SMS is so last century. Get enough of your friends on WhatsApp (it's available for iPhone, BlackBerry and Android, with a beta out for Symbian devices) and never pay for an SMS again.

No 2 CO2
"Going at full throttle is economically and ecologically questionable." By halving its top cruising speed, the shipping giant Maersk has cut both its fuel consumption and greenhouse gas emissions by 30%.


Rishipratim Mazumdar
3rd Year, ECE, NIT Durgapur

All our lives, we are taught to pursue the root cause behind every occurrence. Be it any mathematical theorem, any physical phenomenon or any scientific experiment, the one thing we are taught to demand before we finally repose our unconditional trust in it is the proof. A scientific and logically consistent explanation, expressed in terms of well-known and widely accepted scientific symbols, which validates the phenomenon, occurrence or experimental observation with the help of some well-known axioms, is accepted as a proof. For years, mathematicians fretted about the lack of an all-encompassing theory or system which provides the basis of all mathematical theorems. In the 1930s, there was a surge of a new idealism among mathematicians and scientists, that of positivism, which held that only if a fact is proven scientifically and mathematically can it be considered true. It was in the middle of this frantic search for the ultimate theory or system that the name of Gödel came to the forefront of the world of mathematics. The year was 1931, when a young mathematician named Kurt Gödel put forward an idea so revolutionary and mind-boggling that the whole world of mathematics was left wondering about the possible implications it could have. What Gödel formulated was not only applicable to mathematics; it also affected other branches of study like logic and science.

Theorem 1: Any effectively generated theory capable of expressing elementary arithmetic cannot be both consistent and complete. In particular, for any consistent, effectively generated formal theory that proves certain basic arithmetic truths, there is an arithmetical statement that is true, but not provable in the theory.

Theorem 2: For any formal effectively generated theory T including basic arithmetical truths and also certain truths about formal provability, T includes a statement of its own consistency if and only if T is inconsistent.

Basically, the actual implication of these statements lies in the fact that the logic behind these theorems can be extrapolated to the conclusion that 'there are always more truths than can be proven'. For example, any system of numbers will always rest on at least a few assumptions that cannot be proven within that system of numbers. Thus, that system of numbers can never be complete (based on totally provable facts) and consistent at the same time. Another very basic example is the dependence of the fundamental rules of geometry on the five Euclidean postulates. These postulates are the platforms on which the theorems and axioms stand; however, a correct mathematical proof of these postulates has long eluded mathematicians. All of us are sure that a straight line can be extended infinitely on either side; however, there seems to be no way of proving how and why. The only way out of this was to argue logically that these postulates were reasonable, and hence could be assumed to be true. Once assumed to be true, they form the basis of geometry. Stripped of complicated mathematical terms, Gödel's theory stands somewhat like this: anything that you can draw a circle around cannot explain itself without referring to something which lies outside the circle; something that can only be assumed to be true inside the circle but cannot be proven. Obviously, this implies that it is never possible for a system capable of performing mathematical calculations to be sure that each and every one of its statements is true. In short, it is impossible to develop a formal mathematical system which could define the behaviour of all number systems, and it is never possible to prove everything.

One of the principal fallouts of these theorems was the realization that artificial intelligence, however advanced it might be, will always suffer from a few shortcomings. By the Church-Turing thesis, any physical system can express elementary arithmetic operations, yet the arithmetic of a Turing machine (computer) is not provable within the system itself; hence the system is incomplete. As all physical systems subjected to measurement are capable of expressing basic arithmetic, and the Universe is also capable of expressing elementary forms of arithmetic, it can be logically concluded that, like every other physical system and/or Turing machine which can express elementary arithmetic, the Universe is also incomplete. Gödel demonstrated his proof with the famous 'Liar's Paradox', where the logical consistency of the statement 'I am lying' is checked. Since it is a self-contradictory statement, it can never be proved whether the statement is true or false. Gödel extended this statement to a mathematical form, and hence proved that no statement can verify its own consistency. An excerpt from Rucker's Infinity and the Mind, which attempts to explain Gödel's theorem and how it limits the scope of artificial intelligence, is given below:

The proof of Gödel's Incompleteness Theorem is so simple, and so sneaky, that it is almost embarrassing to relate. His basic procedure is as follows: Someone introduces Gödel to a UTM, a machine that is supposed to be a Universal Truth Machine, capable of correctly answering any question at all.
1. Gödel asks for the program and the circuit design of the UTM. The program may be complicated, but it can only be finitely long. Call the program P(UTM) for Program of the Universal Truth Machine.
2. Smiling a little, Gödel writes out the following sentence: "The machine constructed on the basis of the program P(UTM) will never say that this sentence is true." Call this sentence G for Gödel. Note that G is equivalent to: "UTM will never say G is true."
3. Now Gödel laughs his high laugh and asks UTM whether G is true or not.
4. If UTM says G is true, then "UTM will never say G is true" is false. If "UTM will never say G is true" is false, then G is false (since G = "UTM will never say G is true"). So if UTM says G is true, then G is in fact false, and UTM has made a false statement. So UTM will never say that G is true, since UTM makes only true statements.
5. We have established that UTM will never say G is true. So "UTM will never say G is true" is in fact a true statement. So G is true (since G = "UTM will never say G is true").
6. "I know a truth that UTM can never utter," Gödel says. "I know that G is true. UTM is not truly universal."

Gödel's genius lay in the fact that for every P(UTM) he was able to formulate a complicated polynomial equation that had a solution if and only if G was true. And G, instead of being a vague, trivial statement, was a specific mathematical statement designed uniquely for that P(UTM). So the UTM is unable to verify the truth in this case, and hence does not live up to its reputation as a Universal Truth Machine. However, as with every other controversial theory in mathematics and science, Gödel's theorems of incompleteness have also been misused on a number of occasions. Pseudo-intellectuals with religious inclinations have often claimed that Gödel's theories imply some Truth beyond all truths or Truth beyond human reckoning. They believe that the incompleteness of mathematics and the inability of any mathematical system to be consistent and complete at the same time is an obvious indication of the presence of an incomprehensible but conscious power that has designed everything. Such conjectures, though supported vociferously by certain factions, are best left to one's personal interpretation.
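In the standard textbook notation, the sentence G of the article is usually written with a provability predicate. The following compact rendering is our own formalization, not the article's, and glosses over the details of Gödel numbering:

```latex
% T: a consistent, effectively generated theory interpreting basic arithmetic.
% \mathrm{Prov}_T: T's provability predicate; \ulcorner G \urcorner: the Gödel number of G.
G \;\leftrightarrow\; \neg\,\mathrm{Prov}_T(\ulcorner G \urcorner)
\qquad \text{(the G\"odel sentence: ``$G$ is not provable in $T$'')}

% First theorem: T can prove neither G nor its negation.
T \nvdash G \qquad\text{and}\qquad T \nvdash \neg G

% Second theorem: consistency, expressed inside T, is itself unprovable.
\mathrm{Con}(T) \;:=\; \neg\,\mathrm{Prov}_T(\ulcorner 0 = 1 \urcorner),
\qquad T \nvdash \mathrm{Con}(T)
```

Here the UTM plays the role of the theory T, and "UTM will never say G is true" is the informal reading of the first line.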

Evernote
'Remember Everything' is the mantra, and the elephantine logo sums it up quite nicely. Take a picture, write a note or record a voice note, and Evernote will make them all searchable on your phone/PC.

No 2 CO2
India's wind power potential is estimated at 45,000 megawatts, about a third of its total energy consumption.



Just Google it!

Amit Kr. Pandey
3rd Year, ME, NIT Durgapur

Some technologies change aspects of our lives - how we work, travel or play. Few alter our whole way of being. It is now a decade or two into the Internet revolution and we are still struggling to grasp its vastness. One vital new development is searching the web for information.

V. Madhumati

1st Year, CSE, NIT Durgapur

A few days back, many wondered how many from INDIA would respond to Wikipedia cofounder Jimmy Wales' appeal for donations at the top of wiki's page. But in 2010, India ranked 6th in the list of countries which donated.

smart phone costs continue to reduce by the day. People do almost everything using a smart phone these days right from emailing, social networking, reading the news, and even shopping is included in this list. It's difficult to think of something which we can't do on a smart phone! Speaking of shopping can be dealt with a very interesting concept. Imagine that every time you shop online, there is a retailer somewhere who donates a percentage of your purchase to a charitable organization of your choice. How would you like that? All you have to do is shop, and by doing this you can donate and support your favourite cause. An online fundraising tool for charities is based on the same principle. It allows charities to capitalize on the online shopping that their supporters already do. Any purchase made using this application will result in a percentage being given back to the charity. Online shopping in India is growing very fast there was a 33% growth in the year 2010, with the online shopping base increasing by 2.5 million. This really simple concept can help nonprofits in increasing their revenue streams, and giving supporters an easy way to donate. It can be very powerful! The Internet seems to be taking over the world right, but this still does not mean that print media is dying out. The Internet provides print media with many extended ways in which to enhance it's reach. Take this for an example: say you have a contest that you are running online. You additionally run an offline ad campaign for this contest using brochures. What if you could give your supporter the power to view the brochure, and immediately take action using their mobile phones? This can be done! QR or Quick Response Codes is the solution. They are small images which you can generate online for free, that link to your website or wherever you want them to link to. 
You can embed these images in your offline marketing material to give users an opportunity to access information immediately using their phones. All a user has to do is download a free QR reader, scan the code, and boom, they are redirected to your site. Really, it's all about keeping abreast of technology, if not ahead of it, and devising simple ideas that benefit nonprofits. With more than 65% of India's population below the age of 35, mobile phone and social media usage will increase rapidly. Are you aware of any charity-based mobile or social media campaigns currently ongoing?

On the same lines, can you imagine what a simple mobile application can do for nonprofits? Provide 3,00,000 meals, you think? Plant 1,00,000 trees, maybe? Provide 45,00,000 litres of clean water to developing countries? Yes! Causeworld can do all that and even more. It's an application available on various smart phone platforms that allows a user to 'do good deeds' just by walking into a store. All you have to do is check in to the store you're at, and every time you check in you receive some 'karma' points. You can then donate these points to any of the charities on the Causeworld list, and once you do, a sponsor company donates actual money to the chosen charity. Location-based services are becoming increasingly popular, and realizing the power of this popularity, companies have started running charity campaigns that use check-ins to donate money, food, time and so on to charities. They also rely heavily on the viral sharing capabilities of social media to raise awareness for causes: a user donates 'karma' points, shares this with friends on Facebook, Twitter and the like, who then join the trend, and so the cycle continues. Simple innovation is the key. With the pace at which technology is moving, you will be astonished to know that by 2013 mobile phones may overtake PCs as the most common web access device. And according to one analysis, the mapping and navigation services market in India could have a base of 30 million users by 2013. Currently, the number of smart phone users in India is between 8 and 9 million. That might seem small, but it is still considerable, and it will only continue to grow.

I do not know whether it's making me dumb or smart, but it definitely gives me more information: information about whatever I type in the search bar. The mundane days of going through encyclopaedias and microfiche and searching magazines for photos are gone. Today, students can get information more quickly and easily, which is basically leading to plagiarism. Our school articles, tuition work, reports and the like are all plagiarized: one original author uses his brain, and the rest simply take advantage of how easily content can be found on the web.

Is Google numbing our brains or making us smarter? It is perhaps the most controversial question of this era, and it's not only about Google, it's about any search engine! Search engines have provided the certainty that anything in the world can be found with just a few clicks. But the question here is not what Google has done to the world, it is what it has done to us, our brains to be specific.

Google's mission statement from the outset was "to organize the world's information and make it universally accessible and useful", and the company's unofficial slogan, coined by Google engineer Paul Buchheit, is "Don't be evil".

Skeptics observe that the brain developed over hundreds of thousands of years; it is unlikely to be reconfigured in what, in evolutionary terms, is a split second. But the science of how we turn data into consciousness and memory is too little understood to rule out the idea that the web is having an impact.

Scientists have shown experimentally that if a muscle is not used for long, it becomes weak and sometimes non-functional, and the human brain is like a muscle: the more you use it, the stronger it becomes. Search engines are basically depriving us of regular brainstorming. The basic solution to all these problems lies in not letting our creativity die out. No wonder we need to rely on Google some time or other, but let's just say we should rely on it only to get facts.

A recent article by Nicholas Carr, published in the summer 2008
issue is titled "Is Google Making Us Stupid?". Carr's argument is that the ease of checking information online and the distractions of Web browsing can dumb us down. People simply lose their creativity: they depend entirely on Google for everything, from something as small as the meaning of a word to something as big as writing a presentation. The first idea that pops into our minds is: let's just Google it. In response, a survey of 895 people was conducted, including 371 considered "experts". About three-fourths of the respondents believe the Internet will make us smarter in the next 10 years. The Internet is fantastic for information, and with information cascading around us so fast, individuals have to harness the technology available to keep up. Google may make us lazier, but it helps us learn faster. Like it or hate it, in the words of Mr Phil Bridgmon, "We all live in a Wikipedia/Google world."

No 2 CO2 A 1000 km domestic flight on any Indian airline creates a footprint of 110 kg of CO2 equivalent; the same journey by rail would create a footprint of just 14 kg of CO2.

So which is it?

DropBox Possibly the easiest way to store, sync and share files online, DropBox lets you access your free 2GB account on the go, and even download files for offline viewing, say when you're on a flight.

NIT Durgapur

Anveshan

Karn Kaul
4th Year, IT, NIT Durgapur

There are roughly three categories of people with respect to music:

1) Those for whom music is a pastime, running in the background, at most a concert or a discotheque once in a while. These people usually like music which can either be danced to or which has simple and meaningful lyrics (or both).

2) Those for whom music is an integral part of their lives: they pick their songs, build playlists, have favourite artists, maybe even favourite instrumentalists or lyricists.

3) Those who live in a fantasy world where music is more important than water: they carry earphones all the time so that 'unwanted' music does not enter their ears and stick in their head; they always have a song running in their head, even while taking an exam; they have strong opinions about the artists they listen to (or don't listen to); and they regularly get trapped in musical wiki-loops.

Statutory Warning: Category (1) may experience dizziness and lethargy on reading ahead.

According to Wikipedia, science is an enterprise that builds and organizes knowledge in the form of testable explanations and predictions about the world. An older meaning, still in use today, is that of Aristotle, for whom scientific knowledge was a body of reliable knowledge that can be logically and rationally explained. Art, on the other hand, is the product or process of deliberately arranging items (often with symbolic significance) in a way that influences and affects one or more of the senses, emotions and intellect.

In a nutshell, if something is consistent, predictable, accurate and definable, it is related to science; if it is subjective and not easily definable, it is related to art. As with almost everything in this world, there is very little that can be classified as purely scientific or purely artistic. Music, similarly, involves quite a bit of both, though that may seem counter-intuitive. Any form of music has two fundamental parts which come under the science division: rhythm, known as taal in Indian music, and melody, known as sur. The theory of melody is massive and endless, like any subject in science, because there is no limit to what can sound good; that is a typical example of art. The theory of rhythm, on the other hand, is much more scientific, since it is only associated with beats.

Rhythm: The reason why you can tap your foot continuously along with a song (unless it's classical or progressive) is that it has a constant rhythm. Two basic components are required to fully specify the time signature (rhythm) of a song, which is expressed as x/y (x, y ∈ N): y determines the note value (the unit of each beat), while x determines the number of beats in a bar. A bar comprises x notes, after which it repeats throughout the piece, or until the time signature changes. For example, let's take Jingle Bells: try playing the song and tapping your foot to it, then look at the table and try to match it:

(Note that the first and eighth notes are the same: they are said to be an octave apart.) The other five notes can be thought of as some basic note plus or minus half a step: C + 0.5 = C# (C sharp) = Db (D flat). Two of the basic notes do not have sharps (E + 0.5 = F rather than E#, and the same is true of B), hence there are only five additional notes and not seven. A combination of three or more notes (played in a particular fashion) is known as a chord. Chords fall into three families: major, minor, and sustained/diminished. In general, a major chord sounds happy, a minor one sounds sad, and a diminished one sounds outright scary. The notes that can be played while a chord plays in the background are those which belong to the scale of that chord. In fact, a general scale contains seven notes, from which you can make six basic (major/minor) chords, all of which fall under that key. For example, the key of C major contains the notes C, D, E, F, G, A, B. The basic chords which can be constructed using these notes are C major (obviously), D minor, E minor, F major, G major and A minor; hence all these chords are part of the key of C major. The reason why a form of art is tougher than any science is that first you have to spend a lot of time and mental energy learning the technicalities, the science part of it, and once you know them well enough, you're still nowhere, because now comes the tougher part: creativity. Learning the science is like learning a language; knowing it doesn't mean you can write a book, or for that matter, even a note on Facebook. It is also why you will never hear a robot or a piece of software playing a song in a concert: it would have no life, no feeling, no mood. Every robot would play the piece in exactly the same way, and that is a complete loss of what music is about: individual expression and subjectivity.
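The scale-and-chords relationship described above can be sketched in a few lines of Python. This is an illustrative sketch with names of our own choosing: it builds a major scale by walking the whole/half-step pattern around the twelve chromatic notes, then labels each triad built from alternate scale notes as major, minor or diminished.

```python
CHROMATIC = ['C', 'C#', 'D', 'D#', 'E', 'F', 'F#', 'G', 'G#', 'A', 'A#', 'B']
MAJOR_STEPS = [2, 2, 1, 2, 2, 2, 1]  # whole/half-step pattern of a major scale

def major_scale(root):
    # walk the chromatic circle using the major-scale step pattern
    i = CHROMATIC.index(root)
    notes = []
    for step in MAJOR_STEPS:
        notes.append(CHROMATIC[i % 12])
        i += step
    return notes  # 7 notes; the 8th would be the root again, an octave up

def triad_quality(scale, degree):
    # a triad stacks every other scale note: root, third, fifth
    root, third, fifth = (scale[(degree + k) % 7] for k in (0, 2, 4))
    third_gap = (CHROMATIC.index(third) - CHROMATIC.index(root)) % 12
    fifth_gap = (CHROMATIC.index(fifth) - CHROMATIC.index(root)) % 12
    if fifth_gap == 6:              # flattened fifth
        return 'diminished'
    return 'major' if third_gap == 4 else 'minor'
```

For the key of C major this reproduces the six basic chords named above (C, Dm, Em, F, G, Am), plus the seventh triad, B diminished, which is why only six "basic" major/minor chords come out of a seven-note scale.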

The Lost City Found!!
Monika Agarwal
2nd Year, ECE, NIT Durgapur

Talking about aliens fills everyone with excitement. Who wouldn't be thrilled at even the thought of having people like us somewhere 'up above the sky'? For decades, scientists have been working continuously to find evidence of life forms on Mars, or of any basic necessity for life to exist there. The smallest suspicion of a molecule of H2O being found on Mars becomes the headline of tomorrow's newspapers all across the world. But have we fully explored Earth? No is the obvious, unwillingly accepted, universal truth. Deep beneath the Atlantic Ocean are weird, tall, chimney-like structures, with an extreme environment and extremes of temperature. What is astonishing is that sea crabs and other sea creatures not only survive in, but actually depend on, this extreme and poisonous biosphere. Oops, it isn't poisonous for them! A rare biosphere with a complete absence of oxygen and carbon dioxide, a high density of methane and hydrogen, a temperature of 76°C, and water far saltier than the sea is what they CAN'T live without. It is also completely devoid of sunlight, believed to be the source of energy for any ecosystem. In 1970, the discovery of black smokers amazed the world: places with life at deep-sea depths, in water boiling at high temperatures, where the presence of carbon dioxide at least made the presence of life comprehensible. But the metabolism of creatures living on hydrogen and methane remains a mystery. Unveiling it could unfold the history of the existence of life on Earth, and, strangely, also on Mars (methane has recently been discovered in the atmosphere of Mars). We say we are still untouched by the mysteries of the universe, but the fact remains that Earth itself is still an abyss of exploration for mankind. The Lost City covers an area of about 3 lakh square kilometres, with chimney-like structures ranging from 30 to 200 feet in height and, more surprisingly, a huge population. We have found one lost city; guess how many more are awaiting!!
Lost City was discovered in 2000, during a National Science Foundation expedition in the Atlantic Ocean, by UW oceanography professor and paper co-author Deborah Kelley and others.

Here the bar structure is defined on 4 beats (you might consider it to be 8). The time signature for this piece is 4/4 (or 8/8 or 16/16; they are technically the same, though for a percussionist they differ: in the former case they tap 8 times in each bar, whereas in the latter they tap 16 times). Melody: There are twelve notes in music theory, comprising 7 basic notes and 5 secondary ones. In fact, the 7 basic notes are fairly well known, either as

GRAVITY One of the most popular Twitter apps for Symbian, Gravity features an attractive tabbed interface, and the UI is polished and slick.

No 2 CO2 Meat the Monster: producing one kilogram of mutton in India causes about the same greenhouse emissions as driving a Maruti 800 for 67 km.




Sayantan Guha
4th Year, ECE, NIT Durgapur

Bishal Madhab
4th Year, MME, NIT Durgapur

I'm going to make a name for myself. If I fail, you will never hear of me again. Eadweard Muybridge, ca. 1852

And, of course, 25 years later, when this young artist finally got to implement his incredibly simple idea of taking still photos of a running horse and playing them one after the other, he had no idea how it would live on forever. For that was how he created the world's first moving picture. A trotting horse. Of course, the world's first proper film, the first movie to tell a story, came years later, after Charles Francis Jenkins invented the projector with the vision of showing moving pictures to an entire audience. The Lumière brothers made full use of this invention, holding the first public screening of projected motion pictures in 1895, at the Salon Indien du Grand Café in Paris. It featured ten short films, including their first film, La Sortie de l'Usine Lumière à Lyon (Workers Leaving the Lumière Factory). Each film was 17 metres long, which, when hand-cranked through a projector, ran approximately 50 seconds. The audience stared in surprise and awe at the pictures and images that moved and narrated stories on screen. Charlie Chaplin and other great actors carried the baton forward, but people began to get a bit restless as the days went by. What was the point of watching a set of pictures that spoke nothing? It was always better to watch a play, where the people at least talked. Pundits began to predict a premature death for this new art form named cinema (a name it gets from the cinematograph) when suddenly, Al Jolson spoke up on screen: Wait a minute, wait a minute, you ain't heard nothin' yet. Thanks to the Vitaphone sound system used by the legendary Warner Brothers, we got the first talkie, The Jazz Singer, in 1927. In India, Raja Harishchandra, released in 1913, became the first full-length feature film. It was critically acclaimed, and even has an IMDb rating of 8.1! Alam Ara was the first Indian talkie.
Technology in cinema began to advance in leaps and bounds. In

1857, Oscar Rejlander had made the first special effect in photography by combining 32 negatives into a single image. However, the next big development came when Kodak struck gold, using the simple fact that all colour is nothing but a combination of red, green and blue to create colour photography. And in came the colour film. No more black and white. That's only for newspapers, wrote a reporter of The Times. All these might appear to be very crude forms of technology compared with what we see in an average film today, but in their day they stunned the world. As did a young man named Walt Disney, the proponent of the animation film, the man who created Snow White and the Seven Dwarfs in 1937 using only hand-drawn pictures and nothing else. Actually, the history of animation films could make an article in itself, so we'll jump to the next technological boom: special effects. Steven Spielberg's ET, based on a short story written by another great director, Satyajit Ray, made the first big use of special effects in movies, creating the grotesque yet cute alien that everyone thought existed, and refused to believe was just yet another gift of technology to man. From Jurassic Park to The Matrix to Avatar, these make-believe effects have come such a long way that, with the proper software, you could create them on your own computer. And in the meantime, we got 3D. IMAX made use of the age-old technology of polarized images (where you have two images, one corresponding to each eye) and brought about the rebirth of three dimensions in cinema. In fact, if you go to a movie hall today, there is hi-fi technology everywhere, from Dolby sound (so that you feel Gandalf's horse is coming from behind you) to 3D (because of which the dream world actually seems to go vertical); you can admire it all. But make sure you remember that little man who, in 1877, thought of taking multiple shots of a horse. A trotting horse.

There is an ever-increasing demand for energy and an equally expanding concern about environmental pollution. Experts around the world are declaring that just as the last decade was the information decade, the next one will be the green technology decade. One of these green technology endeavours is the electric vehicle industry, which is essentially attempting to engender mass adoption of green transport. Such motives stem from the increasing demand to eradicate pollution and the need to do away with total oil dependence. To use green fuel efficiently, we need to develop automobiles with lower weight. Even in the present oil-dependent automobile industry, we urgently need lightweight automotive bodies to make our rides fuel efficient. In any case, we need lightweight automotive materials, and polymer composites are the answer to that requirement: they are very low in weight with no compromise in strength. To prove themselves embodiments of fuel conservation and high mileage, different automobile companies are already using polymer composites as automobile body materials. As a materials engineering undergraduate, I was always interested in exploring the latest advancements in this field and, fortunately enough, I succeeded in availing myself of the lifetime opportunity of doing research on polymer composites at the University of Michigan-Dearborn as my summer internship after my third year of engineering. While starting to talk about my foreign internship, I must confide to all the readers that the process of getting one was, in my case, really exciting and, at times, very frustrating. It started with Facebook chats with my senior friends at IIT Kharagpur, who were flaunting their selection for internships at different foreign institutes. This was when I was in my second year and was not aware whether an NITian could really expect an international internship during his or her undergraduate life.
Well, it came to me eventually that if you are an NIT-D undergraduate, it is quite possible to get one. When I completed my second year of engineering, I came to know that our institute had sent around 17 students abroad for internships after their third year. This was great news to me, and for the whole next year it was on my mind every second that I too must be one of them. So I started sending e-mails to institutes and universities all around the globe. It was really hectic, but full of fun, as I felt as if I were roaming the world while simply browsing the websites of different universities. I was able to convince one university in Germany, though my stay there would not have been fully sponsored. Finally, during the second week of March 2010, a notice was pasted on the mess door of my hostel declaring that NIT-D was sending some students to the University of Michigan for a fully

sponsored summer research program. That was a day of determination for me, and over the next few days I worked really hard for the position, and guess what, I was selected! For anyone obsessed with automobiles, Detroit is Mecca. Popularly known as the world's traditional automotive centre, "Detroit" is a metonym for the American automobile industry and an important source of popular-music legacies, celebrated by the city's two familiar nicknames, the Motor City and Motown. For me, the automobile is the greatest invention ever made by human beings, and a few days' stay in the world's auto-hub city, surrounded by automobile manufacturing giants such as Ford, Chrysler and General Motors, was equivalent to attaining nirvana. Those 45 days in Detroit were a lifetime experience for me, full of grandeur, hopes and, of course, since we are talking about Detroit, Eminem's rap songs. I was assigned the work of studying the effect of a chemical known as a coupling agent on the mechanical properties of a polymer composite. In simple words, a composite is an engineered material with two or more constituent phases, one phase being the matrix in which all the other phases are reinforced. A composite has improved properties compared with its constituent phases; when the matrix is a polymer, it is termed a polymer composite. My initial task was to prepare an E-glass-reinforced Nylon-6 composite prepreg with properties complying with the benchmark provided by my supervisor, based on preceding research on the topic. After I succeeded in attaining the benchmark, I had to use the coupling agent during the manufacturing process and study its effect on various mechanical properties of the material. This part of the work was purely original research. The final products of the research project were composite laminates with significantly improved properties, much to the delight of my supervisor as well as me.
The entire work was performed at the Center for Lightweighting Automotive Materials and Processing (CLAMP), University of Michigan-Dearborn. Today is the age of innovation, and Anveshan is the right term for what we need to do to make the world a better place. Research is a word most of us see as something scary and boring. That is a huge misconception, because finding the unknown has always fascinated the human psyche. My sincere advice to my junior batches is to explore the technological breakthroughs in your field of interest and engage in academic research projects from your second year of engineering onward. As the new Audi campaign says, Advanced is the mind that does not stop at excellence. All the best!!!

Nokia Photo Browser This is one impressive photo browser, adding pleasing effects and finger-friendly navigation to the photo browsing experience. The app also features face recognition, allowing you to swipe from face to face in a group photo.

No 2 CO2 One night of air-conditioner use (a 1-ton AC suitable for a small room) results, in just eight hours, in approximately the same carbon footprint as driving a Maruti 800 for 85 kilometres.


How to do it? There are many ways of applying cryptographic techniques; the following are the main ones that have evolved to date.

Substitution Cipher In this type, given a sequence of letters (or symbols), each letter (or group of letters) is replaced by another letter (or group) without altering the sequence. The first substitution cipher was the Caesar Cipher. In this technique, each letter is replaced by the letter x positions away in the alphabet (wrapping circularly from z back to a). This x is the key of the cipher. For example, if ABCZ is the plaintext and 4 is the key, then the ciphertext is EFGD. To decrypt the ciphertext, the receiver replaces each letter with the letter x positions before it (again circularly).

Transposition Cipher In this type, a sequence of letters (or symbols) is grouped into, say, k letters, and each group is replaced by one of the permutations of its letters. Thus the order of the letters is changed, and the permutation used is the key of the cipher. For example, take the following plaintext: It's 12PM at night and we are still studying. We cannot help ourselves because tomorrow is Mathematics class test and there are five chapters to cover. If the letters are grouped in threes, we have: (It')(s 1)(2PM)( at)( ni)(ght)( an)(d w)(e a) (re )(sti)(ll )(stu)(dyi)(ng.)( We)( ca)(nno) (t h)(elp)( ou)(rse)(lve)(s b)(eca)(use)( to)(mor)(row) ( is)( Ma)(the)(mat)(ics)( cl)(ass) ( te)(st )(and)( th)(ere)( ar)(e f)(ive)( ch)(apt)(ers) ( to)( co)(ver.) Now we reverse each of the groups, and we get the following ciphertext: 'tI1 sMP2ta in thgna w da e erits llutsiyd.gneW ac onnh tpleuo esrevlb saceesuot romworsi aM ehttamscilc ssaet tsdnaht erera f eevihc tpasreot oc rev . Another way is the columnar transposition cipher.
In this technique, the plaintext (written left to right) is filled into a grid of m rows and n columns, starting from the 1st row and 1st column. Once the grid is filled, the part-ciphertext is generated by reading the columns: first the first column, then the second, and so on. This process is repeated until the whole plaintext is covered, yielding the final ciphertext.

Polyalphabetic Cipher The techniques above use a key that does not change while the plaintext is processed. When the key itself becomes a function of position in the plaintext, the technique is called a polyalphabetic cipher. The following is a simple polyalphabetic cipher. Let the plaintext be: For twenty days Kolkata was getting fried with hot sun rays. The extreme heat created a large low pressure trough near coastal areas of the Bay of Bengal. The pressure is so low in that region that it is enough to create a large cyclone which can have wind speeds above 150 kilometres per hour.

We replace the first letter by the letter (circularly) next to it, the second letter by the letter two positions away, the third by the letter three positions away, and so on until the end of the plaintext is reached. This gives the following ciphertext: gqu xwfpwc dbav oommdxa xcv keuvlrg gtlid xkwl hpv vyn scbw. tig hbtsgpi hfcw grfcwid b ndvgf nra psgvwusg wvovik rebt fsatvdp asgdw og ddc og dhrgbn. wle qthwsvth ms tq osw jp wlau thkipp wlau kw ms fprygi vr grfcwi a mcuke dafpoog zlidj fen icyi wjpg wpfgg ebpxh 150 oimqpitsgv tes jryr. Another example of a polyalphabetic cipher is the Trithemius cipher.

All the techniques described above share one feature: the key used for encryption is also used for decryption, so they are symmetric key ciphers. Thus, on the basis of the type of key used, we have the following classification: 1. Symmetric key ciphers 2. Asymmetric key ciphers.

One of the most famous symmetric key ciphers is DES. DES is a substitution cipher, a transposition cipher and a polyalphabetic cipher all at once: it is divided into 16 stages (called rounds), and each round uses a key derived from the given key.

An asymmetric key cipher uses one key for encryption and another key for decryption. RSA is an asymmetric key cipher, and it works in the following way. First it generates two large primes, say p and q. The product n = p.q is found, along with φ(n) = (p-1)(q-1). A number e is found such that e is co-prime to φ(n) and e < φ(n); e acts as the encryption key and is called the public key. Another number d is found such that d < φ(n) and the product d.e leaves remainder 1 when divided by φ(n) (written d.e mod φ(n) = 1); d acts as the decryption key and is called the private key. The plaintext is broken into groups such that the value m of each group satisfies 0 < m < n, and the encrypted message is z = m^e mod n. For decryption, the relationship m = z^d mod n is used.
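The RSA key setup and the encrypt/decrypt relationships can be sketched with deliberately tiny primes. This is a toy illustration only (real RSA uses primes hundreds of digits long, plus padding schemes); the numbers in the usage note below are the classic textbook values, chosen purely so the arithmetic is easy to follow.

```python
def rsa_keygen(p, q, e):
    # n = p*q and phi(n) = (p-1)(q-1); d is chosen so that d*e mod phi(n) = 1
    n = p * q
    phi = (p - 1) * (q - 1)
    d = pow(e, -1, phi)   # modular inverse (built-in since Python 3.8)
    return (e, n), (d, n)  # (public key, private key)

def rsa_encrypt(m, public):
    e, n = public
    return pow(m, e, n)   # z = m^e mod n

def rsa_decrypt(z, private):
    d, n = private
    return pow(z, d, n)   # m = z^d mod n
```

With p = 61, q = 53 and e = 17, key generation gives n = 3233 and d = 2753; encrypting m = 65 produces z = 2790, and decrypting 2790 returns 65.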
Another classification distinguishes ciphers by how much plaintext they can operate on at a time: 1. Block ciphers 2. Stream ciphers. A block cipher takes at least 2 units of plaintext at a time, typically 8 characters; a stream cipher operates on 1 unit of plaintext at a time.

Can it be broken? The goodness of a cryptographic scheme is judged by how hard it is to break the cipher and how hard it is to find the key. The art of breaking a cipher (popularly known as code breaking) lies in cryptanalysis. It had already started with Al-Kindi, and the art continued to evolve through the Muslim Golden Age and the Renaissance, though the technique remained the same art of frequency analysis (and, interestingly, this technique is still valid to some extent). The following techniques were, and are, used:

1. Ciphertext-only attack. The breaker has only some ciphertexts and needs to derive the plaintext and the key used in the cipher. The Enigma was broken using this method.

2. Known-plaintext attack. The breaker has a set of plaintexts and the corresponding set of ciphertexts (and possibly the cipher too), and tries to determine the transformation (the cipher, the key, or both). For example, let us consider the following ciphertext: gqu xwfpwc dbav oommdxa xcv keuvlrg gtlid xkwl hpv vyn scbw. tig hbtsgpi hfcw grfcwid b ndvgf nra psgvwusg wvovik rebt fsatvdp asgdw og ddc og dhrgbn. wle qthwsvth ms tq osw jp wlau
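The ciphers described in this article, and the frequency-analysis idea behind a ciphertext-only attack on a Caesar cipher, can be sketched in Python. This is a toy sketch with function names of our own, not production cryptography; in particular, the polyalphabetic function assumes the per-letter shift simply keeps growing with position, and the key-guessing trick assumes the ciphertext is long enough that 'e' is the most common plaintext letter.

```python
from collections import Counter

def caesar(text, key):
    # substitution: shift each letter `key` positions, wrapping Z back to A
    out = []
    for ch in text:
        if ch.isalpha():
            base = ord('A') if ch.isupper() else ord('a')
            out.append(chr(base + (ord(ch) - base + key) % 26))
        else:
            out.append(ch)
    return ''.join(out)

def reverse_groups(text, k):
    # transposition: split into k-letter groups and reverse each group
    return ''.join(text[i:i + k][::-1] for i in range(0, len(text), k))

def columnar(text, n_cols):
    # columnar transposition: write the text row by row into a grid of
    # n_cols columns, then read it out column by column
    rows = [text[i:i + n_cols] for i in range(0, len(text), n_cols)]
    return ''.join(r[c] for c in range(n_cols) for r in rows if c < len(r))

def polyalphabetic(text):
    # shift the 1st letter by 1, the 2nd by 2, and so on (non-letters skipped)
    out, i = [], 0
    for ch in text:
        if ch.isalpha():
            i += 1
            base = ord('A') if ch.isupper() else ord('a')
            out.append(chr(base + (ord(ch) - base + i) % 26))
        else:
            out.append(ch)
    return ''.join(out)

def guess_caesar_key(ciphertext):
    # ciphertext-only attack: assume the most frequent ciphertext letter
    # stands for 'e', the most common letter in English
    letters = [c.lower() for c in ciphertext if c.isalpha()]
    top = Counter(letters).most_common(1)[0][0]
    return (ord(top) - ord('e')) % 26
```

Here caesar("ABCZ", 4) gives "EFGD", matching the article's example, and decryption is just caesar with the negated key; guess_caesar_key recovers the key of any sufficiently long English Caesar ciphertext without ever seeing the plaintext.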

Avik Mitra
Alumnus, Class of 2007, NIT Durgapur

What is it? Cryptography is the science and art of hiding information in text in such a way that the text containing the hidden information can be exposed and made easily available. This is not data hiding, which prohibits exposure of the text containing the data. The techniques used in cryptography are called cryptographic techniques. A cryptographic technique consists of a sequence of steps that, when followed, implements that particular technique. A sequence of steps that terminates and gives one or more outputs is called an algorithm, and the algorithm that accomplishes a cryptographic technique is called a cipher. The text on which a cipher is applied is called the plaintext, and the output of a cipher is called the ciphertext. In simple terms, a cipher works in the following way: it reads each letter (or sequence of letters) from the plaintext and replaces it with another letter or other symbol(s). To work, a cipher needs, apart from the plaintext, a key, which assists the algorithm in producing the ciphertext. The process of converting plaintext to ciphertext is called encryption, and the inverse, converting ciphertext to plaintext, is called decryption. Thus, altering the plaintext, the key or the cipher will change the ciphertext. The elegance of cryptography lies in the fact that what you see is bogus, but what it can reveal is precious! Each time we talk over a cellular phone, each time we use an ATM card or net-banking, each time we use HTTPS to connect to Google (the encrypted.google.com appearing in Google Chrome), each time we enter a password to get into an e-mail account or a computer, we are using cryptographic techniques, the ciphers, with the password as the second input (the key). The field of cryptography has now become diverse (yet with a simple and small core), and it is applied to nearly every field of communication. When did it all start?
The timeline of cryptography starts with Julius Caesar: the Caesar cipher and its variants were used in the Roman military during his period. It was Al-Kindi (801-873 A.D.), the Arab mathematician, who was the first to perform cryptanalysis (the analysis and breaking of cryptographic techniques). Motivated by his study of the texts of the Holy Quran, he invented the technique of breaking a cipher using the frequency of occurrence of the letters in the plaintext, and hence laid the way for deriving a cipher by observing a plaintext and its corresponding ciphertext. The table shows the frequency of occurrence of the letters of the English alphabet. The contribution was so great that mathematicians and experts in cryptography acknowledge it as the most fundamental and influential work until World War II.
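Al-Kindi's frequency technique is easy to reproduce today. A minimal sketch in Python (the function name and example strings are my own illustration, not from the article): count how often each letter occurs in a ciphertext, then compare the counts against the known letter frequencies of the language.

```python
from collections import Counter

def letter_frequencies(text):
    # relative frequency of each letter, ignoring case and non-letters
    letters = [ch for ch in text.lower() if ch.isalpha()]
    total = len(letters)
    return {ch: count / total for ch, count in Counter(letters).most_common()}
```

For a long enough ciphertext produced by a simple substitution cipher, matching the most frequent ciphertext letters against the most frequent English letters ('e' at roughly 12.7%, then 't', 'a', ...) recovers most of the substitution, which is exactly the attack Al-Kindi described.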

Leon Battista Alberti (1404-1472) invented a new technique for devising ciphers, called the polyalphabetic cipher. In this technique, the key for the cipher is varied each time the cipher is applied to a letter. During the Renaissance, the application of ciphers and cryptanalysis gained importance, but the effectiveness of ciphers did not advance much, and it was not until the 19th century that Charles Babbage (1791-1871), Friedrich Kasiski (1805-1881) and Edgar Allan Poe (1809-1849) contributed and published their work on cryptanalysis. In 1917, Gilbert Vernam (1890-1960) proposed the construction of a ciphertext-creating machine and thus led the way to the development of electro-mechanical devices that implemented ciphers in hardware, which were used widely in World War II. Claude E. Shannon, in his paper Communication Theory of Secrecy Systems (1949), founded a solid theoretical basis for cryptanalysis. However, most of the work in and after the 1950s was kept hidden by government agencies and did not come to the public until the 1970s. In 1975 the Data Encryption Standard (DES) draft was made public (the Soviet Union was using the GOST cipher, but it did not become public until 1998). Apart from the contribution of mathematics to cryptanalysis, every cipher till 1976 was of the same type, called the symmetric key cipher: the encryption and decryption processes involved the use of the same key. In 1978 a new type of cipher, called the asymmetric key cipher, was developed by Ron Rivest, Adi Shamir and Leonard Adleman. The cipher was called RSA. It uses one key for encryption and another key for decryption, and is based on number theory and modular arithmetic. At present numerous ciphers exist, and so do their detailed cryptanalyses. Cryptography has become an interesting field in academia and uses many mathematically diverse techniques.
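Alberti's idea can be sketched in a few lines of Python (this is my own illustrative implementation, not a historical reconstruction): the shift applied to each letter cycles through a sequence of shifts, so the same plaintext letter can encrypt to different ciphertext letters depending on its position, which defeats simple frequency counting.

```python
def polyalphabetic(text, shifts, decrypt=False):
    # Polyalphabetic shift cipher: the shift applied to each letter
    # cycles through the key sequence `shifts`, so identical plaintext
    # letters can map to different ciphertext letters.
    out = []
    i = 0  # counts letters only; spaces and digits pass through unchanged
    for ch in text:
        if ch.isalpha():
            base = ord('A') if ch.isupper() else ord('a')
            s = shifts[i % len(shifts)]
            if decrypt:
                s = -s
            out.append(chr((ord(ch) - base + s) % 26 + base))
            i += 1
        else:
            out.append(ch)
    return ''.join(out)
```

With the shift sequence [1, 2, 3, 4, 0], for instance, this reproduces the kind of position-dependent substitution a key of length five gives; decryption simply applies the negated shifts.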

NIT Durgapur
thkipp wlau kw ms fprygi vr grfcwi a mcuke dafpoog zlidj fen icyi wjpg wpfgg ebpxh 150 oimqpitsgv tes jryr.

The corresponding plaintext is: For twenty days Kolkata was getting fried with hot sun rays. The extreme heat created a large low pressure trough near coastal areas of Bay of Bengal. The pressure is so low in that region that it is enough to create a large cyclone which can have wind speed above 150 kilometres per hour.

Note that the first letter 'f' is replaced by 'g' (the letter next to 'f'), the second letter 'o' is replaced by 'q' (the letter two positions away from 'o'), the third letter 'r' is replaced by 'u' (the letter three positions away from 'r') and so on. The sixth letter 'e', however, is replaced by 'f' (the letter next to 'e'), repeating the pattern. We also see that the spaces and digits are unchanged. Thus, the following is the transformation used: read the current letter (let it be denoted by L) and its position (let it be denoted by P) in the text, neglecting anything other than letters; find the remainder R when P is divided by 5; replace L by the letter cyclically R positions away (if L is 'z' and R is 1, replace 'z' by 'a'). The key used here is 5.

3. Chosen-plaintext attack. In this case, the breaker knows the cipher but not the key. The breaker can, according to context, choose an arbitrary plaintext (obviously based on certain properties that the plaintext should hold) and apply the cipher to get the corresponding ciphertext. There is thus a chance that the key can be derived. Asymmetric key ciphers, which have a known public key, can be susceptible to chosen-plaintext attacks.

4. Chosen-ciphertext attack. In this case, the breaker can, according to context, choose an arbitrary ciphertext (again based on certain properties that the ciphertext holds), apply the reverse of the cipher (the decryption process) and try to obtain the corresponding plaintext. The breaker tries to guess the key.

5. Adaptive chosen-plaintext attack. This is a special case of the chosen-plaintext attack where the breaker chooses each plaintext after gaining experience from previously chosen plaintexts.

6. Adaptive chosen-ciphertext attack. This is a special case of the chosen-ciphertext attack where the breaker chooses the next ciphertext based on experience gained from previous attempts to break the cipher.

7. Related-key attack. In this type of attack, the breaker can obtain two or more ciphertexts that resulted from different keys, where the relationships between the keys are known.

Note that the above techniques state nothing about the details of how a key or a plaintext can be derived; they only say what entities are used and what property or properties those chosen entities should have. The details of how these entities are put to work, for example one or more ciphertexts or one or more plaintexts, are handled by differential cryptanalysis and linear cryptanalysis. In differential cryptanalysis it is assumed that the cipher is known. Given a key and at least two plaintexts (the difference between the plaintexts, say the number of positions by which they differ, being known), the difference between the texts at each step of the cipher is observed. It is by this observation that the key can possibly be derived. For example, in the case of DES (when implemented in a computer), it is observed that if two plaintexts

differ by 1 bit, the output from each round of DES can differ by 6 bits, and this difference increases and can be more than 30 bits when the final output comes. Also, if the keys differ by 1 bit, nearly the same effect can be observed. In linear cryptanalysis the observer tries to approximate the cipher by a set of linear equations, the number of which should be very large. Typically the aim is the following: which bits of a plaintext and which bits of a ciphertext, when bitwise XORed, give the same result as is obtained by bitwise XORing some definite bits of the key? Mathematically, it is expressed as:

p[i1] ⊕ p[i2] ⊕ … ⊕ p[ia] ⊕ c[w1] ⊕ c[w2] ⊕ … ⊕ c[wb] = k[s1] ⊕ k[s2] ⊕ … ⊕ k[sc]

(where ⊕ denotes bitwise XOR)
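As a toy illustration of what such a relation means in practice (the eight-bit "cipher" and every name below are invented purely for demonstration and bear no resemblance to DES), one can empirically estimate, over random plaintexts, how often a chosen XOR relation between plaintext, ciphertext and key bits holds:

```python
import random

def toy_cipher(p, k):
    # toy 8-bit "cipher": XOR with the key, then rotate left by one bit
    x = p ^ k
    return ((x << 1) | (x >> 7)) & 0xFF

def parity(x):
    # XOR of all the bits of x
    return bin(x).count("1") % 2

def relation_probability(mask_p, mask_c, mask_k, key, trials=10000):
    # estimate Pr[ XOR(selected plaintext bits) ^ XOR(selected ciphertext bits)
    #              == XOR(selected key bits) ] over random plaintexts
    hits = 0
    for _ in range(trials):
        p = random.randrange(256)
        c = toy_cipher(p, key)
        if (parity(p & mask_p) ^ parity(c & mask_c)) == parity(key & mask_k):
            hits += 1
    return hits / trials
```

For this toy cipher, bit 1 of the ciphertext is exactly bit 0 of the plaintext XOR bit 0 of the key, so the relation p[0] ⊕ c[1] = k[0] holds with probability 1. A probability far from 1/2 is precisely what a linear cryptanalyst looks for in a real cipher.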
Karn Kaul
4th Year, IT, NIT Durgapur

Here p[i] is the i-th bit of the plaintext, c[w] is the w-th bit of the ciphertext and k[s] is the s-th bit of the key. After running successive plaintext-ciphertext pairs through such equations, the probability that the above equation holds can be estimated. The DES cipher has already been broken this way (in the year 1998), and the standard has now been replaced by the Advanced Encryption Standard (AES), which uses a 128-bit key. For RSA, a public key of size 428 bits has been shown to be breakable.

So what to do? In response to increasing processing power (Moore's Law) and decreasing cost (the cost of a given amount of processing power halves roughly every 18 months), the following are thumb rules for any cryptographic scheme:
1. Increasing the block size makes cryptanalysis more difficult, but at the cost of processing time.
2. Increasing the key size makes cryptanalysis more difficult, but at the cost of processing time.
3. Asymmetric key ciphers are more secure than symmetric ciphers, but the processing time for symmetric key ciphers is smaller.
4. If symmetric key ciphers are used, increasing the number of rounds makes cryptanalysis more difficult, but at the cost of processing time during encryption and decryption.
5. If ciphers are used for communication over a telecommunication or computer network, it is recommended that the key be changed frequently. (This explains why users are requested to change the password of an e-mail account.)
6. If ciphers are used for communication over a telecommunication or computer network, symmetric ciphers can be used, but the keys themselves should be encrypted using asymmetric key ciphers.
7. If possible, use multiple ciphers in stages.
8. If possible, change the cipher in use after a finite period of time.
9. Use combinations of the above to make the system secure.

Epilogue
Cryptography has evolved over 2000 years. Most of the events mentioned in this article glorify lands outside India.
I could not find any historical document that gives an account of cryptography in ancient India. But I believe India was ahead of other parts of the world because, from the Vedic age, India was well developed in the theory of numbers, and from that age Indians were able to deal with large numbers far beyond what the Greeks achieved at the same time. If we can deal with numbers, then we can deal with their transformations too. I request the readers to discover what lies hidden till date.

Let me put forth a few phrases first:
• Beam me up, Scotty.
• There is no spoon.
• If it fell from a plane, what happened to the plane?
• All those moments will be lost in time like tears in rain. Time to die.
• E.T. phone home.
• Do. Or do not. There is no 'try'.

Fair warning: if not even one of these lines seems familiar to you, I'd suggest moving on and setting your eyes elsewhere. Science fiction can be a techie's paradise, if executed well. The first rule when dealing with science fiction is a very fundamental one: don't break those laws of science which are known to exist. Unfortunately, there are many instances of films doing exactly that, breaking that rule and hence spoiling an experience which would otherwise have been a memorable one. Since most science fiction films deal with space, let me state some of those flaws:
1. Sound in Space: Armageddon, Deep Impact, Mission to Mars, Serenity, Sunshine, Star Trek, Star Wars, etc.
2. Fires/Explosions in Space: Armageddon, Deep Impact, Star Wars, Serenity, Star Trek, etc.
3. Slow Motion in Zero Gravity: 2001: A Space Odyssey, Mission to Mars, Star Trek, Space Cowboys, etc.
4. Uber-fast Spaceships Dodging Lasers: Star Wars, The Black Hole, The Last Starfighter, Star Trek, etc.
5. FTL (Faster Than Light) Travel: Almost every film. Note: some films have even shown ship sensors (radar) detecting ships that are travelling faster than light. Pray tell me, what sensing agent are they using that can catch up to an FTL object and return?!
6. Easy Communication with Aliens: Alien, Contact, Star Trek, Stargate, Star Wars, The Last Starfighter.
7. g = 9.81 m/s² Everywhere: Serenity, Solaris, Star Trek, Stargate, Alien, The Last Starfighter, Star Wars.

As you see, Star Wars and Star Trek fare quite badly, along with some other well-known films. There are a few films, though, which deal with space and also have a clean sheet, Apollo 13 being one. On this note, there are a few films which may not have dealt with outer-space simulation but are worth mentioning for some reason or the other:
1. Blade Runner (1982): Despite doing very poorly at the time of its release, Blade Runner is widely regarded as one of the best films ever made, is a cult classic and is a leading example of the neo-noir genre. The film carefully examines humanity, the philosophy of religion and the moral implications of genetic engineering, while leaving several ends open to audience interpretation.
2. 2001: A Space Odyssey (1968): One of Stanley Kubrick's brilliant films, arguably his masterpiece, this movie depicted alien life, suspended animation, computers, spacecraft, zero gravity, the absence of sound in space, microgravity, delays in communication between distant planets and spacecraft, and more, so accurately that many critics consider it the most thoroughly researched science fiction film with respect to aerospace engineering, even though more than 40 years have passed since it was made.
3. Twelve Monkeys (1995): This is one of the rare films that actually uses the concept of time travel and does not mess it up with paradoxes and inconsistencies. That apart, the film aptly studies the subjective nature of memories and their effect upon perceptions of reality.

No 2 CO2: According to projections made by the Indira Gandhi Institute of Development Research, around 7 million people could be displaced along coastal India if global temperatures rise by a mere 2 °C.


Dragon Age II is the latest Role-Playing Game (RPG) from software powerhouse BioWare and a follow-up to the widely acclaimed Dragon Age: Origins. Role-playing games are usually defined not by playing a role but by character creation and the player's ability to create any kind of character they wish. A character's moral alignment affects conversation options and vice versa. As the genre grew, other elements were added, like your character's backstory, and these too were implemented into the gameplay in different ways. And herein lies the appeal of RPGs: flexibility and replayability. RPGs give players an attachment to their character, as well as adding to the level of replayability, since going through as a cunning thief would yield a slightly different story from your first run-through as an evil mage. You play Dragon Age 2 as Hawke, who, unlike the protagonist in Origins, is a fully voiced champion. Fans of Origins are likely to feel a twinge of disappointment when starting off, though. You can no longer select a race for Hawke; the game forces you to play as a human. You can only choose a gender and a class for your character (one of mage, rogue or warrior). The characters who join you fail to make as strong an impression as in Origins. Having said that, they do tend to grow on you. As their story arcs develop over the course of the game, various subtle nuances emerge in their complex personalities, which makes the characters steadily more endearing. This being Dragon Age 2, you would think, in true sequel fashion, it would pick up right where Origins left off. But it doesn't. Instead, it actually begins during the first game. From there, the game spans 10 years, with time jumps marked by cinematics between chapters of the story. Allowing it to overlap the first game's storyline in this way gives the writers an interesting chance to re-tell certain events from the original story from a different perspective.
This framed narrative opens up many interesting possibilities, such as bias and exaggeration in the telling. Unfortunately, this narrative style limits the branching storylines and hard-hitting choices the original was famous for. All this results in a semi-linear story, which is disappointing; Dragon Age 2 feels more like a prologue to the finale of the trilogy. Despite all this, the plot is very entertaining and chock-full of crazy moments. What is unique about the Dragon Age universe is its convoluted

Review

Ankit Bhargava
3rd Year, ME, NIT Durgapur

R. P. Singh
4th Year, BT, NIT Durgapur

sense of morality and the wide-reaching consequences of the decisions you have to make. BioWare realise that and have not fiddled with these mechanics. None of the choices that you will have to make can be categorised as good or bad, but each will have some consequence in the game world, some of them subtle, others not so much.

Cricket is unarguably the most dignified game ever played, often termed the Gentlemen's game. And in the gentlemen's game, it is mostly a question of the moral strength of the players to uphold its integrity. But technology, as is its habit, has continuously wreaked havoc on the most humble of sports, and Cricket

The style of fighting has changed. There are now multiple ways to play the game. For Origins fans, the intense tactical strategy is still available; nothing has changed there. But if you would rather play Dragon Age 2 as an action RPG, that is also an option. You can also choose to do a little of both. The cooldown times have been shortened and animations have been sped up, giving more immediacy to your actions and adding to the fluency of the combat experience. The combat also looks positively dazzling, with colourful spells lighting up the screen and giant ogres lumbering about. Add to that the AI improvements, and we have a massively satisfying combat experience on our hands. The user interface, though, is largely unaltered; the changes that have been made mostly streamline things. Dialogue scenes now have a dialogue wheel, just like in Mass Effect, which tends to be a little more intuitive and immersive than a drop-down menu, and different dialogue options have different personality types assigned to them. The writing and voice acting, always the BioWare forte, are spectacular to say the least. Each character is brilliantly voiced, the standout being a dwarf named Varric. The characters often spontaneously break out into conversations amongst themselves, which is often hilarious and will always make you stop to listen. Overall, Dragon Age II feels more like a reboot than a sequel. Different characters, the Kirkwall setting and the general simplification of the formula make the game significantly different; this isn't a direct follow-up to Origins. The semi-linear story also seems like a step backwards. However, the game still manages to draw you in, mainly because of the power of choice and the rippling effect it sends throughout the universe. While the gameplay is far from perfect, Dragon Age II is an absolute must for anyone interested in the world, the lore and a good story.

"I think the adulteration of technology with human intention was the reason why we didn't get that wicket." — MS Dhoni

Cricket recently saw some major changes which haven't really been well administered. Here we refer to the UDRS, the culprit in many on-field debacles. Hawk-Eye is a system that tracks the path of the ball using four to eight high-frame-rate cameras. These cameras capture the ball frame by frame from the point of release to the point of impact. Judicious placement of these cameras ensures 360-degree coverage of the pitch. Hawk-Eye uses the principle of triangulation: determining the location of a point by measuring angles to it from known points. The data is processed and visualised in 3-D, and the margin of error is minimal. The cameras are usually placed at long-on, long-off, the straight field and on the two sides of the wicket. The side cameras capture the point of release, trajectory and height; the straight cameras track the line. The calibration of the cameras is extremely crucial. Virtual data is extrapolated from the actual data gathered from frame-by-frame tracking of the ball by the high-speed cameras. Then, to measure the height of the ball on different surfaces, pitch-to-pitch variation is provided: data has been collected from over 400 matches played on various pitches in varying conditions. This information is fed through a processor using an algorithm to predict the trajectory of the ball.
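The triangulation principle can be sketched in two dimensions (the geometry is standard, but the function and the numbers are my own illustration, far simpler than a calibrated multi-camera Hawk-Eye rig): two cameras at known positions each measure a bearing to the ball, and the ball sits at the intersection of the two rays.

```python
import math

def triangulate(cam1, cam2, angle1, angle2):
    # 2-D triangulation: each camera at a known (x, y) position measures
    # the bearing (angle from the +x axis, in radians) to the ball;
    # the ball lies at the intersection of the two rays.
    x1, y1 = cam1
    x2, y2 = cam2
    d1x, d1y = math.cos(angle1), math.sin(angle1)
    d2x, d2y = math.cos(angle2), math.sin(angle2)
    # ray 1: (x1 + t*d1x, y1 + t*d1y); ray 2: (x2 + u*d2x, y2 + u*d2y)
    denom = d1x * d2y - d1y * d2x
    if abs(denom) < 1e-12:
        raise ValueError("rays are parallel; no unique intersection")
    t = ((x2 - x1) * d2y - (y2 - y1) * d2x) / denom
    return (x1 + t * d1x, y1 + t * d1y)
```

Hawk-Eye does this frame by frame in three dimensions with four to eight cameras, then fits a trajectory to the resulting track.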

But in practice, the precision is limited to 250 cm before impact; after that, the technology becomes less reliable. This is the reason players have been allowed to stay at the wicket even when Hawk-Eye showed the ball clearly hitting the stumps: evidence from past data has shown that the chances of an error increase if the distance from the point of impact is more than 2.5 metres. In most systems, there may be a point beyond which the information is not completely accurate. Another supplementary technique is the Hot Spot technology. It provides reliable information on the ball's contact with any part of the bat, pads, gloves or body using the infrared spectrum. The infrared cameras are positioned at opposite ends of the ground. Upon impact, friction generates heat, which causes the point of impact to show up bright. This, however, is a very expensive technology, and probably for the same reason it was not implemented alongside Hawk-Eye in the UDRS for the Cricket World Cup 2011. An additional measure that can be used to aid the UDRS is the Snickometer. By tracking audio waves, the Snickometer helps in determining

"It cost us the match. The technology is supposed to eradicate mistakes, but in this case it didn't." — William Porterfield
those fateful edges. But its fundamental limitation is that the device has to be used in proper synchronisation with the video in a particular frame, which is difficult given the point-size precision required and the huge size of a cricket field. This too has been missing from this edition of the World Cup, amid much criticism.

Swype: Swype lets you bolt on a replacement QWERTY keypad; all you do is swipe your finger from the first letter and across the others, and the traced word is typed for you.

No 2 CO2 The Carbon Footprint generated by a single roundtrip flight between India and USA / Canada overshoots the annual footprint of an average Indian by 62%


Shashank Garg
4th Year, ECE, NIT Durgapur

Vasu I
3rd Year, IT, NIT Durgapur

Since time unknown (well, not exactly very ancient times), mankind's two concerns with an Internet connection have been speed and mobility. Wires have never been able to limit the flight of imagination, and so in due course of time, out of air, mankind produced Wi-Fi to gain mobility. Next came the question: was portability important enough to compromise on data speeds? And even then, 50 metres didn't really broaden your horizons, did it? Well, it's time you got to know about WiMAX. WiMAX stands for Worldwide Interoperability for Microwave Access. It is a telecommunications protocol that provides fixed and mobile Internet access. Currently it can provide up to 40 Mbit/s, while the IEEE 802.16m update is expected to offer up to 1 Gbit/s fixed speeds! Imagine, huh! WiMAX is sometimes referred to as "Wi-Fi on steroids"; it is meant to provide interoperability among IEEE 802.16 standards along with reverse compatibility with IEEE 802.11 standard Wi-Fi. It can be used for a number of applications including broadband connections, cellular backhaul, hotspots, etc. It is similar to Wi-Fi, but it also permits usage at much greater distances: it is designed to provide broad coverage like a cellular network instead of small Wi-Fi hotspots. Additionally, given the relatively low cost of deploying a WiMAX network (in comparison to GSM, DSL or fibre optic), it is now possible to provide broadband in places where it might previously have been economically unviable. Line of sight is not needed between the user and the base station, though non-line-of-sight operation reduces the usable distance between them. The frequency bands used are 2 to 11 GHz and 10 to 66 GHz (licensed and unlicensed). IEEE 802.16 defines both the MAC and PHY layers and allows multiple PHY-layer specifications.

WiMAX vs Wi-Fi
Broadband Internet access has two big limitations: the last-mile solution and higher data rates. The presently available technology

like Wi-Fi does not provide sufficient bandwidth, its coverage is very limited, and roaming, backhaul interfacing and security are also limitations. WiMAX has evolved to take care of all these limitations. The coverage area of one site is very large, around 30 km, compared with Wi-Fi, which would require some 650 access points to cover a 10 sq km area. A bandwidth of 70 Mbit/s is good enough to cater to hundreds of home users. Roaming and mobility are available, and the security features are better than Wi-Fi's. The WiMAX standards offer a great deal of design flexibility, including support for licensed and licence-exempt frequency bands, channel widths ranging from 1.5 MHz to 20 MHz per connection, quality of service (QoS) and strong security primitives.

Network Scale
The smallest-scale network is a personal area network (PAN). A PAN allows devices to communicate with each other over short distances; Bluetooth is the best example of a PAN. The next step up is a local area network (LAN). A LAN allows devices to share information, but is limited to a fairly small central area, such as a company's headquarters, a coffee shop or your house. WiMAX is the wireless solution for the next step up in the scale, the metropolitan area network (MAN).

Indian Scenario of WiMAX Rollout
In India, WebSky has created a joint venture with World Wide Wireless India (WWWI) to design, build and run a network that could address 75 million people. WebSky will provide and construct the system, while WWWI will contribute its licensed frequencies in the 3.5 GHz spectrum, which cover nine large cities, including Mumbai and Ludhiana in Punjab. The partners will jointly operate the network and share the revenues. Also in India, the telecom giant BSNL has announced its plans to roll out WiMAX and Wi-Fi services in 10 major cities.
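The coverage comparison above is easy to sanity-check with a small sketch (my own back-of-the-envelope model: it assumes idealised circular coverage with no overlap, and the ~70 m Wi-Fi radius and ~15 km WiMAX radius are illustrative assumptions, not figures from the article):

```python
import math

def access_points_needed(area_km2, radius_m):
    # number of circular coverage cells required to serve an area,
    # ignoring overlap between cells and dead zones
    cell_area_km2 = math.pi * (radius_m / 1000.0) ** 2
    return math.ceil(area_km2 / cell_area_km2)
```

With these assumptions, covering 10 sq km takes about 650 Wi-Fi access points of 70 m range, in line with the article's figure, while a single WiMAX site with a 15 km radius covers the same area outright.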

YouTube, started in 2005, revolutionised video on the web. By May 2010, there were over 2 billion views per day, with 24 hours of video uploaded per minute! Between 2007 and 2008 every major broadcaster (re)introduced new online video libraries with customised players to help monetise and track content. Now the leading social networking site Facebook is grabbing more attention through video sharing. Streaming media enables real-time or on-demand access to audio, video and multimedia content via the Internet or an intranet. Streaming media is transmitted by a specialised media server application and is processed and played back by a client player application as it is received, leaving behind no residual copy of the content on the receiving device. Streaming media is the convergence of broadcast and rich media, empowering both content providers and audiences with a whole new world of choices. This article describes streaming using Flash Media Server. The Adobe Flash Media Server (FMS) provides a rich media delivery platform that reaches more people, more securely and efficiently, than any other technology. From user-generated content to movies and television shows to corporate training, FMS offers enterprise-level solutions to deliver content and communications. Flash Media Server communicates with its clients using the Adobe-patented RTMP over the Transmission Control Protocol (TCP), which manages a two-way connection, allowing the server to send and receive video, audio and data between client and server. In Flash Media Server 3.5, you also have the option to use stronger stream security with encrypted RTMP (RTMPE). RTMPE is easy to deploy and faster than using the Secure Sockets Layer (SSL) for stream encryption. RTMPE is just one of the robust new security features in Flash Media Server 3.5. Flash Media Server has both a server-side and a client-side architecture. The client experience is deployed as a SWF or AIR file,

created in either Flash or Flex. Clients run within a web browser (Flash Player), on a mobile device (Flash Lite 3) or as a desktop application (Adobe AIR). A client could also be another FMS or a licensed third-party technology that can stream or communicate with FMS. The server manages client connections and security, reads and writes to the server's file system, and performs other tasks. The client is the initiator of the connection to the server. When connected, the client can communicate with the server and with other connected clients. Clients connect to instances of applications; for example, a chat application may have many rooms, and each room is an instance of the chat application. Multiple instances of an application can run simultaneously. Each application instance has its own unique name and provides unique resources to its connected clients. In plain progressive download, by contrast, the video file is first downloaded to the user's hard drive and playback begins only once enough of it has been downloaded. The file is served from a standard web server through an HTTP request, just like a normal web page or any other downloadable document. Happy Streaming!
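The "play once enough has been downloaded" rule has simple arithmetic behind it. A sketch (my own back-of-the-envelope model, assuming a constant video bitrate and constant download bandwidth): if the network is slower than the video's bitrate, the player must wait long enough that the download finishes no later than playback does.

```python
def startup_delay_s(duration_s, bitrate_kbps, bandwidth_kbps):
    # minimum wait before starting playback so that a constant-rate
    # progressive download never stalls the player
    if bandwidth_kbps >= bitrate_kbps:
        return 0.0  # the network keeps up; play immediately
    # download time = duration * bitrate / bandwidth; playback, started
    # after the delay, must end no earlier than the download does
    return duration_s * (bitrate_kbps / bandwidth_kbps - 1.0)
```

A 100-second clip encoded at 1000 kbit/s fetched over a 500 kbit/s link needs a 100-second head start; a true streaming server such as FMS avoids this wait by matching the delivered rate to the playback rate instead.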

Joiku Spot: Oldie, but a goldie. Years before Android 2.2 allowed users to make a Wi-Fi hotspot from their phone's data connection, JoikuSpot had solved the problem for Nokia phones.

No 2 CO2 125 million people in Indian coastal region could be rendered homeless by the end of the century, owing to sea-level rise and accompanying drought precipitated by Global Warming.


The ` 1,76,000 Crore SCAM
Atri Roy
4th Year, IT, NIT Durgapur

Sima Kumari
3rd Year, CE, NIT Durgapur

magnet on top of the temple, one in the basement and four large magnets in the interior to make the statue hang, float and suspend in the air. The question that remains, though, is how the magnets retained their effect over time, as all magnetic materials lose their magnetic strength with age. A sculpture of Surya, the Sun God, at Konark. Brihadeeswarar Temple, at Thanjavur in Tamil Nadu, is the world's first complete granite temple. The 'Vimana', or temple tower, is 216 ft high and is among the tallest of its kind in the world. It is widely believed that the shadow of its gopuram never falls on the ground. Why is the garbhagriha so airy? The circumambulation that winds around the massive lingam in the garbhagriha presents the idea that the Chola Empire freely offered access to the gods. Golconda Fort, situated in Hyderabad, captures some interesting pages of myths and legends. A hand clap at a certain point below the dome at the Fateh Darwaza reverberates and can be heard clearly at the 'Bala Hisar' pavilion, the highest point, almost a kilometre away, a characteristic acoustic effect. This worked as a warning note to the royals in case of an attack. It is believed that there is a secret underground tunnel that leads from the 'Durbar Hall' and ends in one of the palaces at the foot of the hill, and that there was another secret tunnel connecting it to the Charminar.

The Losses
The Comptroller and Auditor General of India said that the 2G spectrum allotment is presumed to have caused a revenue loss of up to ` 1.76 lakh crore. The telecom ministry was responsible for undervaluing the 2G spectrum sold to new players in 2008, and it has been argued that the allotment price was not realistic.

What generally happens
Basically, spectrum is a state-owned resource, much like land. It can be leased or, as was the case with 3G, sold to major telecom operators so that they can operate within their bands without interference from other operators. The frequency used for 2G operation is the 800-900 MHz band, in which, for each cell of a cellular division or circle, an operator gets a band of frequencies which it distributes further into channels to provide service to its various users. If a licensee corporation sells its licence (including the spectrum) to another party, the licensee shares a part of the premium or profit gained through the sale with the government. The major operators like Airtel, Tata, Vodafone, etc. buy a part of the total available band from the government of a country and pay a huge price for it. This is usually a fair game, where operators bid for the bands in a simple auction; the government may thus get the best price for this resource. For instance, the selling of the 3G bands brought in ` 1,06,000 crore for the country, in contrast to the ` 35,000 crore initially expected by the telecom ministry.

What happened!
In January 2008, the 9 firms that were issued licences collectively paid the Ministry of Communications and Information Technology's telecommunications division ` 10,772 crore for the licensing of 2G bands. The entry fee for spectrum licences in 2008 was decided at 2001 prices, even though the mobile subscriber base had shot up from 4 million in 2001 to 350 million in 2008! Additionally, the cut-off date for applications was advanced by a week, to facilitate the interest groups, and licences were issued on a first-come, first-served basis.
No proper auction process was followed, and no bids were invited! TRAI had recommended auctioning the spectrum at market rates, but Raja ignored the advice of TRAI, the Law Ministry and the Finance Ministry! And, quite incredibly, corporations got licences without prior experience of telecom operation or, in the case of Swan Telecom, without even meeting the eligibility criteria.

Credits: The selling of the licences brought attention to four groups of entities - politicians who had the authority to sell licences, bureaucrats who implemented and influenced policy decisions,

corporations who were buying the licences, and media professionals who mediated between the politicians and the corporations on behalf of one or the other interest group.

Directors: The list included Unitech Group, Swan Telecom, Loop Mobile, Videocon Telecommunications Limited, S Tel, Reliance Communications, Sistema Shyam Mobile (MTS), Tata Communications, Vodafone Essar, Dishnet Wireless and Allianz Infra. Unitech Group, a real estate company that entered the telecom industry with its 2G bid, sold 60% of its company stake at a huge profit to Telenor after buying its licence, and Swan Telecom sold 45% of its company stake at a huge profit to Emirates Telecommunications Corp. (Etisalat) after buying its licence. The government did not receive any part of the profit from these transactions.

Actors: A. Raja, the ex-Minister of Communications and IT; Manmohan Singh, Prime Minister of India, heading the ruling UPA government led by the Congress, accused of not acting to remove Raja; M. Karunanidhi, the CM of Tamil Nadu and the DMK chief; Arun Shourie, the minister for Telecom during 2003; Pramod Mahajan, the minister for Telecom between 1999 and 2003; and Kanimozhi, MP and daughter of DMK chief Karunanidhi.

Media partners: Nira Radia, a former airline entrepreneur turned corporate lobbyist whose conversations with politicians and corporate entities were recorded by government authorities and leaked, creating the Nira Radia tapes controversy; Barkha Dutt, an NDTV journalist alleged to have lobbied for A. Raja's appointment as minister; and Vir Sanghvi, a Hindustan Times editor alleged to have edited articles to reduce blame in the Nira Radia tapes.

Indian architecture has opened many avenues for research. Be it the beautiful temples at Khajuraho or the exotic structures at the Ajanta and Ellora caves, Indian architecture has always been awe-inspiring. It has evolved over space and time and has been affected by numerous invaders who brought different styles from their motherlands. But it is an unavoidable fact that certain expressions tend to get magnified and others reduced when set against the vast canvas of the world. The delicate patterns used in these structures not only reveal a concentration on precision and design but also incorporate myths and interesting facts as expressions of different periods. This article deals with some of these marvels of history which have stunned the 21st century with their splendid features.

The Sun Temple at Konark, constructed from oxidized and weathered ferruginous sandstone, was dedicated to the Sun god Surya by King Narasimhadeva I in the 13th century and is one of the finest combinations of contemporary art and architecture. The spokes of its chakras (wheels) are so placed that they serve as sundials, and the shadows cast by them can give the precise time of day.

The Lodestone Legend

Legends describe a lodestone on top of the Sun temple whose magnetic effect drew vessels passing through the Konark sea towards it, resulting in heavy damage. Moreover, the magnetic effect of the lodestone disturbed ships' compasses so that they did not function correctly. To save their shipping, the Portuguese voyagers took away the lodestone, which was acting as the central stone keeping all the stones, the temple wall and the iron columns together in balance. Due to its displacement, the temple walls lost their balance and eventually fell down.

The controversial majesty

Legend describes that the image of the deity, the statue of the Sun God, was built of a material with iron content, with one large

Google Goggles: It does visual searches - you take a picture and the app tries to tell you what you have shot. Works for books, landmarks, logos and visiting cards, and new capabilities are bolted on every now and then.

No 2 CO2: Ten litres of orange juice needs a litre of diesel fuel for processing and transport.


NIT Durgapur

Anveshan

Ayan Mukherjee Rishipratim Mazumdar


3rd Year, ECE, NIT Durgapur

Prashant Kr. Chhetri


3rd Year, BT, NIT Durgapur

Here we take a look at some of the weirdest inventions that have been patented. The fact that the ideas for such inventions could even be thought of is simply mind-boggling. The point to focus on is not that most of these inventions are totally absurd, but that people took such pains to conceptualize the whole idea and then get them patented. Let us have a look at the inventions of these eccentric 'geniuses'.

Banana Suitcase

No one likes bruises on their bananas, so our fruity inventor came up with the Banana Suitcase, a protective case devised to protect your banana from the dangers lurking in the world outside its thick skin. The inventor's instructions: "In use, the user opens the container and places a banana inside thereof and closes the container to allow the user to carry the banana in a safe manner so that it remains fresh and is protected from becoming bruised."

Solo-Operable Seesaw

Believe it or not, the real inspiration behind this invention was the schoolyard bully! The inventor makes it perfectly clear that the seesaw, for most an innocent yet essential piece of playground equipment, can be converted into a lethal weapon by a devious mind. In his patent, he mentions: "A seesaw plank can be a dangerous object in the hands of a mischievous child, who may abruptly pull down on one end when another child is passing by the opposite end, causing the opposite end to rise quickly and potentially strike the passing child. As the seesaw operates on the principle of counterbalancing weights, injury can result if a rider suddenly falls or jumps off the seesaw while the opposing rider is high in

the air, particularly if one rider is substantially heavier than the other. In this scenario, the opposing rider is sent crashing to the ground and the sudden impact may jar a child's joints or cause spine or tailbone injuries."

Motorized Ice Cream Cone

Introducing the Motorized Ice Cream Cone, designed to delight any child, or the inner child within you. The inventor explains: "it expands the typical act of eating an ice cream cone to include numerous playful and creative possibilities including sculpting and carving of channels with one's tongue to form interesting shapes and patterns on the outer surface of an ice cream portion". In other words, it's OK to play with your food. Typically the Motorized Ice Cream Cone spins, but it can also rotate, vibrate and agitate.

The Finger Brush

Brushing more and enjoying it less? What you need is a novel approach, and we have just what the doctor, umm, dentist ordered. This handy little toothbrush is designed to give you more control and sensitivity when brushing your teeth. Since you have nerve endings in your fingertips, you can now feel your way around sensitive areas of your mouth. The rubbery, bushy thing on the tip of your finger connects to the resilient elastic handle so that it does not fall off. You say you're not clumsy, just a bit sloppy? Never fear, the little circular bulge at the base of the handle is designed to keep saliva, water and toothpaste from running down your wrist and into your sleeve. That's great news, except now, what will you do with a palm full of foamy spittle?

The GFAJ-1 is a recently discovered rod-shaped bacterial species which gained worldwide popularity in December 2010. It falls into the class of organisms called extremophiles. GFAJ-1 was isolated from a saline lake in California in the US by a NASA research team headed by Felisa Wolfe-Simon, a renowned astrobiologist; hence its name, which stands for 'Give Felisa A Job', a funny name indeed. The discovery is a breakthrough in the ongoing research on microbes showcasing amazingly varied and extreme features. It is believed that the microbe, when starved of phosphorus, is capable of incorporating elements like arsenic in its proteins and lipids as well as in its DNA and RNA. However, the mechanism of this transformation is still unknown.

Mono Lake, California, US

The main inference which comes from the discovery of strains of this microbe is that extraterrestrial life forms may have a biochemical makeup different from that of life on our planet Earth. Researchers from NASA started the isolation of this

bacterium from the highly saline Mono Lake in California in 2009. The lake also has one of the highest concentrations of arsenic in the world. The discovery of a microorganism that allegedly can use arsenic to build some of its cellular components has implications for astrobiology. Some astrobiologists speculate that this could suggest that life can form in the absence of large amounts of available phosphorus, thus increasing the probability of finding traces of life elsewhere in the universe. The finding gives weight to the long-standing idea that life on other planets may have a chemical makeup differing from that of known organisms in fundamental ways, and may help in the search for extraterrestrial life. It also suggests a different model of evolution in which organisms use arsenic in place of phosphorus in arsenic-rich environments. Thus a totally different origin of life is speculated. However, some researchers doubt that arsenate has replaced the phosphate in the DNA of this organism. They suggest that, under the test conditions in the laboratories, trace contaminants were sufficient to provide phosphorus for the cell's DNA. They believe that the arsenic goes to some other place in the cell body. A Harvard microbiologist believes that arsenic-containing macromolecules are unstable in water. Whatever the case may be, this recent discovery has turned the heads of several researchers. Who knows, perhaps the key to the mystery of life lies within the closet of these saline lakes, GFAJ-1 being one of its keepers?

Microscopic picture of GFAJ-1 bacteria.

Photoshop Express: The mobile app is easy to use, unlike its desktop counterpart, and lets you do simple crops and colour corrections and apply one-touch effects to your photos.

No 2 CO2: The Indira Gandhi Institute of Development Research has reported that the shifting of growing seasons for major crops such as rice would reduce yields by up to 40%.



which these computers operate. The end result is that they are put together in a completely different way. Quantum computers have been built on a small scale, and work continues to upgrade them to more practical models.

How a traditional computer works

Computers function by storing data in a binary number format, which results in a series of 1s and 0s retained in electronic components such as transistors. Each component of computer memory is called a bit and can be manipulated through the steps of Boolean logic, so that the bits change between the 1 and 0 modes (sometimes referred to as "on" and "off") based upon the algorithms applied by the computer program.

In quantum computing, a qubit or quantum bit is a unit of quantum information, the quantum analogue of the classical bit, with additional dimensions associated with the quantum properties of a physical atom. The physical construction of a quantum computer is itself an arrangement of entangled atoms. (Quantum entanglement is a property of the quantum mechanical state of a system containing two or more objects, where the objects that make up the system are linked in such a way that the quantum state of any of them cannot be adequately described without full mention of the others, even if the individual objects are spatially separated. Albert Einstein once famously described it as "spooky action at a distance".) The qubit represents both the state memory and the state of entanglement in a system. A quantum computation is performed by initializing a system of qubits with a quantum algorithm, "initialization" here referring to some advanced physical process that puts the system into an entangled state.

To understand this, let us consider an example. Think of a qubit as an electron in a magnetic field. The electron's spin may be either in alignment with the field, which is known as a spin-up state, or opposite to the field, which is known as a spin-down state. Changing the electron's spin from one state to another is achieved by using a pulse of energy, such as from a laser - let's say that we use 1 unit of laser energy. But what if we use only half a unit of laser energy and completely isolate the particle from all external influences? According to quantum law, the particle then enters a superposition of states, in which it behaves as if it were in both states simultaneously. Each qubit utilized could take a superposition of both 0 and 1; two such qubits together give the following four results: a 1/1 result, a 1/0 result, a 0/1 result and a 0/0 result.

Advantages of qubits over bits

Each bit in a normal computer can only be one of 0 or 1 and nothing else.
No matter how many bits you have, a computer at a single point in time can only occupy one combination of those bits in order for the programming to work. Qubits, on the other hand, have more possibilities: because they exhibit superposition, different positions can be occupied simultaneously. For example, a 3-bit register in an ordinary computer can store only one of eight binary configurations (000, 001, 010, 011, 100, 101, 110 or 111) at any given time, whereas a 3-qubit register in a quantum computer can store all eight numbers simultaneously, because each qubit represents two values.

2nd Year, ECE, NIT Durgapur
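The 3-qubit register described above can be sketched in plain Python. This is only a classical simulation for illustration (the function name and structure are our own, not any real quantum-hardware API): an n-qubit register is represented by one amplitude per basis state, so it takes 2**n numbers to write down.

```python
import math

# Classical sketch of an n-qubit register: one amplitude per basis
# state |00...0> through |11...1>, i.e. 2**n numbers in all.
def uniform_superposition(n_qubits):
    dim = 2 ** n_qubits
    amp = 1 / math.sqrt(dim)   # equal weight on every basis state
    return [amp] * dim

state = uniform_superposition(3)   # the 3-qubit register of the example
print(len(state))                  # prints 8: all eight configurations at once
# Born rule: squared amplitudes are outcome probabilities, summing to 1.
total = sum(a * a for a in state)
```

Note how the storage cost doubles with each qubit: this exponential blow-up is exactly why classical machines struggle to simulate quantum ones, and why real qubits are so powerful.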

A three-qubit register can represent 8 classical states simultaneously.

With classical computers gradually approaching their limit, the quantum computer promises to deliver a new level of computational power. With it comes a whole new theory of computation that incorporates the strange effects of quantum mechanics and considers every physical object to be some kind of quantum computer. A quantum computer thus has the theoretical capability of simulating any finite physical system and may even hold the key to creating an artificially intelligent computer. A quantum computer can factor large numbers in a reasonably small period of time. This enables it to perform calculations on a far greater order of magnitude than traditional computers, a capability with serious implications and applications in the realm of cryptography and encryption. Some fear that a successful and practical quantum computer would devastate the world's financial system by ripping through its computer security encryptions, which are based on factoring large numbers that literally cannot be cracked by traditional computers within the lifespan of the universe. With computers replacing humans on every front, and advanced technology like quantum computers, science will surely take a quantum leap into the future.
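The "half pulse" superposition described earlier can also be sketched classically. In this toy model (our own illustration, not real quantum hardware), a single qubit is a pair of amplitudes for |0> and |1>, and a Hadamard transform, the standard gate that takes a definite state into an equal superposition, plays the role of the half-unit laser pulse.

```python
import math

# Toy model: a qubit as a pair of real amplitudes (alpha, beta) for the
# |0> and |1> states. Measuring yields 0 with probability alpha**2 and
# 1 with probability beta**2.
def hadamard(qubit):
    # The Hadamard gate: maps a definite |0> into an equal superposition
    # (the article's 'half pulse of laser energy').
    alpha, beta = qubit
    s = 1 / math.sqrt(2)
    return (s * (alpha + beta), s * (alpha - beta))

zero = (1.0, 0.0)          # a definite |0>, like a classical bit
plus = hadamard(zero)      # now an equal superposition of |0> and |1>
p0, p1 = plus[0] ** 2, plus[1] ** 2
# p0 and p1 are each ~0.5: both outcomes equally likely until measured
```

Applying `hadamard` a second time returns the qubit to a definite |0>, a small reminder that quantum gates, unlike measurement, are reversible.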

In 1982, the Nobel prize-winning physicist Richard Feynman thought up the idea of a 'quantum computer', a computer that uses the effects of quantum mechanics to its advantage. For some time, the notion of a quantum computer was primarily of theoretical interest, but recent developments have brought the idea to everybody's attention. One such development was the invention of an algorithm to factor large numbers on a quantum computer by Peter Shor (Bell Laboratories). By using this algorithm, a quantum computer would be able to crack codes much more quickly than any ordinary (or classical) computer could. In fact, a quantum computer capable of performing Shor's algorithm would be able to break current cryptography techniques in a matter of seconds. With the motivation provided by this algorithm, the topic of quantum computing has gathered momentum and researchers around the world are racing to be the first to create a practical quantum computer. So, after that quizzical introduction, let's have a small sneak peek into the queer world of qubits and try to understand some of the complexities of the quantum world. The first questions that arise in our minds after hearing the words 'quantum computer' are: What does quantum physics have to do with computing? How can a quantum computer work, and what makes it different from a traditional computer? A quantum computer is a computer design which uses the principles of quantum physics to increase the computational power beyond what is attainable by a traditional computer. Quantum computers are not that different from normal computers outwardly, but they are in the sense that quantum theory is the basis on

Let's now try to understand how a quantum computer would work. A quantum computer, unlike its classical counterpart, would store information as a 1, a 0, or a quantum superposition of the two states. The key to this is the quantum bit, or qubit, which is not limited to representing 0 or 1. Qubits exist in fuzzy states that are both 0 and 1, and can be combined to represent many numbers at once. As a result, a quantum computer would be like a massively parallel computer array whose power grows exponentially with each additional qubit. Such a "quantum bit" allows for far greater flexibility than the binary system.

What is a qubit?

Sristi Agarwal

Instapaper: Install it, and whenever you find something you would like to read but don't have time for at the moment, just click on the Read Later button.

Super-fast quantum computers could soon be a reality


4G is coming - Are we ready?
• Telecom Regulatory Authority of India (TRAI), Highlights of telecom subscription data as on 31st

Ayan Mukherjee
3rd Year, ECE, NIT Durgapur

phrenological reading complete with printout. As shown, the skull is divided into sectors based on the organs lying underneath. The larger or more pronounced a section is, the more prominent the corresponding characteristics are. Although the deductions made using phrenology do not always yield correct results (which accounts for it being a pseudoscience), the fact that such a study could ever be thought of is amazing.

Graphology attempts to study the characteristics of a person by analysing his or her handwriting. It is based on the assertion that the human 'ego' is active, but not always to the same degree, when writing. The human mind is most active when a sincere effort is being made to write something; it hibernates when the writing gains a sort of momentum or flow. The muscular movements made while writing are controlled by the central nervous system, and therefore effects such as emotion and mental state, and bio-mechanical factors such as muscle stiffness and elasticity, are reflected in a person's writing. The main peculiarities of a handwriting sample that come under the scanner are the direction of lines, the space between lines, the slant, the pressure applied on the pen and paper, and the margins left. Below is a sample of Winston Churchill's handwriting. The significantly large spacing between words indicates that he was blessed with originality of concept and new and logical arguments. However, unlike Isaac Newton, he was not a creative genius, because of the uniformity of his spacing and letter characteristics (creative geniuses are usually methodically uneven, which is reflected even in their writing).

January 2011, Press Release no. 13/2011, Mar. 2011
• teliasonera4g.com
• H. Gobjuka, 4G wireless networks: opportunities and challenges, Technical Report no. VZ-TRG1005309, Jul. 2009
• COAI, Pan India - Mobile Number Portability Launch - 20th Jan 2011, Press Release, Jan. 2011
• Effects of cell phone radiofrequency signal exposure on brain glucose metabolism, The Journal of the American Medical Association, 2011

• Peak 65: Survey, Oct '07, ASME
• The Completeness of Godel's Incompleteness Theorem: Cosmic Foot Prints
• Online Karma: Causeworld mobile apps
• Just Google It: Survey - Future of the Internet
• UDRS: PTI, AP
• Wimax: WiMAX Forum, Wiki, BSNL.co.in
• Stream Live: NPK, Adobe Systems Incorporation
• Incredible India: Myths & Facts - Indian Architecture
• Quantum Computing: Spectrum
• Weird In-vain-shuns: totallyabsurd.com

Science has always tried to rationalise, to provide reason for that which was earlier beyond our comprehension. The horizons of science have known no bounds, and it has endeavoured to conquer ever newer territories. The human mind, always so unpredictable and shrouded in mystery, is an area which has always mystified science. Nonetheless, there have been many efforts engineered by the human mind in order to know it better. There is a wide array of studies based on a strange cocktail of reason, belief and assumption - all varied in their nature but united in the quest to masquerade as sciences, which earned them the name pseudoscience. Phrenology and graphology are two interesting areas of study that devised logic for unveiling the secrets of the human mind and the human character but ended up in this category.

Phrenology is the study of the structure of the skull to determine a person's character and mental faculties. This pseudoscience is based on the false assumption that mental faculties are located in "organs" on the surface of the brain and can be detected by visual inspection of the skull. The study stems from the notion that brain organs which were used grew bigger and those which were not used shrank, causing the skull to rise and fall with organ development. These bumps and indentations on the skull were held to reflect specific areas of the brain that determine a person's emotional and intellectual functions. Phrenology gave rise to the invention of the psychograph by Lavery and White, a machine which could do a

references

For Anveshan Quiz, visit: www.tinyurl.com/anveshanQ

With Regards

Team MNTC
Final Year : Shashank, Atri, Arpit, Karn, Sayantan R.P., Bharadwaj, Jayita, Dyuti

Third Year: Amit, Ankit, Atikant, Anoop, Argha, Astha, Ayan, Chhetri, Madhvi, Rishi, Sarawagi, Sima

Second Year: Apoorva, Divyam, Durgesh, Harsh, Portia, Prabisha, Rahul, Ritesh, Shadab, Shatadal, Sristi, Tirtha

Snaptu: A do-it-all app for Nokia phones, Snaptu is a Java application that bundles up a number of web services like Flickr, Facebook, news, Picasa etc. into a single home screen.

www.mathsntech.in
