Tuesday, November 24, 2009

Explore Images with Google Image Swirl



Earlier this week, we announced the Labs launch of Google Image Swirl, an experimental search tool that organizes image-search results. We would like to take this opportunity to explain some of the research underlying this feature and why it is an important area of focus for computer vision research at Google.

As the Web becomes more "visual," it is important for Google to go beyond traditional text and hyperlink analysis to unlock the information stored in the image pixels. If our search algorithms can understand the content of images and organize search results accordingly, we can provide users with a more engaging and useful image-search experience.

Google Image Swirl represents a concrete step toward that goal. It looks at the pixel values of the top search results and organizes and presents them in visually distinctive groups. For an ambiguous query such as "jaguar," for example, Image Swirl separates the top search results into categories such as jaguar the animal and Jaguar the brand of car. The top-level groups are further divided into subgroups, allowing users to explore a broad set of visual concepts associated with the query, such as the front view of a Jaguar car, or the Eiffel Tower at night versus from a distance. This is a distinct departure from the way images are ranked by Google Similar Images, which excels at finding images very visually similar to the query image.



No matter how much work goes into engineering image and text features to represent the content of images, there will always be errors and inconsistencies. Sometimes two images share many visual or text features but have little real-world connection. In other cases, objects that look similar to the human eye may appear drastically different to computer vision algorithms. Most difficult of all, the system has to work at Web scale: it must cover a large fraction of query traffic and handle ambiguities and inconsistencies in the quality of information extracted from Web images.

In Google Image Swirl, we address this set of challenges by organizing all available information about an image set into a pairwise similarity graph and applying novel graph-analysis algorithms to discover higher-order similarity and category information from this graph. Given the high dimensionality of image features and the noise in the data, it can be difficult to train a monolithic categorization engine that generalizes across all queries. In contrast, image similarities need only be defined for sufficiently similar objects and can be trained with limited sets of data. Also, invariance to certain transformations, or to typical intra-class variation, can be built into the perceptual similarity function. Different features or similarity functions may be selected, or learned, for different types of queries or image content. Given a robust set of similarity functions, one can generate a graph (nodes are images, edges are similarity values) and apply graph-analysis algorithms to infer similarities and categorical relationships that are not immediately obvious. In this work, we combined multiple sources of similarity, such as those used in Google Similar Images, landmark recognition, Picasa's face recognition, anchor-text similarity, and category-instance relationships between keywords similar to those in WordNet. It is a continuation of our prior effort [paper] to rank images based on visual similarity.
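To make this general recipe more concrete, here is a minimal Python sketch of the two steps described above: blending several pairwise similarity signals into one weighted graph, then extracting groups from that graph. The similarity functions, weights, threshold, and the connected-components grouping are all illustrative assumptions on our part; the production system combines far richer signals and uses more sophisticated graph-analysis algorithms.

import itertools
from collections import defaultdict

def build_similarity_graph(images, similarity_fns, weights, threshold=0.5):
    """Blend several pairwise similarity signals (visual, text, etc.)
    into one graph: nodes are images, edge weights are combined scores."""
    graph = defaultdict(dict)
    for a, b in itertools.combinations(images, 2):
        score = sum(w * fn(a, b) for fn, w in zip(similarity_fns, weights))
        if score >= threshold:  # keep only sufficiently similar pairs
            graph[a][b] = graph[b][a] = score
    return graph

def group_images(graph, images):
    """Toy stand-in for the graph-analysis step: group images by
    connected components of the thresholded similarity graph."""
    seen, groups = set(), []
    for image in images:
        if image in seen:
            continue
        stack, group = [image], []
        while stack:
            node = stack.pop()
            if node not in seen:
                seen.add(node)
                group.append(node)
                stack.extend(graph.get(node, {}))
        groups.append(group)
    return groups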

As with any practical application of computer vision techniques, there are a number of ad hoc details which are critical to the success of the system but are scientifically less interesting. One important direction of our future work will be to generalize some of the heuristics present in the system to make them more robust, while at the same time making the algorithm easier to analyze and evaluate against existing state-of-the-art methods. We hope that this work will lead to further research in the area of content-based image organization and look forward to your feedback.

UPDATE: Due to the shutdown of Google Labs, this service is no longer active.

Monday, November 23, 2009

An Easy Way to Enable the Print Spooler

written by Basyarah
In every activity, people hope for ease in carrying it out. One such activity is creating documents. Imagine the typewriter era, when a single mistyped letter meant replacing the whole sheet of paper; after computers arrived, everything became much easier.
When it comes to documents, the printer has become an essential tool, and there are now many kinds of printers, such as laser jet, bubble jet, dot matrix, and others. But behind all that convenience there are always difficulties to be found, for example a document that will not print. Many possible problems have to be analyzed first; you cannot instantly guess what went wrong.
In this article I discuss one reason why a printer driver cannot be installed. The fix is actually easy, but until you find the solution it feels much harder. If you try to install a printer driver and it fails with the error message "Operation could not be completed. The print spooler service is not running", then there is a problem with the Print Spooler service.
What is the print spooler?
The Print Spooler is a Windows service that loads files or documents into memory for printing. If this service is not running, printing cannot proceed, and neither can installing a printer driver. You may now be asking, "How do I enable the print spooler?" It is actually very easy:

  1. Open Control Panel >> Administrative Tools >> Services
  2. Find Print Spooler and double-click it
  3. Set the startup type to "Automatic"
  4. Click the "Start" button
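If you prefer the command line, the same two steps can be scripted. The following is a minimal sketch that drives the built-in Windows sc and net tools from Python's standard subprocess module; run it from a prompt with Administrator rights.

import subprocess

# Set the Print Spooler's startup type to Automatic
# (the space after "start=" is required by sc's syntax).
subprocess.run(["sc", "config", "spooler", "start=", "auto"], check=True)

# Start the service immediately.
subprocess.run(["net", "start", "spooler"], check=True)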
So what does that "startup type" mean?
For beginners there are indeed some confusing computer terms, but this one is worth knowing. You can read my earlier article about startup types.

Saturday, November 14, 2009

The 50th Symposium on Foundations of Computer Science (FOCS)



The 50th Annual Symposium on Foundations of Computer Science (FOCS) was held a couple of weeks ago in Atlanta. This conference (along with STOC and SODA) is one of the major venues for recent advances in algorithm design and computational complexity. Computation is now a major ingredient of almost every field of science, without which many recent achievements, such as decoding the human genome, would not have happened. Marking the 50th anniversary of FOCS, this event was a landmark in the history of the foundations of computer science. Below, we give a quick report of some highlights from the event and our research contribution:
  • In a special one-day workshop before the conference, four pioneering researchers of theoretical computer science talked about historical, contemporary, and future research directions. Richard Karp gave an interesting survey on "Great Algorithms," where he discussed algorithms such as the simplex method for linear programming and fast matrix multiplication; he gave examples of algorithms with high impact on our daily lives, as well as algorithms that changed our way of thinking about computation. As an example of an algorithm with great impact on our lives, he cited the PageRank algorithm designed by Larry Page and Sergey Brin at Google. Mihalis Yannakakis discussed the recent impact of studying game theory and equilibria from a computational perspective and the relationships between the complexity classes PLS, FIXP, and PPAD; in particular, he discussed the completeness of computing pure Nash equilibria for PLS and of computing mixed Nash equilibria for FIXP and PPAD. Noga Alon gave a technical talk about efficient routing on expander graphs and presented a clever combinatorial algorithm to route demand between multiple pairs of nodes in an online fashion. Finally, Manuel Blum gave an entertaining and mind-stimulating talk about the potential contribution of computer science to the study of human consciousness, educating the community on the notion of "Global Workspace Theory."
  • The conference program included papers in areas related to algorithm and data structure design, approximation and optimization, computational complexity, learning theory, cryptography, quantum computing, and computational economics. The best student paper awards went to Alexander Sherstov and Jonah Sherman for their papers "The intersection of two halfspaces has high threshold degree" and "Breaking the multicommodity flow barrier for O(sqrt(log n))-approximations to sparsest cut." The program included many interesting results, such as a polynomial-time smoothed analysis of the k-means clustering algorithm (by David Arthur, Bodo Manthey, and Heiko Roeglin), and a stronger version of Azuma's concentration inequality used to show optimal bin-packing bounds (by Ravi Kannan). The former paper studies a variant of the well-known k-means algorithm that works well in practice but whose worst-case running time can be exponential; by analyzing this algorithm in the smoothed analysis framework, the paper gives a new explanation for the success of the k-means algorithm in practice.
  • We presented our recent result on online stochastic matching, in which we improve the approximation factor for computing a maximum-cardinality matching in an online stochastic setting. The original motivation for this work is online ad allocation, which was discussed in a previous blog post. In this algorithm, using our prior on the input (i.e., our historical stochastic information), we compute two disjoint solutions to an instance that we expect to happen; then, online, we try one solution first, and if it fails, we try the other solution (a toy sketch of this idea appears after this list). The algorithm is inspired by the "power of two choices," which has proved useful in online load balancing and congestion control. Using this method, we improve the worst-case guarantee of the online algorithm past the notorious barrier of 1-1/e. We hope that this idea and our technique for online stochastic optimization will find other applications in related stochastic resource allocation problems.
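For the curious, here is a heavily simplified Python sketch of the two-choice idea. The toy model (each advertiser takes a single slot), the greedy construction of the two disjoint matchings, and all names are our illustrative assumptions; the actual algorithm derives the two solutions from a solution of the expected instance rather than greedily.

def greedy_matching(edges, forbidden=frozenset()):
    """Greedy maximal matching on (query_type, advertiser) edges,
    skipping edges already claimed by another matching."""
    match, used_ads = {}, set()
    for q, a in edges:
        if q not in match and a not in used_ads and (q, a) not in forbidden:
            match[q] = a
            used_ads.add(a)
    return match

def two_choice_serve(edges, arrivals):
    """Offline: build two edge-disjoint matchings on the expected instance.
    Online: each arriving query tries its first choice, then its second."""
    first = greedy_matching(edges)
    second = greedy_matching(edges, forbidden=set(first.items()))
    free_ads = {a for _, a in edges}  # toy model: each advertiser buys one slot
    served = 0
    for q in arrivals:  # queries drawn i.i.d. from the known distribution
        for ad in (first.get(q), second.get(q)):
            if ad is not None and ad in free_ads:
                free_ads.remove(ad)
                served += 1
                break
    return served

# Example: three arrivals of two query types competing for three advertisers.
edges = [("shoes", "ad1"), ("shoes", "ad2"), ("books", "ad1"), ("books", "ad3")]
print(two_choice_serve(edges, ["shoes", "books", "shoes"]))  # -> 3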
The FOCS conference (along with STOC and SODA) has been the birthplace for many popular data structures and efficient algorithms, with far-reaching applications. Many researchers and engineers at Google are trained in these research communities, and apply these techniques whenever possible. Google researchers will continue to contribute and learn from these conferences.

Friday, November 13, 2009

A 2x Faster Web



Cross-posted with the Chromium Blog.

Today we'd like to share with the web community information about SPDY, pronounced "SPeeDY", an early-stage research project that is part of our effort to make the web faster. SPDY is at its core an application-layer protocol for transporting content over the web. It is designed specifically for minimizing latency through features such as multiplexed streams, request prioritization and HTTP header compression.
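To illustrate what those three features mean at the byte level, here is a toy Python sketch of a SPDY-flavoured frame. The format shown is our simplification for illustration, not the actual SPDY wire format (though SPDY does compress headers with zlib).

import struct
import zlib

def encode_frame(stream_id, priority, headers):
    """A toy frame: 4-byte stream id and 1-byte priority (so many requests
    can share one connection), then zlib-compressed request headers."""
    raw = "\x00".join(f"{k}: {v}" for k, v in headers.items()).encode()
    payload = zlib.compress(raw)
    # "!IBI" = big-endian: stream id, priority, payload length
    return struct.pack("!IBI", stream_id, priority, len(payload)) + payload

# Two requests multiplexed over a single connection; sorting by the
# priority byte sends the page HTML before the less urgent image.
frames = [
    encode_frame(1, 0, {"method": "GET", "url": "/index.html"}),  # high priority
    encode_frame(3, 2, {"method": "GET", "url": "/logo.png"}),    # lower priority
]
frames.sort(key=lambda frame: frame[4])  # byte 4 holds the priority field
connection_bytes = b"".join(frames)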

We started working on SPDY while exploring ways to optimize the way browsers and servers communicate. Today, web clients and servers speak HTTP. HTTP is an elegantly simple protocol that emerged as a web standard in 1996 after a series of experiments. HTTP has served the web incredibly well. We want to continue building on the web's tradition of experimentation and optimization, to further support the evolution of websites and browsers. So over the last few months, a few of us here at Google have been experimenting with new ways for web browsers and servers to speak to each other, resulting in a prototype web server and Google Chrome client with SPDY support.

So far we have only tested SPDY in lab conditions. The initial results are very encouraging: when we download the top 25 websites over simulated home network connections, we see a significant improvement in performance, with pages loading up to 55% faster. There is still a lot of work we need to do to evaluate the performance of SPDY in real-world conditions. However, we believe that we have reached the stage where our small team could benefit from the active participation, feedback and assistance of the web community.

For those of you who would like to learn more and hopefully contribute to our experiment, we invite you to review our early stage documentation, look at our current code and provide feedback through the Chromium Google Group.

Tuesday, November 10, 2009

Earn Money By Reading SMS

SMS marketing is growing very rapidly in India, and anybody can earn money just by receiving advertisements through SMS. Nowadays lots of companies are in this field, but only a few of them are authentic and actually pay.

mGinger: This is one of the best and oldest Indian "paid to read SMS" websites. The site is still operational and growing at a very fast pace. mGinger has paid out Rs 1.32 crore to its users so far.


  • Earn money for reading SMS on your mobile.
  • Get SMS ads of only those products that you want to buy.
  • Get ads at your convenience.
  • Get as many ads as you want.
  • Save money through discount coupons and offers.

  • Earn much more money by inviting family and friends
            Get 20 paisa for every ad you receive
            Get 10 paisa for every ad your friends receive
            Get 5 paisa for every ad your friend's friends receive
  • Refer your friends and get Rs 2 on each valid referral:
        Invite as many friends as you can to your mGinger network and get Rs 2 added to your mGinger earnings on each valid referral. Just to remind you again, you will also keep earning money with each incoming SMS that your friends receive.

Click here to join mGinger, or visit www.mGinger.com.



YouMint: YouMint is just like mGinger. It is also one of the good sites for earning some extra money just by reading SMS. Over 30,00,000 people have joined YouMint, and that number is increasing day by day. YouMint pays you to invite friends and to receive SMS promos! It might just pay your mobile bill.



 

Tuesday, November 3, 2009

Google Search by Voice Learns Mandarin Chinese



Google Search by Voice was released more than a year ago as a feature of Google Mobile App, our downloadable application for smartphones. Its performance has been improving consistently, and it now understands not only US English but also UK, Australian, and Indian English accents. However, this is still far from Google's goal of finding information and making it easily accessible in any language.

So, almost one year ago, a team of researchers and engineers at Google's offices in Bangalore, Beijing, Mountain View, and New York decided we had to fix this problem. Our next question was: which language should we address first beyond English? We could have chosen many languages. The decision wasn't easy, but once we looked carefully at demographics and internet populations, the choice was clear: we decided to work on Mandarin.

Mandarin is a fascinating language. Over this year we have learned about the differences between traditional and simplified Chinese, tonal characteristics of Chinese, pinyin representations of Chinese characters, sandhi rules, the different accents and languages in China, Unicode representations of Chinese character sets...the list goes on and on. It has been a fascinating journey. The conclusion of all this work is today's launch of Mandarin Voice Search as part of Google Mobile App for Nokia S60 phones. Google Mobile App places a Google search widget on your Nokia phone's home screen, allowing you to quickly search by voice or by typing.



This is a first version of Mandarin search by voice, and it is rough around the edges. It might not work very well if you have a strong southern Chinese accent, for example, but we will continue working to improve it. The more you use it, the more it will improve, so please use it and send us your comments. And stay tuned for more languages. We know a lot of people speak neither English nor Mandarin!

To try Mandarin search by voice, download the new version of Google Mobile App on your Nokia S60 phone by visiting m.google.com from your phone's browser.