2015 in review

The WordPress.com stats helper monkeys prepared a 2015 annual report for this blog.

Here’s an excerpt:

A San Francisco cable car holds 60 people. This blog was viewed about 800 times in 2015. If it were a cable car, it would take about 13 trips to carry that many people.

Click here to see the complete report.


Charge Your Devices Wirelessly!

At the recently concluded CES 2015, several products tried to offer new ways of charging wirelessly, and one of them is called WattUp.

Instead of charging your smartphone, tablet, or e-reader by plugging it into a wall socket or placing it on a charging mat, WattUp is wall-mounted and serves as a kind of “router” that can charge devices up to 15 feet away. The makers say it can charge up to 12 devices simultaneously, though the more devices you charge, the less power each one receives. Four devices 15 feet away will each receive about 1 W, and the closer a device is, the more power it gets; with all 12 gadgets inside the field, each receives just 0.25 W.

To manage power distribution, WattUp also has an app to help you out. You can set it up so that your smartphone or tablet only charges once its battery has dropped to a certain percentage, so it will not automatically start charging whenever it comes within range. This kind of wireless charger can be pretty handy at the office if you don’t want employees constantly plugging their devices into your wall sockets. But if you have a lot of gadgets and family members at home, then it can also be handy (or wallsy).

WattUp is not yet available to retail consumers, but Foxconn and Haier have already expressed interest in manufacturing the device or using the technology in other products. Let’s wait and see whether this becomes a new trend in wireless charging.

Future of “Big Data”

If you’ve never heard of big data before, you might be interested to know you’re a part of it. Big data is a term used to describe the unbelievable growth and accessibility of data and information on the Internet. The amount of information we put online every day is astonishing. Think about it: billions of people use the Internet every day, and even if each one contributed just a little bit of information, the total would pile up remarkably quickly.

According to the infographic below, the big data market has almost quadrupled in the last three years, with no signs of slowing down. The good news is that hardware costs are falling, while software and services account for a growing share of big data revenue.

The term ‘big data’ also encompasses all of the complex data sets – whether they’re structured or not – floating around the Internet.

 

The future of the big data market (infographic).

What’s New in Xamarin


Xamarin, the company with more than 750,000 mobile developers delivering mission-critical enterprise and consumer apps, today announced major expansions to their product lineup that radically improve how developers build, test and manage apps.

At the company’s global developer conference, Xamarin Evolve 2014, the largest cross-platform mobile development event in the world, the company introduced Xamarin Insights and the Xamarin Android Player, along with new features for the recently launched testing service, Xamarin Test Cloud, and new features for the Xamarin mobile development platform. These announcements are the realization of the company’s mission to make it fast, easy and fun to build great mobile apps.

Effectively delivering quality apps is no easy feat in today’s highly complex mobile development landscape, with multiple versions of various operating systems, an incredibly divergent variety of hardware sizes and capabilities, and users with extremely high expectations who will quickly abandon slow apps with poor user experiences.

With today’s announcements, Xamarin provides developers with a mobile-first, fully integrated, and seamless experience that simplifies and accelerates every stage of the application development lifecycle.

“Our enterprise customers look to Avanade to help them envision what is possible for their current and future mobile needs,” said Dan O’Hara, Avanade vice president of mobility. “Xamarin delivers unmatched technology that allows Avanade to transform these mobile strategies into successful reality for our customers.”

New Xamarin Platform Capabilities for Building Mobile Apps

  • Xamarin Android Player – Android developers waste countless hours fighting slow emulator performance and long startup times when deploying and testing apps. The Xamarin Android Player gives developers dramatically shorter startup times and the best possible emulator performance through hardware virtualization and hardware-accelerated graphics. The Player makes it easy to test and demo hardware features, such as simulating low-battery conditions and setting GPS coordinates, and Xamarin will soon release the ability to simulate the back- and front-facing cameras. The Xamarin Android Player is available as a preview release.
  • Sketches – The Xamarin Platform now includes an easy, lightweight way for developers to explore iOS and Android APIs in C# and F#. From inside the IDE, developers can create Sketches that show their code executing in real time. New mobile developers using C# gain a powerful, yet simple, way to explore iOS and Android, while experienced Xamarin developers now have a fast way to iterate on features and explore new APIs without the overhead of building and running a project. Sketches are available today for iOS, Android, and Mac as a preview in the Xamarin Studio Beta Channel, and are coming soon to Visual Studio for Android, iOS and Windows.

New Mobile App Testing and Monitoring Capabilities

  • Xamarin Insights – Xamarin Insights is a new app monitoring service that tracks app crashes and exceptions, and helps developers know in real time what is happening with app users. Developers need to respond to users’ issues quickly, but with limited time and a lot of data to interpret, it is difficult to know which crash issues to tackle first. Xamarin Insights uses a unique algorithm to rank issues according to user impact and reach, so developers know which issues to prioritize. They can see exactly which users each crash is impacting, and what sequence of actions preceded the crash. Integrating event information with user data makes it easier to solve problems and communicate proactively with affected users. Xamarin Insights integrates with Jira, HipChat, GitHub, Campfire, Pivotal Tracker and TFS Online so that developers are instantly notified and issues are tracked. Xamarin Insights is available as a public beta.
  • Xamarin Test Cloud Hyper-Parallel Feature – Xamarin Test Cloud enables mobile teams to quickly test apps written in any language on over 1,000 mobile devices. A single test run may take a few hours on a device, but with parallelization Xamarin Test Cloud can break that run up and execute a single test suite across multiple duplicate devices simultaneously, significantly increasing how quickly test results come back. These new features are immediately available to Xamarin Test Cloud customers. Xamarin’s internal benchmarking test suite takes 2.5 hours to run serially; the new hyper-parallelization feature cuts that down to 12 minutes (a rough sketch of that speedup appears below). This kind of optimization greatly reduces time spent waiting for feedback, which is key to a rapid development process.
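To get a feel for that speedup, here is a minimal back-of-the-envelope sketch. Only the 2.5-hour serial figure comes from the announcement; the device counts and the per-run overhead are illustrative assumptions, not Xamarin’s numbers.

```python
# Rough estimate of wall-clock time when a serial test suite is split
# evenly across duplicate devices. Only SERIAL_MINUTES is a quoted figure;
# the device counts and overhead below are assumptions for illustration.

SERIAL_MINUTES = 150.0  # 2.5 hours, run serially on a single device

def parallel_wall_clock(serial_minutes: float, devices: int,
                        overhead_minutes: float = 0.5) -> float:
    """Ideal wall-clock time with an even split plus a small fixed overhead."""
    return serial_minutes / devices + overhead_minutes

for devices in (4, 8, 13):
    minutes = parallel_wall_clock(SERIAL_MINUTES, devices)
    print(f"{devices:>2} duplicate devices: ~{minutes:.0f} min")

# Around 13 duplicate devices brings 150 minutes down to roughly the
# quoted 12 minutes, ignoring scheduling and provisioning overhead.
```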

“As mobility continues to pervade our work and personal lives, developers are under more pressure than ever to build high quality apps quickly,” said Nat Friedman, CEO and cofounder, Xamarin. “Because mobile is so strategic to business growth and competitive advantage, developers are holding their company’s future in their code. Xamarin will be with them at every step of the mobile app lifecycle, making things faster and easier so that they can focus on delivering great apps.”

A Little Bit of History About the Shellshock Bug

The year was 1987, and as Brian Fox drove cross-country to his new home, the tapes in his car held a software program called Bash, a tool he had built for the UNIX operating system and tagged with a license that let anyone use the code and even redistribute it to others. Fox—a high school dropout who spent his time hanging out with MIT computer geeks such as Richard Stallman—was a foot soldier in an ambitious effort to create software that was free, hackable, and unencumbered by onerous copy restrictions. It was called the Free Software Movement, and the idea was to gradually rebuild all of the components of the UNIX operating system into a free product called GNU and share them with the world at large. It was the dawn of open source software.

Brian Fox.

Fox and Stallman didn’t know it at the time, but they were building the tools that would become some of the most important pieces of our global communications infrastructure for decades to come. After Fox drove those tapes to California and went back to work on Bash, other engineers started using the software and even helped build it. And as UNIX gave rise to GNU and Linux—the OS that drives so much of the modern internet—Bash found its way onto tens of thousands of machines. But somewhere along the way, in about 1992, one engineer typed a bug into the code. Last week, more than twenty years later, security researchers finally noticed this flaw in Fox’s ancient program. They called it Shellshock, and they warned it could allow hackers to wreak havoc on the modern internet.

Shellshock is one of the oldest known and unpatched bugs in the history of computing. But its story isn’t that unusual. Earlier this year, researchers discovered another massive internet bug, called Heartbleed, that had also languished in open source software for years. Both bugs are indicative of a problem that could continue to plague the internet unless we revamp the way we write and audit software. Because the net is built on software that gets endlessly used and reused, it’s littered with code that dates back decades, and some of it never gets audited for security bugs.

When Bash was built, no one thought to audit it for internet attacks because that didn’t really make sense. “Worrying about this being one of the most [used] pieces of software on the planet and then having malicious people attack it was just not a possibility,” Fox says. “By the time it became a possibility, it had been in use for 15 years.” Today, it’s used by Google and Facebook and every other big name on the internet, and because the code is open source, any of them can audit it at any time. In fact, anyone on earth can audit it at anytime. But no one thought to. And that needs to change.

How the Web Was Built

In digital terms, Fox’s Bash program was about the same size as, say, a photograph snapped with your iPhone. But back in 1987, he couldn’t email it across the country. The internet was only just getting off the ground. There was no world wide web, and the most efficient way to move that much data across the country was to put it in the trunk of a car.

Bash is a shell utility, a black-boxy way of interfacing with an operating system that predates the graphical user interface. If you’ve used Microsoft’s Windows command prompt, you get the idea. That may seem like an archaic thing, but as the internet took off, fueled by web browsers and the Apache web server, the Bash shell became a simple yet powerful way for engineers to glue web software to the operating system. Want your web server to get information from the computer’s files? Make it pop up a bash shell and run a series of commands. That’s how the web was built—script by script.
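To picture that glue pattern, here is a deliberately simplified sketch in Python: the handler, the header name, and the command are invented for illustration, and this is not code from Bash or from any particular web server. It just shows the hand-off described above, where request data lands in environment variables and a bash shell is popped up to do the work.

```python
import subprocess

def handle_request(user_agent: str) -> str:
    """Toy CGI-style hand-off: request data goes into environment variables,
    then a bash shell runs a small command. All names here are made up."""
    env = {
        "PATH": "/usr/bin:/bin",
        "HTTP_USER_AGENT": user_agent,  # attacker-controlled in a real request
    }
    result = subprocess.run(
        ["bash", "-c", 'echo "Hello, $HTTP_USER_AGENT"'],
        env=env, capture_output=True, text=True, check=True,
    )
    return result.stdout

print(handle_request("ExampleBrowser/1.0"))
```

Shellshock mattered precisely because of hand-offs like this: vulnerable versions of bash would also parse function definitions smuggled into environment variables, so a crafted request header could sneak extra commands into the shell.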

Today, Bash is still an important part of the toolkit that helps power the web. It’s on the Mac, and virtually any company that runs the Linux operating system, the descendant of UNIX, uses it as a quick and easy way to connect computer programs—web server software, for example—with the underlying operating system.

But the lead developer of the program doesn’t work for any of these big names. He doesn’t even work for a tech company. His name is Chet Ramey, and he’s a coder at Case Western Reserve University in Cleveland. He works on Bash in his spare time.

‘Quite a Long Time’

In the late 1980s, Ramey took over from Brian Fox as the lead developer of Bash, and this September 12, he received an email from a security researcher named Stephane Chazelas that identified the Shellshock bug. It was a serious security vulnerability that the world learned about last week. Within hours, hackers had released code that could take over vulnerable machines and turn them into a malicious botnet.

Chet Ramey in an undated family photo.

Ramey doesn’t have access to the project’s source code revision logs dating back to the early ’90s, but he thinks that he probably wrote the buggy code himself, sometime around 1992. That would make it the oldest significant-yet-unpatched bug we’ve heard of here at WIRED. We checked with someone who would know—Purdue University Professor Eugene Spafford—and he couldn’t top it. “I can’t recall any others that were [unpatched] quite as long as this,” he says. “There are undoubtedly a number that have been out there longer, but the combination of age and potential impact would not be as large.”

But it’s a situation that feels eerily familiar to people who followed Heartbleed, which was discovered in a widely used open-source project called OpenSSL. Like the OpenSSL software, Bash has never had a full-blown security audit, and it’s developed by a skeleton crew with virtually no financial support. That, unfortunately, is the story of the internet.

The Lie of ‘Many Eyes’

For Robert Graham, the CEO of consultancy Errata Security, Shellshock gives the lie to a major tenet of open-source software: that open-source code permits “many eyes” to view and then fix bugs more quickly than proprietary software, where the code is kept out of view from most of the world. It’s an idea known as Linus’s Law. “If many eyes had been looking at bash over the past 25 years, these bugs would’ve been found a long time ago,” Graham wrote on his blog last week.

Linus Torvalds—the guy that Linus’s Law is named after and the guy who created the Linux operating system—says that the idea still stands. But the fallacy is the idea that all open-source projects have many eyes. “[T]here’s a lot of code that doesn’t actually get very many eyes at all,” he says. “And a lot of open-source projects don’t actually have all that many developers involved, even when they are fairly core.”

This kind of issue comes up with any software code—whether it’s open source or not. After all, it’s even harder to tell how many bugs like this may lurk in closed-source software such as Oracle’s database. About a decade ago, Microsoft faced serious security problems because parts of its software weren’t properly audited. But after the Blaster worm tore through systems running Microsoft’s Windows operating system in 2003, the company made security audits a priority. Over the course of the next decade, it improved the standards of its code. Microsoft spent millions on security audits and hired white-hat hackers, called pen testers, to test its software. Now, the open source community is starting to do the same thing.

This May, not long after the public first learned about the Heartbleed vulnerability, the Linux Foundation amassed a $6 million war chest to shore up the security on a few widely used open source projects, including OpenSSL, OpenSSH, and the Network Time Protocol. But Bash wasn’t on the list. “This was not predicted,” says Jim Zemlin, the Foundation’s executive director. “But certainly, my guys are reaching out to those folks to see how we can help as we speak.”

That’s all well and good. But the trick is to shore up the internet before the bugs are found. Hopefully, the Linux Foundation—and the Googles and the Facebooks—can do so.

Even with Shellshock, Brian Fox is still proud of the project he once drove across the country. “It’s been 27 years of that software being out there before a bug was found,” he says. “That’s a pretty impressive ratio of usage to bugs found.”

Using the NEW NuGet Package Explorer to Create, Explore and Publish Packages!


In the past few years, NuGet has become one of the easiest and most commonly used tools in a .NET developer’s bag of tricks, and rightfully so. Long gone are the days of searching for a DLL file on some shady site and hoping that it doesn’t brick your application. Now you can find just about every reference you could want to include in your application in just a few clicks (while letting NuGet sort out all of the dependencies).

Most developers have likely interacted with NuGet through Visual Studio; however, this post introduces another way to interact with, explore, and even publish your own NuGet packages: the NuGet Package Explorer.

What is the NuGet Package Explorer?

The NuGet Package Explorer is an open-source product of NuGet developer Luan Nguyen, developed as an extremely user-friendly GUI application for easily creating and exploring NuGet packages. After installing the ClickOnce application, you can simply double-click on a NuGet Package file (.nupkg) to access all of its content, or you can load packages directly from the official NuGet feed.

“This is a side project of mine and it is NOT an official product from Microsoft.” –  NuGet developer Luan Nguyen

How to use it?

First, you’ll need to visit the NuGet Package Explorer page on CodePlex, where the tool is currently hosted, and download it. After a short download, you can launch the ClickOnce application and you’ll be presented with the following screen:

The Package Explorer launch screen.

These are your primary options when it comes to creating or exploring the contents of any available NuGet packages (in addition to simply clicking on any NuGet Package files as mentioned earlier). The easiest approach to get started would probably be to open up a package from the feed, which will present you with a searchable dialog with all of the most popular NuGet packages:

The searchable list of popular packages from the NuGet feed.

After clicking on a package, you can choose the particular version you want to explore:

Choosing a version of the package to explore.

You also have the option of manually opening up any packages that you might have locally installed, but simply grabbing them from the feed is usually going to be the way to go.

Exploring a Package

Once you select a package that you want to explore a bit more, you can just double-click on it to see the details of that package:

An example of exploring the EntityFramework NuGet Package.

While exploring a package, you’ll see many of the summaries, details and descriptions that you might be accustomed to seeing when managing your NuGet packages through Visual Studio along with a bit more.

You’ll see an area called Package Contents, which displays all of the files contained within the package. It can help give you an idea of the different framework versions that the package targets, any transformations that will be applied to configuration files, and any additional utilities or executables that might run when the package is installed:

The Package Contents area.

This is where you can really explore a package by digging into its contents a bit more. By simply double-clicking on a file within the contents, you will be shown a preview (if available) of that file:

Previewing a file from the package contents.

This can be done for just about any kind of file that normally supports previews, and it can be extremely useful if you want to see exactly what is going on inside some of these packages.
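If you ever want to peek inside a package outside of the Explorer, it helps to know that a .nupkg file is just a zip archive. Here is a minimal sketch of listing one from Python; the file name is only an example of a package you might have downloaded.

```python
import zipfile

# A .nupkg is a plain zip archive, so its contents can be listed directly.
# The file name below is just an example of a locally downloaded package.
with zipfile.ZipFile("EntityFramework.6.1.1.nupkg") as pkg:
    for name in pkg.namelist():
        print(name)  # e.g. the .nuspec metadata, lib/net45/*.dll, content files
```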

Creating a NuGet Package

The Package Explorer isn’t limited to just exploring existing packages. It provides a very easy-to-use interface to allow you to create your own packages and upload them to NuGet to share with others.

With a simple click of the File > New menu option, or by using the CTRL+N shortcut:

Creating a new package from the File menu.

You’ll be transported to a new package screen to begin building your own NuGet Package. You can click the Edit Metadata icon to begin editing information about your package:

Editing the package metadata.

You can find a complete reference of all of the available fields listed above and exactly what they are used for by visiting the Nuspec Reference page here.

After defining all of your metadata, supported assemblies, and dependencies, you will then be ready to add your files and content to your package. You can do this by just clicking a file within the File Explorer and dragging it into the Package Contents area on the left:

Dragging a file into the Package Contents area.

All of the DLL files that are added will be placed into the lib directory, and all other basic content will be placed into an aptly named content directory, as seen below:

DLL files in the lib directory and other files in the content directory.

Additionally, if you need to add other folders (or any other “special” types of folders), you can do so by using the Content menu:

The Content menu for adding special folders.

You can continue to add all of the additional files and folders for your package in this same manner until your package is complete.

Publishing to NuGet

Publishing to NuGet is fairly simple after you have built your package.

The first thing that you’ll need to do is register and sign in to the NuGet Gallery, which takes a matter of seconds. This will provide you with an API key that you will need in order to publish packages to NuGet:

The API key on your NuGet Gallery account page.

After you have your API Key, you’ll just need to use the Publish option (File > Publish) within the NuGet Package Explorer:

The Publish dialog.

Just enter your API key in the Publish dialog, hit Publish, and you are done!

A Chrome Shortcut That Makes Searching Your Favorite Sites Way Faster!

It works on Vimeo and scores of other sites with their own search engines.

Google is the default search engine in Chrome’s “Omnibox,” the text field at the top of your browser where the URL appears and where you type search terms. But you can add a limitless number of other, site-specific search engines to it as well. This comes in handy if you’re looking for a particular piece of content, like a news story or video, or you want to find something on a particular site, like Craigslist or eBay, as quickly as possible.

First, go to a website you frequent. Find its search bar, and right-click inside the text field. In most cases, you should see an option to “Add As Search Engine”. A pop-up will appear with the option to give it a name, and most importantly, a keyword. I recommend something as minimal as possible for maximum typing efficiency: A for Amazon, K for Kickstarter, N for Netflix.

Now, in Chrome’s search bar, type in that “keyword” and hit TAB. A block with the site’s name will appear, letting you know Chrome has changed from searching Google to searching, say, Wikipedia. Once you hit ENTER, you’ll be rerouted straight to a list of results from the site’s search engine.
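Under the hood, each custom engine amounts to a keyword mapped to a URL template with a %s placeholder where the query goes. The sketch below is only an illustration of that mapping; the keywords and URL templates are assumptions for the example, not data pulled from Chrome.

```python
from urllib.parse import quote_plus

# Illustrative keyword -> URL-template mapping, similar in spirit to what
# Chrome stores when you add a custom search engine. %s marks the query slot.
SEARCH_ENGINES = {
    "a": "https://www.amazon.com/s?k=%s",
    "w": "https://en.wikipedia.org/w/index.php?search=%s",
    "yt": "https://www.youtube.com/results?search_query=%s",
}

def build_search_url(keyword: str, query: str) -> str:
    """Substitute the query into the template, as the omnibox does after TAB."""
    return SEARCH_ENGINES[keyword].replace("%s", quote_plus(query))

print(build_search_url("w", "san francisco cable car"))
# https://en.wikipedia.org/w/index.php?search=san+francisco+cable+car
```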

Here are over 20 sites where this works, with recommended shortcut keys:

(A) Amazon
(B) Bing
(BBC) BBC
(C) Craigslist
(D) DuckDuckGo
(E) eBay
(ESPN) ESPN
(Git) GitHub
(H) Hulu
(HP) Huffington Post
(K) Kickstarter
(LINK) LinkedIn
(M) GMail
(N) Netflix
(P) Pinterest
(PB) Pirate Bay
(RED) Reddit
(T) Twitter
(V) Vimeo
(W) Wikipedia
(WRD) WIRED
(Y) Yelp
(YT) YouTube

Source 🙂

Don’t Use jquery-latest.js

The following is a transcript from the blog at jQuery.com:

Earlier this week the jQuery CDN had an issue that made the jquery-latest.js and jquery-latest.min.js files unavailable for a few hours in some geographical areas. (This wasn’t a problem with the CDN itself, but with the repository that provides files for the CDN.) While we always hope to have 100% uptime, this particular outage emphasized the number of production sites following the antipattern of using this file. So let’s be clear: Don’t use jquery-latest.js on a production site.

We know that jquery-latest.js is abused because of the CDN statistics showing it’s the most popular file. That wouldn’t be the case if it was only being used by developers to make a local copy. The jquery-latest.js and jquery-latest.min.js files were meant to provide a simple way to download the latest released version of jQuery core. Instead, some developers include this version directly in their production sites, exposing users to the risk of a broken site each time a new version of jQuery is released. The team tries to minimize those risks, of course, but the jQuery ecosystem is so large that we can’t possibly check it all before making a new release.

To mitigate the risk of “breaking the web”, the jQuery team decided back in 2013 that jquery-latest.js could not be upgraded to the 2.0 branch even though that is technically the latest version. There would just be too many sites that would mysteriously stop working with older versions of Internet Explorer, and many of those sites may not be maintained today.

As jQuery adoption has continued to grow, even that safeguard seems insufficient to protect against careless use of http://code.jquery.com/jquery-latest.js. So we have decided to stop updating this file, as well as the minified copy, keeping both files at version 1.11.1 forever. The latest released version is always available through either the jQuery core download page or the CDN home page. Developers can download the latest version from one of those pages or reference it in a script tag directly from the jQuery CDN by version number.

The Google CDN team has joined us in this effort to prevent inadvertent web breakage and no longer updates the file at http://ajax.googleapis.com/ajax/libs/jquery/1/jquery.js. That file will stay locked at version 1.11.1 as well. However, note that this file currently has a very short cache time, which means you’re losing the performance benefit of a long cache time that the CDN provides when you request a full version like 1.11.1 instead.

So please spread the word! If you see a site directly using the jQuery CDN’s jquery-latest.js or the Google CDN equivalent in their script tags, let them know they should change to a specific version. If you need the latest version, get it from the download page or our CDN page. For both the jQuery and Google CDNs, always provide a full version number when referencing files in a <script> tag. Thanks!

The Technology Behind the World Cup’s Advanced Analytics


During Sunday’s 2-2 World Cup draw, the American forward Clint Dempsey, who scored one of the two American goals against Portugal, ran a total of 9,545 meters, with a top speed of 28.33 km/h over the course of 26 sprints. That wasn’t the top speed of the match, however; that title belonged to US defender Fabian Johnson, who reached an impressive 32.98 km/h.

Over the course of the match, detailed statistics, including dozens of other data points—the Americans ran a total of 110,299 meters, compared to the Portuguese side’s 106,520 meters, for example—are collected for every player on both teams, displayed for television audiences worldwide, and posted to the FIFA website.

The system FIFA employs to grab the data, called Matrics, was built and deployed by an Italian firm called Deltatre at each stadium in the World Cup, and involves the use of several technologies and manual inputs from a large crew to deliver the real-time stats.

“The real value is that it’s live,” Tomas Robertsson, Deltatre’s North American commercial director, told me over the phone. “The extensive data set in real time provides on site heat maps and attacking zones, as well as distance run, passes completed, and many other statistics.”

Stats from Sunday’s US-Portugal match.

The 2014 introduction of goal-line technology was a great leap forward for the international tournament; the technology is supposed to mitigate damn clear injustices such as Frank Lampard’s goal that the refs didn’t see. But it’s just one piece of the high-tech gear that’s deployed at every arena in the World Cup.

Robertsson explained that the system works like this: three HD cameras in various locations at each arena use image recognition to identify the 22 players, three referees, and the soccer ball. The system tracks the XYZ coordinates of each of these objects and relays the information to a multi-screen digital workstation, where 74 people pore over the data on-site, aided by another 20 back in Italy.
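As a simplified, hypothetical sketch of what can be derived from those coordinates, the snippet below computes distance run and top speed from successive position samples. The sampling rate and positions are invented for illustration; Deltatre’s actual pipeline is far more elaborate.

```python
import math

# Hypothetical position samples for one player: (x, y) in meters on the pitch,
# captured at a fixed rate. Real tracking data is far denser and noisier.
SAMPLE_RATE_HZ = 25
positions = [(0.0, 0.0), (0.2, 0.1), (0.5, 0.2), (0.9, 0.3), (1.4, 0.5)]

def distance_and_top_speed(points, rate_hz):
    """Total distance (m) and top speed (km/h) from consecutive samples."""
    dt = 1.0 / rate_hz
    total = 0.0
    top_speed_ms = 0.0
    for (x1, y1), (x2, y2) in zip(points, points[1:]):
        step = math.hypot(x2 - x1, y2 - y1)
        total += step
        top_speed_ms = max(top_speed_ms, step / dt)
    return total, top_speed_ms * 3.6  # m/s -> km/h

dist, top = distance_and_top_speed(positions, SAMPLE_RATE_HZ)
print(f"distance: {dist:.2f} m, top speed: {top:.2f} km/h")
```

Heat maps and attacking zones could be built the same way, by binning those samples by pitch location over time.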

The reason Deltatre uses cameras to optically track everything is that soccer players have resisted adding tracking technology into their equipment—such as their shoes—despite it being possible for some time.

A Deltatre Matrics operations center.

On top of the cameras, the company has written algorithms that calculate passing stats, ball possession, and other statistics, totaling 350 in all. When the tracking information is relayed to a terminal, a human operator watching a slightly delayed version of the match validates each action before it’s sent live to the web or TV.

The image recognition technology isn’t entirely automatic. At the beginning of each match, operators have to tell the machine which team is wearing which color—and the color of the refs’ shirts—as well as manually input each player on the pitch. While the match is being played, the tracking system knows which team has touched the ball because a person with a video game controller hits a button for the corresponding team.

Robertsson said that, unlike many other sports, soccer isn’t a stats-heavy game, but the information gleaned from the system can still be pretty useful in evaluating player performance. For example, it’s much easier to spot player fatigue simply by looking at the numbers.

Off the field, most of the teams at the World Cup use some kind of system to analyze each match to a similar level of detail, but after the fact. The collected data is analyzed by coaches and trainers to help ensure that teams are working at their peak potential.