2015 in review

The WordPress.com stats helper monkeys prepared a 2015 annual report for this blog.

Here’s an excerpt:

A San Francisco cable car holds 60 people. This blog was viewed about 800 times in 2015. If it were a cable car, it would take about 13 trips to carry that many people.

Click here to see the complete report.

Charge Your Devices Wirelessly!

At the recently concluded CES 2015, several products showed off new ways to charge devices wirelessly, and one of them is called WattUp.

Instead of charging your smartphone, tablet, or e-reader by plugging it into a wall socket or placing it on a charging mat, WattUp is wall-mounted and serves as a kind of “router” that can charge devices up to 15 feet away. The makers say it can charge up to 12 devices simultaneously, but of course the more devices you charge, the less power each one gets. Four devices at the 15-foot edge of the field will each receive 1 W of power, and the closer you move, the more you get. Put all 12 gadgets within the field, though, and they will receive just 0.25 W each.
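
To put those numbers side by side, here is a small back-of-the-envelope sketch in C#. It uses only the two configurations quoted above; how WattUp divides power in any other situation has not been published, so treat this as an illustration rather than the device’s actual charging model.

    using System;

    class WattUpNumbers
    {
        static void Main()
        {
            // The only two data points quoted above; everything in between is unspecified.
            var scenarios = new (int Devices, double WattsEach)[]
            {
                (4, 1.0),    // four devices at the 15-foot edge of the field
                (12, 0.25),  // all twelve supported devices inside the field
            };

            foreach (var s in scenarios)
            {
                double totalWatts = s.Devices * s.WattsEach;
                Console.WriteLine($"{s.Devices} devices x {s.WattsEach} W each = {totalWatts} W delivered in total");
            }
            // Prints 4 W total for four devices and 3 W total for twelve: adding devices
            // lowers not just the per-device power but the overall throughput as well.
        }
    }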

To manage power distribution, WattUp also has an app to help you out. You can set it up so that your smartphone or tablet only starts charging once its battery drops below a certain percentage, so it doesn’t automatically top up every time it comes within range of the device. This kind of wireless charger can be pretty handy at the office if you don’t want employees constantly plugging their devices into your wall sockets. But if you have a lot of gadgets and family members at home, then it can also be handy (or wallsy).
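
The charging rule itself is simple enough to sketch. The snippet below is purely hypothetical; the class and property names are invented for illustration and are not WattUp’s actual app API, but they capture the “only charge below a set percentage” behavior described above.

    class ChargingPolicy
    {
        // User-configurable threshold, e.g. "only top up phones that fall under 40%".
        public int StartChargingBelowPercent { get; set; } = 40;

        // A device in range only draws power once its battery dips below the threshold,
        // so simply walking past the transmitter doesn't trigger a charge.
        public bool ShouldCharge(int batteryPercent, bool inRange)
        {
            return inRange && batteryPercent < StartChargingBelowPercent;
        }
    }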

WattUp is not yet available to retail consumers, but Foxconn and Haier have already expressed interest in manufacturing the device, or perhaps using the technology in other devices. Let’s wait and see whether this becomes a new trend in wireless charging.

Future of “Big Data”

If you’ve never heard of big data before, you might be interested to know you’re a part of it. Big data is a term used to describe the unbelievable growth and accessibility of data and information on the Internet. The amount of information we put online every day is astonishing. Think about it: there are billions of people using the Internet every day, and if each one puts out just a little bit of information, the total piles up at a staggering rate.

According to the infographic below, the big data market has almost quadrupled in the last three years, with no signs of slowing down. The good news is that hardware costs are falling, while software and services are accounting for a growing share of big data revenue.

The term ‘big data’ also encompasses all of the complex data sets – whether they’re structured or not – floating around the Internet.

 

[Infographic: the future of the big data market]

What’s New in Xamarin


Xamarin, the company with more than 750,000 mobile developers delivering mission-critical enterprise and consumer apps, today announced major expansions to their product lineup that radically improve how developers build, test and manage apps.

At the company’s global developer conference, Xamarin Evolve 2014, the largest cross-platform mobile development event in the world, the company introduced Xamarin Insights and the Xamarin Android Player, along with new features for the recently launched testing service, Xamarin Test Cloud, and new features for the Xamarin mobile development platform. These announcements are the realization of the company’s mission to make it fast, easy and fun to build great mobile apps.

Effectively delivering quality apps is no easy feat in today’s highly complex mobile development landscape, with multiple versions of various operating systems, an incredibly diverse range of hardware sizes and capabilities, and users with extremely high expectations who will quickly abandon slow apps with poor user experiences.

With today’s announcements, Xamarin provides developers a mobile-first, fully integrated and seamless experience which simplifies and accelerates every stage of the application development lifecycle.

“Our enterprise customers look to Avanade to help them envision what is possible for their current and future mobile needs,” said Dan O’Hara, Avanade vice president of mobility. “Xamarin delivers unmatched technology that allows Avanade to transform these mobile strategies into successful reality for our customers.”

New Xamarin Platform Capabilities for Building Mobile Apps

  • Xamarin Android Player – Android developers waste countless hours fighting slow emulator performance and long startup times when deploying and testing apps. Xamarin Android Player provides developers dramatically shorter startup times and the best possible emulator performance through hardware virtualization and hardware-accelerated graphics. The Player makes it easy to test and demo hardware features, such as the ability to simulate low-battery conditions and set GPS coordinates, and Xamarin will soon add the ability to simulate the back and front-facing cameras. The Xamarin Android Player is available as a preview release.
  • Sketches – The Xamarin Platform now includes an easy, lightweight way for developers to explore iOS and Android APIs in C# and F#. From inside the IDE, developers can create Sketches that show their code executing in real time. New mobile developers using C# gain a powerful yet simple way to explore iOS and Android, while experienced Xamarin developers now have a fast way to iterate on features and explore new APIs without the overhead of building and running a project (a rough illustration follows below). Sketches are available today for iOS, Android, and Mac as a preview in the Xamarin Studio Beta Channel, and are coming soon to Visual Studio for Android, iOS and Windows.
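
For a rough feel of what a Sketch session looks like, the snippet below is the sort of throwaway code you might evaluate line by line and watch the results appear immediately. It sticks to the base class library so it runs anywhere; in an actual Sketch you could just as easily poke at iOS or Android APIs, which are not shown here.

    // Typical Sketch-style exploration: no project, no Main(), just statements
    // whose results you want to see right away.
    using System;
    using System.Linq;

    var widths = new[] { 320, 375, 414, 768 };
    Console.WriteLine(string.Join(", ", widths.Select(w => $"{w}pt")));   // 320pt, 375pt, 414pt, 768pt
    Console.WriteLine(DateTime.Now.ToString("dddd, MMMM d"));             // e.g. Friday, October 10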

New Mobile App Testing and Monitoring Capabilities

  • Xamarin Insights – Xamarin Insights is a new app monitoring service that tracks app crashes and exceptions and helps developers know in real time what is happening with app users. Developers need to respond to users’ issues quickly, but with limited time and a lot of data to interpret, it is difficult to know which crash issues to tackle first. Xamarin Insights uses a unique algorithm to rank issues according to user impact and reach, so developers know which issues to prioritize. They can see exactly which users each crash is impacting, and what sequence of actions preceded the crash. Integrating event information with user data makes it easier to solve problems and communicate proactively with affected users. Xamarin Insights integrates with Jira, HipChat, GitHub, Campfire, Pivotal Tracker and TFS Online so that developers are instantly notified, and issues are tracked. Xamarin Insights is available as a public beta.
  • Xamarin Test Cloud Hyper-Parallel Feature – Xamarin Test Cloud enables mobile teams to quickly test apps written in any language on over 1,000 mobile devices. A single test run may take a few hours on a device, but with parallelization Xamarin Test Cloud can break that run up and execute a single test suite across multiple duplicate devices simultaneously, significantly speeding up test results. These new features are immediately available to Xamarin Test Cloud customers. Xamarin’s internal benchmarking test suite takes 2.5 hours to run serially; the new hyper-parallelization feature cuts that down to 12 minutes (see the back-of-the-envelope sketch after this list). This kind of optimization greatly reduces time spent waiting for feedback, which is key to achieving a rapid development process.
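
A quick sanity check on those numbers: 2.5 hours is 150 minutes, and 150 / 12 ≈ 12.5, so the benchmark suite sees roughly a 12x speedup. The sketch below works that arithmetic through and shows the idealized wall-clock time for a few device counts; it assumes the suite splits perfectly across identical devices, which real runs only approximate.

    using System;

    class HyperParallelMath
    {
        static void Main()
        {
            double serialMinutes = 2.5 * 60;          // 150 minutes end-to-end on one device
            double observedParallelMinutes = 12;      // the figure quoted for the hyper-parallel run

            Console.WriteLine($"Observed speedup: ~{serialMinutes / observedParallelMinutes:F1}x");

            // Idealized time if the same suite were split evenly over N duplicate devices.
            for (int devices = 2; devices <= 16; devices *= 2)
                Console.WriteLine($"{devices,2} devices -> ~{serialMinutes / devices:F0} minutes");
        }
    }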

“As mobility continues to pervade our work and personal lives, developers are under more pressure than ever to build high quality apps quickly,” said Nat Friedman, CEO and cofounder, Xamarin. “Because mobile is so strategic to business growth and competitive advantage, developers are holding their company’s future in their code. Xamarin will be with them at every step of the mobile app lifecycle, making things faster and easier so that they can focus on delivering great apps.”

A little bit of history about the Shellshock bug

The year was 1987, and as Brian Fox drove cross-country to his new home, the tapes in his trunk held a software program called Bash, a tool that Fox had built for the UNIX operating system and tagged with a license that let anyone use the code and even redistribute it to others. Fox—a high school dropout who spent his time hanging out with MIT computer geeks such as Richard Stallman—was a foot soldier in an ambitious effort to create software that was free, hackable, and unencumbered by onerous copy restrictions. It was called the Free Software Movement, and the idea was to gradually rebuild all of the components of the UNIX operating system into a free product called GNU and share them with the world at large. It was the dawn of open source software.

Brian Fox.

Fox and Stallman didn’t know it at the time, but they were building the tools that would become some of the most important pieces of our global communications infrastructure for decades to come. After Fox drove those tapes to California and went back to work on Bash, other engineers started using the software and even helped build it. And as UNIX gave rise to GNU and Linux—the OS that drives so much of the modern internet—Bash found its way onto tens of thousands of machines. But somewhere along the way, in about 1992, one engineer typed a bug into the code. Last week, more than twenty years later, security researchers finally noticed this flaw in Fox’s ancient program. They called it Shellshock, and they warned it could allow hackers to wreak havoc on the modern internet.

Shellshock is one of the oldest known and unpatched bugs in the history of computing. But its story isn’t that unusual. Earlier this year, researchers discovered another massive internet bug, called Heartbleed, that had also languished in open source software for years. Both bugs are indicative of a problem that could continue to plague the internet unless we revamp the way we write and audit software. Because the net is built on software that gets endlessly used and reused, it’s littered with code that dates back decades, and some of it never gets audited for security bugs.

When Bash was built, no one thought to audit it for internet attacks because that didn’t really make sense. “Worrying about this being one of the most [used] pieces of software on the planet and then having malicious people attack it was just not a possibility,” Fox says. “By the time it became a possibility, it had been in use for 15 years.” Today, it’s used by Google and Facebook and every other big name on the internet, and because the code is open source, any of them can audit it at any time. In fact, anyone on earth can audit it at any time. But no one thought to. And that needs to change.

How the Web Was Built

In digital terms, Fox’s Bash program was about the same size as, say, a photograph snapped with your iPhone. But back in 1987, he couldn’t email it across the country. The internet was only just getting off the ground. There was no world wide web, and the most efficient way to move that much data across the country was to put it in the trunk of a car.

Bash is a shell utility, a text-based way of interfacing with an operating system that predates the graphical user interface. If you’ve used Microsoft’s Windows command prompt, you get the idea. That may seem like an archaic thing, but as the internet took off, fueled by web browsers and the Apache web server, the Bash shell became a simple yet powerful way for engineers to glue web software to the operating system. Want your web server to get information from the computer’s files? Make it pop up a bash shell and run a series of commands. That’s how the web was built—script by script.
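
Here is a minimal sketch of that glue pattern in C#: a program pops a bash shell, runs a command, and reads back the output. It assumes a Unix-like machine with bash on the PATH; the command and environment variable are made up for illustration.

    using System;
    using System.Diagnostics;

    class ShellGlue
    {
        static void Main()
        {
            var psi = new ProcessStartInfo
            {
                FileName = "bash",
                Arguments = "-c \"ls -l /var/www 2>/dev/null | head -n 5\"",
                RedirectStandardOutput = true,
                UseShellExecute = false,
            };

            // Web software historically handed request data to helpers like this through
            // environment variables -- the same hand-off the Shellshock bug abused.
            psi.Environment["REQUEST_INFO"] = "example";

            using var bash = Process.Start(psi);
            Console.WriteLine(bash.StandardOutput.ReadToEnd());
            bash.WaitForExit();
        }
    }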

Today, Bash is still an important part of the toolkit that helps power the web. It’s on the Mac, and virtually any company that runs the Linux operating system, the descendant of UNIX, uses it as a quick and easy way to connect computer programs—web server software, for example—with the underlying operating system.

But the lead developer of the program doesn’t work for any of these big names. He doesn’t even work for a tech company. His name is Chet Ramey, and he’s a coder at Case Western Reserve University in Cleveland. He works on Bash in his spare time.

‘Quite a Long Time’

In the late 1980s, Ramey took over from Brian Fox as the lead developer of Bash, and this September 12, he received an email from a security researcher named Stephane Chazelas that identified the Shellshock bug. It was a serious security vulnerability that the world learned about last week. Within hours, hackers had released code that could take over vulnerable machines and turn them into a malicious botnet.

Chet Ramey in an undated family photo.

Ramey doesn’t have access to the project’s source code revision logs dating back to the early ’90s, but he thinks that he probably wrote the buggy code himself, sometime around 1992. That would make it the oldest significant-yet-unpatched bug we’ve heard of here at WIRED. We checked with someone who would know—Purdue University Professor Eugene Spafford—and he couldn’t top it. “I can’t recall any others that were [unpatched] quite as long as this,” he says. “There are undoubtedly a number that have been out there longer, but the combination of age and potential impact would not be as large.”

But it’s a situation that feels eerily familiar to anyone who followed Heartbleed, which was discovered in a widely used open-source project called OpenSSL. Like the OpenSSL software, Bash has never had a full-blown security audit, and it’s developed by a skeleton crew with virtually no financial support. That, unfortunately, is the story of the internet.

The Lie of ‘Many Eyes’

For Robert Graham, the CEO of consultancy Errata Security, Shellshock gives the lie to a major tenet of open-source software: that open-source code permits “many eyes” to view and then fix bugs more quickly than proprietary software, where the code is kept out of view from most of the world. It’s an idea known as Linus’s Law. “If many eyes had been looking at bash over the past 25 years, these bugs would’ve been found a long time ago,” Graham wrote on his blog last week.

Linus Torvalds—the guy that Linus’s Law is named after and the guy who created the Linux operating system—says that the idea still stands. But the fallacy is the idea that all open-source projects have many eyes. “[T]here’s a lot of code that doesn’t actually get very many eyes at all,” he says. “And a lot of open-source projects don’t actually have all that many developers involved, even when they are fairly core.”

This kind of issue comes up with any software code—whether it’s open source or not. After all, it’s even harder to tell how many bugs like this may lurk in closed-source software such as Oracle’s database. About a decade ago, Microsoft faced serious security problems because parts of its software weren’t properly audited. But after the Blaster worm tore through systems running Microsoft’s Windows operating system in 2003, the company made security audits a priority. Over the course of the next decade, it improved the standards of its code. Microsoft spent millions on security audits, and it hired white-hat hackers, called pen testers, to test its software. Now, the open source community is starting to do the same thing.

This May, not long after the public first learned about the Heartbleed vulnerability, the Linux Foundation amassed a $6 million war chest to shore up the security on a few widely used open source projects, including OpenSSL, OpenSSH, and the Network Time Protocol. But Bash wasn’t on the list. “This was not predicted,” says Jim Zemlin, the Foundation’s executive director. “But certainly, my guys are reaching out to those folks to see how we can help as we speak.”

That’s all well and good. But the trick is to shore up the internet before the bugs are found. Hopefully, the Linux Foundation—and the Googles and the Facebooks—can do so.

Even with Shellshock, Brian Fox is still proud of the project he once drove across the country. “It’s been 27 years of that software being out there before a bug was found,” he says. “That’s a pretty impressive ratio of usage to bugs found.”

Using Visual Studio AutoRecover to Avoid Losing Your Work (and your Mind)


If you have ever worked in an environment that may not have been the most reliable or you are simply a worry-wart when it comes to possible data loss, then this might be for you.

While discussing a recurring issue with another developer, I learned that his older computer frequently crashed, which resulted in him losing work quite often. He didn’t seem to be aware of a feature within Visual Studio called AutoRecover that would help ensure his work wasn’t always “gone forever”. While it typically works on its own, this post will discuss AutoRecover, configuring it, and locating the temporary files it creates if you ever need to find them.

What is AutoRecover?

AutoRecover is an option within Visual Studio that lets you define an interval at which Visual Studio saves information about all of the files that are changing or have changed within your application. It’s perfect for scenarios where you might have a machine that is on the verge of death, intermittent power issues, data-loss anxiety, and just about anything else that might have you spouting off expletives when your screen goes black (or blue).

How do I use it?

You can access the AutoRecover option in Visual Studio through the following steps:

  1. Select Tools from the menu bar in Visual Studio.
  2. Choose Options.
  3. Select AutoRecover under the Environment section.

You’ll then be presented with the AutoRecover options:

[Screenshot: the AutoRecover options in Visual Studio]

You’ll just need to adjust these values depending on your needs and how erratic your machine can be.

Accessing AutoRecover Files

Accessing the files themselves is quite easy as well. By default, Visual Studio stores all of these files within a “Backup Files” directory, which is typically found at the following location:

~/Documents/{Your Version of Visual Studio}/Backup Files/{Your Project Name}
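
If you’d rather not dig through Explorer, a few lines of C# will list everything AutoRecover has squirreled away under that default path. The Visual Studio version folder name differs per install, so it is left as a placeholder for you to fill in, just as it is in the path above.

    using System;
    using System.IO;

    class ListAutoRecoverBackups
    {
        static void Main()
        {
            // Fill in the folder name for your install, i.e. the one you see under Documents.
            string vsVersionFolder = "{Your Version of Visual Studio}";

            string backupRoot = Path.Combine(
                Environment.GetFolderPath(Environment.SpecialFolder.MyDocuments),
                vsVersionFolder,
                "Backup Files");

            if (!Directory.Exists(backupRoot))
            {
                Console.WriteLine($"No backup folder found at {backupRoot}");
                return;
            }

            // Each project gets its own subfolder of original and recovered files.
            foreach (string file in Directory.EnumerateFiles(backupRoot, "*", SearchOption.AllDirectories))
                Console.WriteLine(file);
        }
    }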

Within this folder, you will see a collection of files, both originals and previously recovered versions, as seen below:

[Screenshot: contents of the Backup Files folder]

Visual Studio will typically prompt you to open these files when you open an affected solution or project after an unexpected shutdown, but this should provide an easy way to get at them if things do go horribly wrong.