Learning to Code with Robots

With STEM education becoming more prevalent these days, I was curious about the number of toys on the market geared towards teaching kids how to code. With so many options out there, which toy is the best investment? In the interest of scientific inquiry, I picked up four popular toys that support both simple block-based coding and more advanced programming languages and gave them a try.

The Robots

Dash

Dash is an adorable little robot created by Wonder Workshop. It also has a little sibling, Dot, which is a non-mobile version of Dash, and the two robots can be programmed to communicate with each other. The first thing you will notice about Dash is its giant white LED eye and the cheery “Hi!” greeting when you turn it on. Not only can Dash move around, but it can also turn its head and react to voices and claps. It has one colored light on each side of its head and one colored light below its eye. As far as peripherals go, Dash has three embedded microphones for sensing sound, two infrared (IR) sensors for sensing distance, and a speaker for playing sound. Dash can be programmed via an iPad or Android tablet.

Sphero

Sphero is the simplest robot of the group. It does one thing, but it does it well: roll. The entire ball lights up with RGB LEDs, which can be controlled independently from the motion. Sphero is also surprisingly fast – it can reach a top speed of 4.5 miles per hour. There is also a neat clear version of Sphero aimed at education. Sphero doesn’t have any external sensors per se, but it can detect impacts and being picked up thanks to an internal gyroscope and accelerometer. When you first turn on Sphero, you have to run an orientation calibration routine so it knows where you are in relation to the robot. Sphero can be programmed from an iOS (iPhone/iPad/iPod Touch) or Android device.

LEGO Mindstorms EV3

The most complex of the four robots is the LEGO Mindstorms EV3. This kit comes with a programmable brick, a handful of sensors, two motors, and 550+ LEGO Technic parts for creating just about anything you can imagine. To build a LEGO robot, you first build a LEGO structure, then attach the programmable brick and various sensors or motors depending on what you want your robot to do. The sensors connect to the programmable brick via connector cables, and the robot is programmed from a Mac or Windows computer. Although this approach can be very time-consuming, it also seems to be the most flexible. There are many books and websites available to walk you through different robot builds and corresponding sample programs if you’re not quite sure where to start. For this evaluation, I built the standard TRACK3R robot from the Mindstorms manual, using the infrared sensor for distance detection.

mBot

Another extensible robot is Makeblock’s mBot. Makeblock’s robots are built on an open source Arduino-based platform. The mBot is similar in spirit to the LEGO Mindstorms: you can combine a number of sensors with aluminum structural parts to build just about anything you can think of. Makeblock also offers many robotics kits of varying complexity, such as a 3D printer kit and an XY plotter kit (which can also be converted to a laser engraver). The mBot kit is specifically geared towards STEM education and comes with a number of sensors, such as an ultrasonic sensor, an infrared receiver, and a line follower, as well as some on-board color LEDs. All robots on Makeblock’s platform can be programmed from a Windows or Mac computer using either their mBlock software or the Arduino software.

The Test Course

For the evaluation, I set up a simple test course and assessed how hard it was to make each robot navigate it accurately. The course consisted of these simple steps (a rough pseudocode sketch follows the list):

  • Go straight until it senses/runs into the first barrier
  • Flash the lights
  • Turn right
  • Go straight until it senses/runs into the second barrier
  • Flash the lights
  • Turn left
  • Go straight for a short distance
  • Spin in a circle three times
  • Flash the lights
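
To make the course concrete before getting into each robot, here is a rough pseudocode sketch of it in Python. None of these function names correspond to any robot’s actual commands; they are just stand-ins for the kinds of blocks each app provides.

# Hypothetical commands: drive_until_obstacle(), flash_lights(), turn(),
# drive(), and spin() are stand-ins, not any robot's real API.
def run_course(robot):
    for direction in ("right", "left"):
        robot.drive_until_obstacle()   # go straight until a barrier is sensed/hit
        robot.flash_lights()
        robot.turn(direction)          # right at the first barrier, left at the second
    robot.drive(distance_cm=30)        # go straight for a short distance (value is arbitrary)
    robot.spin(times=3)                # spin in a circle three times
    robot.flash_lights()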

Programming the Robots

For each robot, I used its block-based programming language to program the course instructions. For the uninitiated, block-based programming is a process where you drag block-like icons around a screen to create a chain of commands that represents a simple program. This method was widely popularized in education circles by MIT’s Scratch. By simplifying coding this way, people can become acquainted with the core concepts of programming without having to worry about the nuances of specific programming languages. Once someone is familiar with the basics, it’s easier to understand more complex issues such as language syntax and scoping.


Dash

To program Dash, I used the accompanying iPad app, Blockly. The Blockly app has several command groups in the sidebar. To add a command to your program, simply tap the category you want, select the command, and drag it over to the program area. The commands snap together to make a long vertical chain of commands, which is executed when you click the start button. Blockly also supports using a few simple variables in the code if you want to keep track of things like the number of times Dash encountered an obstacle.

Blockly Program

One thing I really liked about Blockly was that many of the options were presented in terms of real-world values. So, for example, when you programmed Dash to move forward, you could select the distance in centimeters.

Dash Speed

All in all, the Blockly app was simple and easy to use. Connecting to the robot was as simple as holding down the robot icon until a green progress bar was full, indicating that the connection had been established. The only real issue I had with Blockly was that the app crashed on me a few times while trying to program Dash. This was not a serious deal breaker as my program was intact when I reopened the app.


Sphero

To program Sphero, I used the SPRK app on my iPad. Just like Dash, there are groups of commands at the bottom of the screen, and you simply drag the command you want into the program and snap it into place. Once your program is ready, click the run button and Sphero will start executing the commands. The SPRK app allows you to modify preset variables such as speed and heading, as well as create your own custom variables.

The SPRK app uses a different programming paradigm than the other robots. Instead of writing one linear sequence of commands, you attach a block of code to a given event (such as a collision), and that block is executed every time the event happens. This made programming Sphero a little abstract at times and could be hard for someone new to programming to understand.

Sphero On Collision
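
To show how differently the same course looks in this style, here is a rough Python sketch of the event-driven version, as a contrast to the procedural sketch after the course list above. The names (on_collision, roll, and so on) are hypothetical and are not Sphero’s actual SPRK or Oval commands.

# Event-driven sketch: a handler is attached to the collision event and runs
# every time Sphero hits something; the main program just keeps it rolling.
# on_collision(), roll(), flash_lights(), and turn() are made-up names.
def event_driven_course(robot):
    collisions = {"count": 0}

    def handle_collision():
        collisions["count"] += 1
        robot.flash_lights()
        # Turn right at the first barrier, left at the second.
        robot.turn("right" if collisions["count"] == 1 else "left")

    robot.on_collision(handle_collision)  # register the event handler
    robot.roll(speed=50)                  # keep rolling; all reactions happen in the handler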

I didn’t particularly like the SPRK app. I found the large amount of unusable space on the right side annoying, especially since I could not rotate the app to make use of it. I also found that the SPRK app did not give simple feedback when there was something wrong with a program. For example, I got this somewhat cryptic error when trying to flash the lights when a collision occurs:

SPRK Error

That being said, one thing I really did like about the SPRK app was that at any time you could click on a code icon in the upper left corner and see how your block-based code translated into their Oval coding language. Being able to look at the underlying code could be really useful when transitioning to writing code in the corresponding programming language. I was also pleased with how simple it was to connect the robot to the SPRK app. The app would automatically establish the connection after the initial Bluetooth setup.

Oval Code

Mindstorms EV3

I programmed the EV3 using the corresponding Mindstorms software on a Mac computer. With this software, blocks are laid out horizontally and can be broken into separate lines to help with readability. I connected my EV3 to my computer via a Bluetooth connection, which not only allowed me to program the robot remotely, but also allowed me to see the real-time sensor values in the lower right corner. Executing a program was as simple as clicking the download and run buttons in the lower right corner.

Mindstorms Software

One thing that is initially frustrating about the Mindstorms software is that, much like LEGO manuals, it doesn’t really use words anywhere. At times it’s not intuitive how certain blocks should be used. Additionally, since the TRACK3R robot uses tank treads, there was no simple “move forward” command; instead I had to specify the power and direction of each tread. Movement is time-based, so there’s no simple way of translating a duration into a real-world distance.

Even though the Mindstorms software feels a bit abstract at times, it is still pretty powerful. The software supports custom variables and also allows you to build custom blocks using its My Block Builder feature. You can also add comments to your program to help you keep track of what you are doing. Additionally, the software allows you to add your own sounds and images to be used by the robot. One interesting feature of the EV3 is that the programmable brick has its own program editor, so you can modify the program on the robot without having to use the computer. As far as connecting the EV3 goes, I found the Bluetooth connection to be a bit hard to establish at times; more than once I had to plug the EV3 into the computer via a USB cable just to reestablish a lost Bluetooth connection.


mBot

Makeblock has its own derivative of Scratch called mBlock. In fact, it still has many of the same elements as Scratch, so you can make a cartoon panda dance on your screen while your robot is moving about. I found this handy for understanding what the robot was doing at times – I could just have the panda display the sensor values on my computer screen while my robot was running. I programmed mBot using the mBlock software on a Mac computer, connecting to the robot over a 2.4 GHz wireless serial connection. Connecting to the robot was as simple as selecting the connection type I wanted in the Connect menu.


The mBlock version of Scratch also has an Arduino mode, which lets you see how your mBlock program translates to Arduino code. To use this mode, you cannot have any non-robot sprite commands in your program (so no dancing pandas). Much like the SPRK app’s Oval view, this helps you visualize how the blocks translate to real code. Unfortunately, the generated Arduino code can be a bit cryptic, especially for someone who is not used to staring at Arduino code.

mBlock Arduino

I thought that the mBlock software was really well designed and powerful. Those who have used Scratch before will find it very easy to use. The window views are configurable, so you can hide or resize the different panes of the software. Like the Mindstorms software, mBlock allows you to create your own blocks or create custom variables for storing data. The mBlock app did crash on me a few times, but I was easily able to reload my work from a saved file.


Here is a video of how each of the robots performed on the evaluation course:

Of the four robots, Dash was the fastest and easiest to program. The Blockly app was intuitive and Dash consistently executed the course as expected. The only issue I had with Dash was that it couldn’t navigate well on a rug.

The EV3 also made short work of the obstacle course. It took a bit of trial and error to figure out the exact motor settings for some of the tasks like turning. However, once the program was written, it consistently navigated the course without issue.

Sphero fared the worst of the four robots. Writing the Sphero code in an event-driven model felt somewhat unintuitive compared to the procedural style used by the other robots. Also, because Sphero doesn’t really have a front, its orientation had to be recalibrated each time I picked it up and reran the course, which quickly became annoying. Slight variances in calibration caused Sphero to veer off in different directions, and it took multiple attempts to get the robot to execute the course correctly.

I really wanted to love mBot, but at the end of the day there were some issues with it. First, for some reason, the power delivered to the two wheels on my robot was uneven, so the robot would always veer slightly to the right. A thorough inspection found no obvious cause, and posts on the Makeblock forum showed that other people were experiencing the same issue with mBot. Second, the ultrasonic sensor readings were not normalized at all, so unexpected variances in the readings sometimes caused the robot to turn prematurely. These issues made the evaluation runs far more frustrating than they should have been. Just like Dash, mBot also had issues navigating on a rug.


All in all, these are all great toys, and any one of them would be an asset in getting anyone (especially kids) interested in programming. Dash was the easiest to use, so I think it would be a great first robot for anyone, especially a younger child. The major drawback is that since most of the hardware is fixed, I could see this robot getting boring after a while. Furthermore, since Dash currently only supports programming languages with complex syntax like Objective-C and Java, it would be harder to transition from Blockly’s block-based programming to a full programming language.

I think the Mindstorms is the best option for people who want a platform on which they can grow. The LEGO hardware and software worked as expected without any issues. The Mindstorms software can be a bit confusing at first, but once you get past the initial learning curve, it’s very powerful. As it’s been around the longest, it has lots of support material and supports many programming languages, some of which are easier to learn (like Python). The major drawback to the EV3 robot is the high price point, which may not make it an ideal starter robot while you are still gauging your interest in coding.

The lower price point and great mBlock software still make the Makeblock mBot kit an attractive option. My hope is that some of the initial kinks in the platform will be worked out over time. It may be wiser to try a different Makeblock kit, like the slightly more expensive starter robot kit, which comes with tank treads instead of wheels and a Bluetooth adapter that allows the robot to be controlled from a mobile device. Much like Dash, the Makeblock robots can only be programmed with the more complex Arduino language, which could make the transition to a full programming language more difficult. Fortunately, the Arduino mode in the mBlock software can help with that translation.

Comparison Chart

Robot | Dash | Mindstorms EV3 | Sphero 2.0 | mBot
Price | $170 | $350 | $130 | $79
Age Range | 5+ | 10+ | 8+ | 8+
Power | Rechargeable battery via micro USB | 6 AA batteries, rechargeable battery pack (sold separately) | Rechargeable battery via dock | 4 AA batteries, rechargeable battery pack (sold separately)
Run time | About 5 hours | Varies with configuration | About 1 hour | Varies with configuration
Connectivity | iOS (iPad) and Android via Bluetooth, computer via USB (future) | Computer via USB, Bluetooth, WiFi (adapter not included) | iOS (iPhone/iPad/iPod Touch) and Android via Bluetooth, computer via Bluetooth | Computer via USB or wireless serial, WiFi (adapter not included), Bluetooth (adapter not included)
Beginner Programming | Blockly app | LEGO Mindstorms EV3 software | SPRK app, Blockly Beta (via Chrome browser), Macro Lab app, orbBasic app | mBlock software
Advanced Programming | Objective-C, Java (both still in private alpha) | Ada, C/C++, Python, Java, C#, Perl, Visual Basic, Lisp, Prolog, Haskell, and more | Objective-C, Swift, Android, Python, Ruby, Arduino, Node/JavaScript, and more | Arduino
Included Sensors | Infrared, microphone/sound | Infrared (and tracking beacon), color/light, touch | Internal gyroscope and accelerometer | Infrared (and remote), ultrasonic, line follower
Optional Sensors | None | Ultrasonic, sound, gyroscope | None | Accelerometer, compass, light, passive infrared, temperature, sound, touch
Sounds | Yes (fixed set) | Yes | No | Buzzer only
Lights | Front and side RGB lights, white eye light | Red/green/amber LED on power brick | One RGB light | Two RGB LEDs on board, many RGB LED modules (sold separately)

Pedestrian Safety in Manhattan

For the final project in my Realtime and Big Data Analytics class at NYU, I worked on an analysis of the effectiveness of pedestrian safety measures in Manhattan with fellow students Rui Shen and Fei Guan. The main idea behind this project was to look at the number of accidents occurring within a fixed distance of an intersection in Manhattan and determine if the accident rate correlated with any features of the intersection, such as the presence of traffic signals or high traffic volume. We used a number of big data tools and techniques (like Apache Hadoop and MapReduce) to analyze this data and found some rather interesting results.

The first step was to collect data about intersections, accidents, and various features of the intersections. To do this, we relied heavily on open data sets. We extracted the locations of intersections, speed bumps, and traffic signals from OpenStreetMap. We used NYC Department of Transportation data for traffic volume information, traffic signal locations, and traffic camera locations. Finally, we used NYC Open Data for information on accident counts and traffic volume, as well as the locations of speed bumps, arterial slow zones, and neighborhood slow zones. Some of the data could be used mostly off the shelf, but other datasets required further processing, such as normalizing traffic volume over time and geocoding the street addresses of traffic camera locations.

The next step was to merge the feature and accident data with the relevant intersections. To do this, we used big data tools to assign intersection identifiers to every corresponding feature and accident record. As Hadoop can’t natively handle spatial data, we needed some additional tools to help us determine which features existed within an intersection. There were three distinct types of spatial data that we needed to process: point data (such as accidents), line data (such as traffic volume), and polygon data (such as neighborhood slow zones). Fortunately, GIS Tools for Hadoop helped us solve this problem. The GIS Tools implement many spatial operations on top of Hadoop, such as finding spatial geometry intersections, overlaps, and inclusions. This toolkit also includes User Defined Functions (UDFs) which can be used with Hive. For this task, we used Hive and the UDFs to associate the feature and accident data with the appropriate intersections. We experimented with different sizes of spatial buffers around an intersection and decided that a twenty-meter radius captured most of the related data points without overlapping with other intersections.

Examples of the different types of spatial data that could exist within an intersection: area data (blue), point data (purple) and line data (green).
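
The buffer test itself is easy to express outside of Hadoop. As a rough illustration of the buffer-and-contains logic (not our actual Hive/UDF pipeline), here is a small Python sketch using the Shapely library; the coordinates are made up and assume a projected coordinate system measured in meters.

# Illustrative only: the "is this feature within 20 meters of the intersection?"
# test that we expressed with the GIS Tools for Hadoop UDFs in Hive.
# Assumes coordinates are already projected so that distances are in meters.
from shapely.geometry import Point, LineString

intersection = Point(585000.0, 4511000.0)   # hypothetical intersection centroid
buffer_20m = intersection.buffer(20.0)      # 20-meter radius around it

accident = Point(585012.0, 4511005.0)       # point data, e.g. an accident
street = LineString([(584900.0, 4511000.0), (585100.0, 4511000.0)])  # line data

print(buffer_20m.contains(accident))        # True: the accident falls inside the buffer
print(buffer_20m.intersects(street))        # True: the street segment crosses the buffer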

Once all of the relevant data had an intersection identifier assigned to it, we wrote a MapReduce job to aggregate all of the distinct data sets into one dataset that had all of the intersection feature information in a single record. In the reduce stage, we examined all of the data for a given intersection and did some further reduction, such as normalizing the traffic volume value for the intersection or calculating the sum of all of the accidents occurring within the intersection buffer.
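
As a rough sketch of what the reduce side of that job looks like, here is a Hadoop Streaming style reducer in Python. The tab-separated field layout (intersection id, record type, value) is a made-up simplification, not the schema we actually used.

#!/usr/bin/env python
# Sketch of a Hadoop Streaming reducer: input lines arrive sorted by
# intersection id, so we can aggregate each intersection's records in one pass.
import sys
from collections import defaultdict

def emit(intersection_id, totals):
    # One output record per intersection with its aggregated features.
    print("%s\taccidents=%d\tvolume=%.2f\tsignals=%d" % (
        intersection_id, totals["accident"], totals["volume"], totals["signal"]))

current_id, totals = None, defaultdict(float)
for line in sys.stdin:
    intersection_id, record_type, value = line.rstrip("\n").split("\t")
    if intersection_id != current_id:
        if current_id is not None:
            emit(current_id, totals)
        current_id, totals = intersection_id, defaultdict(float)
    totals[record_type] += float(value)

if current_id is not None:
    emit(current_id, totals)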

The last step was to calculate correlation metrics on the data. To do this, we used Apache Spark. We segmented the data set into thirds by traffic volume, giving us low, moderate, and high traffic volume data sets.  We then calculated Spearman and Pearson correlation coefficients between the accident rate and the individual features and then analyzed the results. Although most features showed very little correlation with the accident rate, there were a few features that produced a moderate level of correlation. First, we found that there is a moderate positive correlation between accidents and the presence of traffic lights. This seemed odd at first but on second consideration it made sense. I have seen many random acts of bravery occur at traffic signals where people would try to cross the street just as the light was changing. Second, we found that there was a moderate negative correlation between high traffic volume and accidents. Again, this was not immediately intuitive, but our speculation was that drivers and pedestrians would be more cautious at busy intersections.
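
The correlation step itself only takes a few lines in Spark. Here is a minimal PySpark sketch of the kind of calculation we ran using MLlib’s Statistics; the input path and column positions are hypothetical.

# Minimal sketch: correlate accident counts with one feature (here, traffic
# signal presence) using Spark MLlib.  The file path and field positions are
# placeholders, not our actual data layout.
from pyspark import SparkContext
from pyspark.mllib.stat import Statistics

sc = SparkContext(appName="intersection-correlation")

# Each line: intersection_id \t accident_count \t has_signal \t traffic_volume ...
records = sc.textFile("hdfs:///user/project/intersections.tsv") \
            .map(lambda line: line.split("\t"))

accidents = records.map(lambda fields: float(fields[1]))
signals = records.map(lambda fields: float(fields[2]))

print(Statistics.corr(accidents, signals, method="pearson"))
print(Statistics.corr(accidents, signals, method="spearman"))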

As this project was only a few weeks long, we didn’t have time to do a more in-depth analysis. I think we would have found even more interesting results had we done a proper multivariate analysis, which would have let us calculate correlation metrics across all variables instead of just examining single-variable correlations. One observation we made was that intersections in high-traffic business or tourist areas have different accident profiles than intersections in residential areas. Therefore, it would be wise to include more socio-economic information for each intersection, such as land-use and population data.

Despite the time constraints, the small amount of analysis we did was very interesting and made me look at something as simple as crossing the street in a whole new light.

Live Streaming Video With Raspberry Pi


Much to my delight, I discovered that a pair of pigeons are nesting outside of my window. I decided to set up a live streaming webcam so I can watch the young pigeons hatch without disturbing the family. Instead of buying an off-the-shelf streaming solution, I used a Raspberry Pi and a USB webcam. Here is how I set up live streaming video using my Pi and Ustream.

For this project, I used a Raspberry Pi Model B+, a USB WiFi adapter, a microSD card, a USB webcam and a 5 volt power adapter. When selecting a USB webcam, try to get something on the list of USB webcams known to work with Raspberry Pi. It will save you a lot of headaches in the long run!


To start, download the latest Raspbian image and load it onto the SD card. My favorite tool for doing this on a Mac is Pi Filler. It’s no-frills, easy to use and free! It may help to connect the Pi to a monitor and keyboard when first setting it up. Once the Pi first comes up, you will be prompted to set it up using raspi-config. At this time, it’s a good idea to expand the image to use the full card space and set the internationalization options to your locale so that your keyboard works properly.

Once the Raspberry Pi boots up, there are a few things that need to be updated and installed. First, it’s a good idea to update the Raspbian image with the latest software. I also like to install webcam software, fswebcam, so I can test that the webcam works before setting up video streaming. Finally, you’ll need ffmpeg, which is software capable of streaming video. The following commands will set up the Raspberry Pi:

sudo apt-get update
sudo apt-get upgrade
sudo apt-get install fswebcam
sudo apt-get install ffmpeg

After installing the software, it’s a good idea to check whether the webcam works with the Raspberry Pi. To do this, take a test picture with fswebcam, which attempts to grab a single photo from the webcam:

fswebcam photo.jpg

If the photo looks good, then you are ready to set up streaming video. First, set up a Ustream account. I set up a free account which works well despite all of the ads.  Once you set up your video channel, you will need the RTMP URL and stream key for the channel. These can be found in Dashboard > Channel > Broadcast Settings > Encoder Settings.

Next, set up video streaming on the Raspberry Pi. To do this, I used avconv. The documentation for avconv is very dense and there are tons of options to read through. I found this blog post which helped me get started. I then made some adjustments, such as using full-resolution video, dropping the frame rate to 10 frames per second to help with buffering issues, and setting the log level to quiet so as not to fill the SD card with logs. I also disabled audio recording so I wouldn’t stream the laments of my cat for not being allowed to ogle the pigeons. I wrote this control script for my streaming service:


case "$1" in 
      echo "Starting ustream"
      avconv -f video4linux2 -r 10 -i /dev/video0 - pix_fmt yuv420p -r 10 -f flv -an -loglevel quiet <YOUR RTMP URL>/<YOUR STREAM KEY> &
      echo "Stopping ustream"
      killall avconv
      echo "Usage: ustream [start|stop]"
      exit 1

exit 0

Make sure the permissions of your control script are set to executable. You can then use the script to start and stop your streaming service. Before placing the webcam, it’s a good idea to see if you need to make any additional updates to the Raspberry Pi for your webcam to work. The webcam I chose, a Logitech C270, also required some modprobe commands to keep from freezing. Finally, it’s a good idea to add your control script to /etc/rc.local so that the streaming service automatically starts in case your Raspberry Pi accidentally gets rebooted.

And that’s it! There is a delay of several seconds in the stream, so within a minute you should see live streaming video on Ustream. One word of caution on working with the Raspberry Pi: be sure to shut down the Raspbian operating system before unplugging the Raspberry Pi. Simply pulling the power can corrupt the SD card, which will cause the operating system to kernel panic and refuse to boot. Sadly, the only solution for this is to reinstall Raspbian and start all over again.

Once my webcam was up, I found that I had some issues positioning the camera effectively. To solve this, I bought a cheap mini camera tripod. I then dismantled the clip of my webcam and drilled a 1/4″ hole in the plastic so it would fit on the tripod. I put a 1/4″-20 nut on the top of the screw and I was good to go!


I will be live streaming the pigeon nest for the next month or so on this Ustream channel (Update: the baby pigeons have grown up and left the nest, so pigeon cam has been taken down).  I’ve learned a lot about pigeons by watching them every day. The squabs should hatch during the upcoming week and I am excited to watch them grow!


Master of Science

Things have been quiet on the project front recently as I have been busy finishing up one of my largest pursuits to date: a master’s degree in Computer Science from the Courant Institute of Mathematical Sciences at New York University. I completed my degree part-time while working a full-time engineering job. It took me ten semesters to complete, which roughly translates to four academic years.

The quality of the education at NYU Courant was mostly good. I had some excellent professors who were experts in their respective fields. A few of my favorite classes were Realtime and Big Data Analytics, Operating Systems, and Statistical Natural Language Processing. Sadly, there were also a few classes with room for improvement. Some of my worst experiences included poorly organized professors and incredibly bland or irrelevant lectures. Despite those flaws, I felt that overall the program was challenging and interesting.

I met many fantastic people while getting my degree. During my time there I was involved with NYU’s Women in Computing (WinC) group. WinC enabled me to be part of a community of other women computer science students at all levels. I even gave a few talks on behalf of WinC about my experiences as a woman in engineering, such as at the NYC Girls Computer Science and Engineering Conference at NYU and at the Women Charting Technical Career Paths event at the Apple Store in SoHo.

Speaking at the Women Charting Technical Career Paths event at the Apple Store in SoHo

So is it worthwhile to get a master’s degree? There are three things to consider when deciding whether to pursue graduate school: the value of the degree, the financial cost, and the time investment.

First and foremost, it’s important to consider how much value the degree will add to your career. As far as technical skills go, there are other ways of gaining the same skill set an advanced degree provides. Many courses similar to those required for my master’s program can also be taken online through free class sites like Coursera and Udacity. Furthermore, the software industry tends to be a meritocracy in that your previous work experience can outweigh the name on your diploma. This means that a graduate degree may not add a lot of value if you already have an established career. Even with these considerations, having a master’s degree on your resume can open doors to opportunities that might not otherwise be available, and many companies prefer candidates with advanced degrees, especially at senior levels.

The cost is another factor to consider. My degree was $58,877, not including any books or materials. The degree would have been prohibitively expensive if my company had not helped me pay for it.

Finally, it’s important to consider how much time you have to invest in graduate school. Pursuing a degree full-time means that you will most likely not be earning wages for two years, whereas a part-time program means that you will have limited free time for multiple years and the additional pressure of a career on top of graduate school. I had vastly underestimated how many weekends and late nights I would spend on class assignments. It meant making a lot of personal sacrifices and sitting inside working while everyone else was playing outside in the sunshine.

Speaking at the NYC Girls Computer Science and Engineering Conference at NYU

I’ve considered whether it would have been better to work on my master’s degree right after finishing my bachelor’s degree. I think my years of industry experience served me well in graduate school: my technical skills were more mature when I started the degree, and I had a better idea of what topics I wanted to pursue. It would have been nice to dedicate my time fully to the master’s program, but after a few years of work it’s a hard decision to stop working and go back to school full-time. Additionally, as my company was paying for my degree, I did not have the option to take time off. All said and done, I’m glad I decided to pursue the degree part-time. A number of times my coursework lined up nicely with my professional work and I was able to apply what I had learned directly to my job.

Despite all of the personal sacrifices, I am still happy with my decision to get a master’s degree. It was quite the achievement but I am also happy that it is finally done. I have been learning what it’s like to have free time again and I am starting to tackle my ever-growing project list.

Spark Core

I’ve been spending some time playing with the Spark Core. This device is an open source, ARM-based microcontroller board with on-board WiFi. It belongs to the Spark OS ecosystem, which aims to be an easy, secure, and scalable way to connect devices to graphical interfaces, web services, and other devices. One interesting aspect of the Spark Core is how you interact with it: it has support for mobile devices (iOS or Android), a web Integrated Development Environment (IDE), and a command line tool.

The Spark Core devices (also known as “cores”) function in tandem with the Spark Cloud service (also called the “cloud”) on the internet. The cloud is where you manage your cores, develop your code, and load applications onto them. Spark Cloud accounts are free and can be created on the Spark build page. Many cores can communicate with each other through a publish/subscribe messaging system made available through the cloud.

The Spark Core comes in a great package. The box promises that “when the internet spills over into the real world, exciting things happen.” Conveniently, the core comes with a breadboard and a micro USB cable right in the box. This all-inclusiveness makes it ideal for beginners. And it even comes with a sticker!

The easiest way to get your core up and running is to use your mobile device. Simply download the Spark mobile application and connect your mobile device to the same network that the core will use. Turn on your core and make sure it is in listening mode. Next, use your mobile application to log into your cloud account. You will then be prompted for the network credentials to be used by the core. This begins a search and registration process in which the mobile device finds the core, connects it to the network, and registers the core to your cloud account. The RGB LED on the core shows the status of the internet connection. Once your core is online and registered to your account, you are ready to start playing with it!


First, I wanted to try interacting with my core from my mobile device. This can be done using a part of the Spark mobile application called Tinker.

Tinker is more of a prototyping app than a dedicated programming environment. It allows you to simulate analog and digital inputs and outputs on the core. Tinker can be integrated with code written for the core so that an application running on your core can interface with the Tinker application on your mobile device. My experience with Tinker was only so-so, as it crashed a number of times on my iPhone 6.

Next, I wanted to try programming my core from the web through the Spark Cloud build website. To do this, I simply logged on to my cloud account which automatically loaded the web IDE. I was curious about how easy it was to import and implement external libraries. To get a feel for this, I tried to connect my core to an LED strip and control it via the Tinker app.

The web IDE is very clean and easy to use. There are mouse-over tips to help you navigate the environment. The controls (located on the left panel of the IDE) are, from top to bottom: flash, verify, save, code, libraries, docs, cores, and settings. Double-clicking any one of these icons expands or collapses the grey information pane.

The Spark Core language is Arduino compatible as it supports the functions defined in the Arduino language specification. It also includes some extra features that enable you to do things like interact with the network settings and subscribe to specific events from the cloud. Unfortunately, many of the Arduino libraries included in the Arduino IDE have not been implemented for the Spark platform. This may create some problems if you are trying to port your old Arduino code to a core.

Including the Adafruit NeoPixel library was very easy. I simply searched the available libraries and clicked the import button for the library I wanted to use. All of the necessary includes were automatically inserted into my code. The library display pane also allowed me to browse and/or import the sample code from the library I selected.

Once my code was complete and verified, I simply clicked the flash button and waited for the cloud to update my core. Success!

Finally, I tried connecting to my core with the Spark Command Line Interface (also called spark-cli). This package is an open source command line tool which uses node.js to program your core. It works over both WiFi and USB (which is handy when the network is unavailable). The spark-cli tool is not packaged well and was a little tricky to install. After installing node.js, I kept getting compile failures; after some digging, I finally got it to work by opening Xcode and accepting some license agreements.

The spark-cli tool allows you to interact with your core in a more advanced way. The command line allows you to log into the core and read any serial output being generated by the application. It also enables you to manage the application running on a core, such as compiling and uploading new applications or reverting the core to its factory state. Much like Tinker, the spark-cli allows you to simulate both analog and digital input or output. It also enables you to publish and subscribe to events in the cloud so that you can communicate with other cores.

On the hardware front, it is important to note that the internal WiFi chip uses an older version of the 802.11 standard. As the Spark Core only supports 802.11b/g, it won’t connect to networks that run 802.11n exclusively. I ran into this issue when moving my core between networks. In this case, I had to connect to the core via USB and use a serial connection to enter my network credentials manually. I later discovered that this could also be done via the spark-cli tool.

Storing all of your code in the Spark Cloud is both a blessing and a curse. Currently, there is no easy way to version your code or to determine what version of a library is available in the web IDE. I fumbled a bit programming the LED strip because I had to dig around to see which version of the NeoPixel library was available. Having the code in a private remote location also makes it harder to share code with other people. And because the core is programmed over the internet, it takes longer to program, which can be too time-consuming if you are doing rapid iterative development. On the positive side, remote code storage and programming mean that you can easily modify and upload your application to any core from any web browser. This means no more frantic searching for the correct cable, code version, library version, and so on.

To give you an idea how the Spark Core stacks up to other ARM-based microcontrollers, I compared it to two other devices in my project box:

Feature | Spark Core 1.0 | Arduino Due | Teensy 3.1
Processor | 72 MHz ARM Cortex M3 | 84 MHz ARM Cortex M3 | 72 MHz ARM Cortex M4
Memory (Flash) | 128 KB | 512 KB | 256 KB
Memory (SRAM) | 20 KB | 96 KB | 64 KB
Voltage | 3.3v | 3.3v | 3.3v
Regulated output voltage | 3.3v | 3.3v and 5v | 3.3v
Cost | $39 | $50 | $20
Size | 1.47" x 0.8" | 4" x 2.1" | 1.4" x 0.7"
Digital pins | 18 | 54 | 34
Analog pins | 8 | 12 | 21
5v tolerant input pins | 7 | 0 | 21
SPI | yes | yes | yes
UART (Tx/Rx) | 1 | 4 | 3
I2C (SDA/SCL) | 1 | 2 | 2
JTAG | yes | yes | no
WiFi | yes (802.11 b/g) | no | no
Programming environment | Web and mobile IDE (WiFi), command line (USB or WiFi) | Arduino IDE (USB) | Arduino IDE + Teensyduino (USB)

The online nature of this device makes it a good choice for people new to Arduino programming. Since the core is internet based, setup is easier than with an Arduino as there are no FTDI drivers to install or serial issues to debug. The RGB LED used for network status is a clever way to assist beginners with debugging connectivity issues. The Spark Core shields are a great starting point for many projects. The Shield Shield makes any Arduino shield compatible with the Spark Core layout, which allows you to take advantage of the large number of Arduino shields already out there. The Spark documentation is very clear and it has a helpful community of users in case you have any questions.

Veteran Arduino programmers can enjoy the advanced features of the Spark OS ecosystem. The distributed nature of the Spark OS makes it simple to connect devices together. The publish/subscribe messaging mechanism allows devices to interact with each other in real time. The RESTful API built into the Spark Cloud makes it easy for any web service to interact with any of your devices on the cloud. On the administrative front, the command line tool gives more power to the user. I was especially pleased that I could use the command line to remotely read the serial output while the core was running.
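
For example, calling a function that your firmware exposes with Spark.function() boils down to a single authenticated HTTP request. Here is a hedged Python sketch using the requests library; the device ID, access token, and the led/temperature names are placeholders, and the endpoint and parameter names follow my reading of the Spark Cloud documentation at the time, so double-check them against the current docs.

# Hedged sketch: call a cloud function and read a cloud variable on a core
# through the Spark Cloud REST API.  DEVICE_ID, ACCESS_TOKEN, "led", and
# "temperature" are placeholders; verify endpoint details against the docs.
import requests

API = "https://api.spark.io/v1/devices"
DEVICE_ID = "0123456789abcdef01234567"   # placeholder device id
ACCESS_TOKEN = "your-access-token-here"  # placeholder OAuth token

# Call a firmware function registered with Spark.function("led", ...).
resp = requests.post(
    "{0}/{1}/led".format(API, DEVICE_ID),
    data={"access_token": ACCESS_TOKEN, "args": "on"},
)
print(resp.json())

# Read a firmware variable registered with Spark.variable("temperature", ...).
resp = requests.get(
    "{0}/{1}/temperature".format(API, DEVICE_ID),
    params={"access_token": ACCESS_TOKEN},
)
print(resp.json())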

All in all, I think this is a great board for both beginners and advanced Arduino users. Just like any new device, the Spark Core has some growing pains to work through. Despite that, it offers some great features that make it easy to look past some of the shortcomings.  The on-board WiFi is a real game changer in the hobbyist microcontroller market. I look forward to more internet-enabled projects!