Sep 18, 2014

Friend to friend car sharing service implementation with Kinoma Create

 I joined the AT&T Hackathon @ Super Mobility Week - Code for Car & Home in Las Vegas on September 6-7 with my two other team members, Rama and Kishore. This hackathon aimed at developing new applications for the connected car and home industry using AT&T APIs and the IoT hardware and software provided by the vendors present. At the hackathon, we tried to build a friend-to-friend car sharing system that is more secure and interactive than existing services. We intended to use the Kinoma Create platform as the in-car unit that interacts with the driver through visual and audio information. We attended a Kinoma Create meetup before the hackathon, and the Kinoma team lent us a pre-release unit to use at the hackathon. We would like to share our experience in this entry.

Our motivation for the AT&T hackathon

Most people in the United States use their car less than 2 hours per day, which is less than a 10% utilization rate. At the same time, when you need an additional car for a weekend trip or for travel outside your home town, renting one is expensive. Our concept takes advantage of connected car technology to easily share these underutilized cars among friends and family. Both parties in the transaction benefit: the borrower can use the car for a fraction of the rental car cost, while the owner can apply the incentive (money or points) from the transaction to future travel. New P2P car sharing services like Getaround and RelayRides are getting popular, but they are not well suited to sharing cars with friends, family, and neighbors: since these companies take a portion of the rental cost, they do not fit scenarios where friends and family use each other's cars. In our scenario we do not consider a monetary incentive, but one can easily be added to our system in the form of points or otherwise.

Our major motivation in this project is to develop an application that helps a friend, trusted neighbor, or family member easily request a car, use it safely, and return it on the agreed schedule. At the same time, our application aims to give the car owner tools to learn about the borrower's past driving score, provide guidelines about using the car, and report on how the car was handled by the borrower. To achieve this, we developed a friend-to-friend car sharing application that addresses the technical challenges discussed below.

Technical challenges for friend-to-friend car sharing

Next, we explain the technical challenges in car sharing that we tried to solve at the AT&T hackathon, based on the scenario above. These are common issues we see when sharing our own cars with friends, family, and neighbors.

Owner’s problem

  • (security) It is difficult to hand over the car keys if the owner is not available at the time of pickup.
    • (1) Real-world authentication
      • If the owner is not available at the time of pickup, he can't check the borrower's ID, and the borrower loses the chance to borrow the car. To make the sharing process easy, the owner needs not only online authentication at reservation time but also real-world authentication that works remotely.
    • (2) Electronic keys for the car and garage
      • If the owner is not available at the time of pickup, he also can't hand over the keys. The owner needs a way to provide electronic keys for the car and garage remotely.
  • (safety) The owner wants his car to be driven without aggressive driving and returned on schedule without any problems.
    • (3) Reinforce good driving behavior
      • Even if the borrower is a reliable person, he might not be a safe driver. The owner might want a way to monitor the borrower's driving and reinforce good driving behavior.

Borrower’s problem

  • (usability) It may be difficult for the borrower to drive the car without knowing its operation details or local driving etiquette.
    • (4) Less distracting operation details for the car
      • It is common not to know how to operate a car that is different from our own. The borrower wants to operate the car the same way as his own car.

System overview

Our solution provides two main features for car sharing. First, it supports automatic borrower authentication through an in-home unit: the unit detects the borrower's proximity using Bluetooth beacon technology and lets him access the car parked inside the garage. Second, we provide authentication and driving-related information through an in-car unit.

Why we chose Kinoma Create

We had planned to use an Arduino and a Raspberry Pi to implement the in-car unit. However, when we joined the Kinoma Create meetup, we found that it meets the following requirements and makes the in-car unit easier to implement, so we decided to use it:
  • Built-in network functionality to communicate with the server and the mobile application
  • A display and touch interface to implement an interactive user interface for the driver

Our impression on Kinoma Create

The Kinoma platform helped us quickly develop a simple prototype of our application thanks to its hardware programming capabilities, built-in connectivity, and the sample applications provided by the Kinoma team. Developing an application in Kinoma Studio is straightforward, and it is easy to port a completed application to the platform. Programmers familiar with JavaScript will find it easy to write new applications, while hardware programmers will like the flexibility of the programmable pins. The examples could be expanded for network and protocol applications, but with little effort we were testing our first applications.


We defined and created a simple car sharing application for friends, family, and neighbors using multiple hardware platforms. We used Kinoma Create to build our in-car unit quickly, thanks to its built-in network functionality and its display and touch interface. Through this prototyping, we saw the potential of Kinoma Create for web developers to build IoT applications rapidly.


Thank you, Kinoma team, for providing the pre-release version! Kinoma Create is a nice product for JavaScript programmers taking a step into the IoT world, and we believe it will be successful in the IoT industry.

Jul 21, 2014

Stepping into a pattern for handling asynchronous operations in JavaScript called "JavaScript Promises"

I have been working with Parse and Automatic for my prototyping. I had a problem communicating between Parse and the Automatic web service over HTTP.

Here is my question on Stack Overflow.

Parse.Cloud.httpRequest call with HTTP request header
The problem was that I sent an HTTP request but none of the callback functions were called.

Based on the suggested solution, I found the following two mistakes in my code:

  • A wrong callback function
  • Not responding to the response from the server

Thanks to the solution, I learned how to write callback functions for asynchronous operations in JavaScript.

This is a pattern for handling asynchronous operations in JavaScript that you can see in libraries like AngularJS. It is called "JavaScript Promises".

This was my first step into asynchronous operations in JavaScript. If I find something interesting, I will write about it on this blog.

Mar 24, 2014

Completed "Introduction to Databases" of Stanford Univeristy with "distinction"

I completed "Introduction to Databeses" of Stanford Univeristy with "Distinction", Yeah! This entry explains how this online course looks like.

Why did I participate in this course?

I was an embedded software engineer and seldom used databases. However, last October I was transferred to a business intelligence company as a temporary trainee, and I now often use several relational databases. So I came to think I needed to study databases again, and at that time I found this course. Studying with only a book is a little hard, so I took the course.

Topics that this course covers

As I expected, this course focuses on relational database theory, including SQL and table design, but it also covers non-relational data such as XML and JSON.
  • Relational Database Overview
  • XML Data
  • JSON Data
  • Relational Algebra
  • SQL
  • Relational Design Theory
  • Querying XML
  • Unified Modeling Language
  • Indexes
  • Transactions
  • Constraints and Triggers
  • Views
  • Authorization
  • Recursion
  • Online Analytical Processing
  • NoSQL Systems

How to participate in this course

In this course, all the material is public from the beginning, so you can start from any topic. However, each assignment has a different deadline, so you should first pick the topic whose deadline is closest.

For each topic, you repeat the following procedure:
  • Watch some short videos that explain the topic
  • Answer simple quizzes (no score here)
  • Try the basic problems
    • Multiple-choice questions (low score)
    • Programming work (middle score)
  • Try the challenge problems
    • Advanced programming work (high score)


Each question is scored automatically when you answer it, and you can see what percentage of the total score you have earned so far. If you solve all the basic problems, you get 50% and can complete the course. If you also try the challenge problems and reach 75%, you get a "Statement of Accomplishment with Distinction".

Oh, everything is in English!

This is a really hard point for Japanese people. I used to work in an office where no one spoke English, but now all my coworkers speak English, so this is not a problem for me anymore.

However, I had difficulty when the teacher talked really fast about a topic I was not familiar with. Then I had to watch the videos again and again, tried to understand by solving some problems, or sometimes researched Japanese materials related to the topic.

How hard was it?

It depends on your skills and knowledge. It seems to take about 3 hours every week to complete the course, and 7 to 8 hours every week to complete it with distinction. I sometimes had to study all day on Saturday or Sunday. If you work full time, you probably cannot take more than one course at a time.

Important points for taking an online course

If all the material is public from the beginning, take your private schedule into account and make a concrete plan for the course. You might struggle with difficult problems, so leave yourself some time to spare.

You should not postpone the challenge problems even though their deadlines are later, because you might have forgotten the details by the time you come back to them :)


Have fun with an online course!!

Mar 23, 2014

"Fluentd: Open Source Log Management" by Sadayuki Furuhashi, Treasure Data at SF Metric Meetup

Sadayuki Furuhashi talks about Fluentd: Open Source Log Management from Librato on Vimeo.

This is a participation report for the SF Metric Meetup. This meetup is mainly about data analysis for server administration. Log management is really far from my current job, but I have been interested in Fluentd since I learned that it is a good approach to making data collection easy, and this was a good chance to hear about it from the founder. So I participated.

About SF Metric Meetup

In the 21st Century successful teams are data-driven. This monthly meetup provides a forum for monitoring geeks to gather and trade new ideas, data, and war stories. If you love data, this meetup is for you! (cited from

About the speaker

Sadayuki Furuhashi is an architect and founder of Treasure Data. He is also the founder of several open source projects, including Fluentd.

About fluentd

Fluentd is an extensible log management tool written in Ruby. Nowadays, major web tech companies produce millions of lines of logs every day and analyze server operational status with those logs. Fluentd makes this collect/transform/analyze process easier.
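To show roughly what that looks like, here is a minimal Fluentd configuration sketch (the tag pattern and port are illustrative): it accepts records over Fluentd's forward protocol and writes matching events to stdout.

```
# Input: listen for records sent by other fluentd instances or loggers
<source>
  type forward
  port 24224
</source>

# Output: print any event whose tag starts with "app." to stdout
<match app.**>
  type stdout
</match>
```

Real deployments swap the stdout output for plugins that ship logs to files, S3, or analysis services.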

What they tried to do in this meetup

This time he came to the meetup with his coworkers from Treasure Data, and I talked with some of them about their business. They are trying to make their open source projects, including Fluentd and MessagePack, popular through meetups before providing a paid service. I realized that it is important to have this kind of networking opportunity and to talk about your own products with potential customers. I will keep paying attention to their products and services.

Remote debugging for Tomcat and IntelliJ on Mac OS X

IntelliJ supports remote debugging for Tomcat, which is really convenient. This entry shows how to set up remote debugging with Tomcat and IntelliJ.

My dev environment

  • Mac OS X 10.9.2
  • Java 1.8.0
  • Tomcat 8.0.3

Set up

  1. Create a new file called <Tomcat install directory>/bin/ (this file is read automatically by at startup).
  2. Add the following line to address=8080 is the port the debugger will connect to; adjust it for your setup.
    • CATALINA_OPTS="-agentlib:jdwp=transport=dt_socket,address=8080,server=y,suspend=n"
  3. Start Tomcat normally using

  1. Select "Run -> Edit Configuration" then "Run/Configuration" screen is open.
  2. Specify "Name" like "Tomcat".
  3. Specify "Port", which is the same number as Tomcat's port numer


  1. To start debugging in IntelliJ, just select "Run -> Debug 'Tomcat'".
  2. If IntelliJ shows a message like the one below, you are connected.
    • Connected to the target VM, address: 'localhost:8080', transport: 'socket'
  3. Now you can debug. Please refer to the IntelliJ documentation for more details about debugging.

Feb 18, 2014

Apache Lucene: Then and Now - Java User Group meetup at Twitter HQ

This time, Doug Cutting (@cutting) talked about the history of Apache Lucene and how it is used to implement both Internet search engines and local, single-site search at major tech companies like LinkedIn and Twitter. He also mentioned that the project is integrated with Hadoop and still evolving.

Doug Cutting (@cutting) is the founder of numerous successful open source projects, including Lucene, Nutch, Avro, and Hadoop. Doug joined Cloudera in 2009 from Yahoo!, where he was a key member of the team that built and deployed a production Hadoop storage and analysis cluster for mission-critical business analytics. Doug holds a Bachelor’s degree from Stanford University and sits on the Board of the Apache Software Foundation. (cited from

His talk is based on the following blog entries.

As you can see, his company (Cloudera) is involved in the implementation of a new open source project, Blur.
Blur is an Apache Incubator project that provides distributed search functionality on top of Apache Hadoop, Apache Lucene, Apache ZooKeeper, and Apache Thrift. When I started building Blur three years ago, there wasn’t a search solution that had a solid integration with the Hadoop ecosystem. Our initial needs were to be able to index our data using MapReduce, store indexes in HDFS, and serve those indexes from clusters of commodity servers while remaining fault tolerant. Blur was built specifically for Hadoop — taking scalability, redundancy, and performance into consideration from the very start — while leveraging all the great features that already exist in the Hadoop stack. (cited from
Cloudera is also providing a better way for non-programming users to interact with Hadoop data.
In the context of our platform, CDH (Cloudera’s Distribution including Apache Hadoop), Cloudera Search is another framework much like MapReduce and Cloudera Impala. It’s another way for users to interact with Hadoop data and for developers to build Hadoop applications. Each framework in our platform is designed to cater to different families of applications and users (cited from
See the Cloudera blog for more details.

It seems there are Java user group meetups in SF once or twice a month. I am planning to keep joining them.

Jan 19, 2014

JSON Lint - validation tool for JSON

There are several validation tools for JSON. These tools help you write and test JSON data and schemas.

  • JSONLint
    • This tool validates whether JSON data is valid or not.
    • It does not deal with JSON Schema.
  • JSON Schema Lint
    • This tool validates that
      • the JSON schema is valid
      • the JSON data is valid
      • the JSON data is valid against the JSON schema
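The data-validity part of what these tools check is essentially a parse attempt. A minimal local equivalent in JavaScript (the function name is mine) looks like this:

```javascript
// Returns true if text is syntactically valid JSON, false otherwise --
// the same check JSONLint performs, minus its friendly error messages.
function isValidJSON(text) {
  try {
    JSON.parse(text);
    return true;
  } catch (e) {
    return false;
  }
}
```

For instance, '{"a": 1}' is valid, while '{a: 1}' is rejected because JSON requires quoted keys.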

Jan 12, 2014

INSERT VS COPY: The fastest way to do a bulk insert into PostgreSQL

It takes time to populate a database with a lot of data using many "INSERT" statements. The official PostgreSQL documentation covers this topic.

PostgreSQL 9.3.2 Documentation Chapter 14. Performance Tips 14.4. Populating a Database

The chapter shows many options, but I will focus on the first two.

Disable autocommit

PostgreSQL commits each statement automatically. That means that if you run INSERT statements, each one runs like this:

  1. Open a transaction
  2. Insert data
  3. Close the transaction

This is redundant. It becomes faster if you disable autocommit by wrapping all the statements in a single transaction, like this:

BEGIN;                        -- the beginning of the transaction
INSERT INTO items VALUES (1); -- your INSERT statements go here
INSERT INTO items VALUES (2);
COMMIT;                       -- the end of the transaction (END; also works)

PostgreSQL commits the statements between BEGIN and COMMIT all at once, instead of once per statement.


Use COPY

COPY is also used to populate data; the difference is that COPY loads all the rows with a single statement, so PostgreSQL commits only once, after all the data has been loaded.
Please refer to the documentation for how to use the COPY statement.
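For example, loading rows from a CSV file in one COPY command looks like this (the table, columns, and file path are illustrative):

```sql
-- One command, one transaction: either all rows load or none do.
COPY measurements (city, temp_lo, temp_hi)
    FROM '/tmp/measurements.csv'
    WITH (FORMAT csv, HEADER true);
```

Because there is a single commit at the end, COPY avoids the per-statement overhead that makes many small INSERTs slow.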

Auto increment primary key in PostgreSQL

If you want the primary key to be incremented automatically in Postgres, "serial" is a good option.

Note that you do not need to specify the primary key when inserting data into your table.
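A minimal sketch (the table and column names are illustrative):

```sql
-- "serial" creates a backing sequence and uses it as the column default,
-- so the id does not need to be supplied on INSERT.
CREATE TABLE friends (
    id   serial PRIMARY KEY,
    name text NOT NULL
);

INSERT INTO friends (name) VALUES ('Rama'), ('Kishore');
-- ids are assigned automatically from the sequence
```

Behind the scenes this is equivalent to creating a sequence and setting the column default to nextval() on it.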