Technically Correct

GSoC 2018 - Batteries Included!
22 Jul 2018

Much time has passed and much code has been written since my last update. Adaptive Streaming (a better name TBD) for Ardupilot is nearly complete and brings a whole slew of features useful for streaming video from cameras on robots to laptops, phones, and tablets:

  • Automatic quality selection based on bandwidth and packet loss estimates
  • Options to record the live-streamed video feed to the companion computer (experimental!)
  • Fine tuned control over resolution and framerates
  • Multiple camera support over RTSP
  • Hardware-accelerated H.264 encoding for supported cameras and GPUs
  • Camera settings configurable through the APWeb GUI

Phew!

The configuration needed to get everything working is minimal once the dependencies have been installed. This is in no small part thanks to the GStreamer API, which takes care of several low-level complexities of streaming live video over the air.

Streaming video from aerial robots is probably the most demanding use case for Adaptive Streaming, as the WiFi link becomes very flaky at high speeds and long distances. I've optimised the project around my testing with quadcopters, so those benefits carry over to streaming from other robots as well.

Algorithm

I've used a simplification of TCP's congestion control algorithm for Adaptive Streaming. I had looked at other interesting approaches, including estimating receiver buffer occupancy, but they didn't yield significantly better results. TCP's congestion control relies on an ACK for each successfully delivered segment and steadily increases the sender's rate until it reaches a dynamically adjusted threshold, backing off when loss is detected.

There are two crucial differences for Adaptive Streaming: 1) we stream over UDP for lower overhead (so no automatic TCP ACKs here!), and 2) live H.264 video is fairly loss tolerant, so it's okay to drop some packets instead of retransmitting them.

Video is carried in RTP packets, and Quality of Service (QoS) reports are sent back over RTCP. These QoS reports give us all sorts of useful information, but we're mostly interested in the number of packets lost between RTCP transmissions.

On receiving an RTCP packet indicating any packet loss, we immediately shift to a Congested State (better name pending), which significantly reduces the rate at which the streaming bandwidth is increased on each lossless RTCP packet. The H.264 encoding bitrate is capped at 1000 kbps in this state.

Once we've received five consecutive lossless RTCP packets, we shift to a Steady State, which allows encoding at up to 6000 kbps. In this state we also increase the encoding bitrate faster than we do in the Congested State. A nifty side effect of dynamically changing H.264 bitrates is that we can also seamlessly switch the streamed resolution according to the available bandwidth, just like YouTube does with DASH!

This algorithm is fairly simple and wasn't too difficult to implement once I had figured out all the GStreamer plumbing for extracting packets from buffers. With more testing, I would like to add long-term adaptation to the bitrate selection algorithm.
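To make the two-state logic above concrete, here is a minimal C++ sketch of a bitrate controller driven by RTCP reports. The 1000/6000 kbps caps and the five-report threshold come from the description above; the class name, ramp-up step sizes, and starting bitrate are illustrative assumptions rather than the project's actual code.

// A minimal sketch of the two-state bitrate controller described above.
// Step sizes and the starting bitrate are assumptions, not project values.
#include <algorithm>
#include <cstdint>

class BitrateController {
public:
    // Called once per incoming RTCP report; returns the new bitrate in kbps.
    uint32_t onRtcpReport(uint32_t packetsLost) {
        if (packetsLost > 0) {
            state_ = State::Congested;          // back off immediately on any loss
            losslessReports_ = 0;
            bitrateKbps_ = std::min(bitrateKbps_, kCongestedCapKbps);
        } else {
            if (state_ == State::Congested && ++losslessReports_ >= 5)
                state_ = State::Steady;         // five clean reports in a row
            // Ramp up slowly while congested, faster once steady.
            uint32_t step = (state_ == State::Congested) ? 50 : 250;
            uint32_t cap  = (state_ == State::Congested) ? kCongestedCapKbps
                                                         : kSteadyCapKbps;
            bitrateKbps_ = std::min(bitrateKbps_ + step, cap);
        }
        return bitrateKbps_;
    }

private:
    enum class State { Congested, Steady };
    static constexpr uint32_t kCongestedCapKbps = 1000;
    static constexpr uint32_t kSteadyCapKbps    = 6000;
    State state_ = State::Congested;
    uint32_t losslessReports_ = 0;
    uint32_t bitrateKbps_ = 500;
};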

H.264 Encoding

This is where we get into the complicated and wonderful world of video compression algorithms.

Compression algorithms are used in all kinds of media, such as JPEG for still images and MP3 for audio. H.264 is one of several compression algorithms available for video. It takes advantage of the fact that much of the information between video frames is redundant: instead of saving 30 full frames for one second of 30fps video, it saves one complete frame (known as a keyframe or I-frame) and, for the subsequent 29 frames, stores only their differences with respect to that keyframe. H.264 also applies prediction logic to future frames to further reduce the size.

This is by no means a complete explanation of how H.264 works; for further reading, I suggest checking out Sid Bala's explanation on the topic.

The legendary Tom Scott also has a fun video explaining how H.264 is adversely affected by snow and confetti!

The frequency of capturing keyframes can be set by changing the encoder parameters. In the context of live video streaming over unstable links such as WiFi, this is very important as packet loss can cause keyframes to be dropped. Dropped keyframes severely impact the quality of the video until a new keyframe is received. This is because all the frames transmitted after the keyframe only store the differences with respect to the keyframe and do not actually store a full picture on their own.

Increasing the keyframe interval means we send a large, full frame less frequently, but it also means we would suffer terrible video quality for an extended period after losing a keyframe. On the other hand, shorter keyframe intervals waste bandwidth.

I found that a keyframe interval of 10 frames worked much better than the default of 60 frames without increasing bandwidth usage too significantly.
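As a rough illustration, the keyframe interval and bitrate can be set on GStreamer's software x264enc element as in the sketch below ("key-int-max" and "bitrate" are x264enc property names; hardware encoders expose different knobs, which is why the project probes for them separately). This is only a sketch, not the project's actual pipeline code.

// Illustrative only: configuring a 10-frame keyframe interval on x264enc.
#include <gst/gst.h>

int main(int argc, char *argv[]) {
    gst_init(&argc, &argv);

    // Software H.264 encoder; hardware elements (e.g. on the Raspberry Pi)
    // use different property names.
    GstElement *encoder = gst_element_factory_make("x264enc", "encoder");
    if (!encoder)
        return 1;

    g_object_set(encoder,
                 "key-int-max", 10,   // keyframe every 10 frames
                 "bitrate", 1000,     // kbps, matching the Congested State cap
                 NULL);

    gst_object_unref(encoder);
    return 0;
}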

Lastly, H.264 encoding is very computationally expensive. Software implementations such as x264enc are well supported, with several configurable parameters, but have prohibitively high CPU requirements, making it all but impossible to livestream H.264-encoded video from low-power embedded systems. Fortunately, the Raspberry Pi's Broadcom BCM2837 SoC has a dedicated H.264 hardware encoding pipeline for the Raspberry Pi camera, which drastically reduces the CPU load of high-definition H.264 encoding. Some webcams, such as the Logitech C920 and higher, have onboard H.264 hardware encoding thanks to special ASICs dedicated to this purpose.

Adaptive Streaming probes for the type of encoding supported by the webcam and whether it has the IOCTLs required for changing the encoding parameters on the fly.
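For a flavour of what such probing can look like, here is a hedged sketch that checks whether a camera exposes H.264 as a native pixel format by enumerating formats with the V4L2 VIDIOC_ENUM_FMT ioctl. The device path is an example and error handling is minimal; this is not the project's actual detection code.

// Probe a V4L2 device for native H.264 output by enumerating pixel formats.
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>
#include <cstdio>
#include <cstring>

bool supportsH264(const char *device) {
    int fd = open(device, O_RDWR);
    if (fd < 0)
        return false;
    v4l2_fmtdesc fmt;
    std::memset(&fmt, 0, sizeof(fmt));
    fmt.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    bool found = false;
    while (ioctl(fd, VIDIOC_ENUM_FMT, &fmt) == 0) {
        if (fmt.pixelformat == V4L2_PIX_FMT_H264)
            found = true;
        fmt.index++;   // ask for the next format descriptor
    }
    close(fd);
    return found;
}

int main() {
    std::printf("H.264 capable: %s\n",
                supportsH264("/dev/video0") ? "yes" : "no");
    return 0;
}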

H.264 has been superseded by the more efficient H.265 encoding algorithm, but the CPU requirements for H.265 are even higher and it doesn't enjoy the same hardware support as H.264 does for the time being.

GUI

The project will soon be integrated with the APWeb project for configuring companion computers. Adaptive Streaming works by running an RTSP streaming server as a daemon process. The APWeb process connects to this daemon over a local socket to populate the list of cameras, RTSP mount points, and the available resolutions of each camera.
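For reference, a minimal RTSP server built on GStreamer's gst-rtsp-server looks roughly like the sketch below. The pipeline string, mount path, and default port are illustrative assumptions; the actual daemon builds its mount points dynamically from the probed cameras.

// A minimal gst-rtsp-server sketch exposing one camera mount point.
#include <gst/gst.h>
#include <gst/rtsp-server/rtsp-server.h>

int main(int argc, char *argv[]) {
    gst_init(&argc, &argv);
    GMainLoop *loop = g_main_loop_new(NULL, FALSE);

    GstRTSPServer *server = gst_rtsp_server_new();
    GstRTSPMountPoints *mounts = gst_rtsp_server_get_mount_points(server);

    // Example pipeline: assumes a camera that outputs H.264 natively.
    GstRTSPMediaFactory *factory = gst_rtsp_media_factory_new();
    gst_rtsp_media_factory_set_launch(factory,
        "( v4l2src device=/dev/video0 ! video/x-h264,width=1280,height=720 "
        "! h264parse ! rtph264pay name=pay0 pt=96 )");
    gst_rtsp_media_factory_set_shared(factory, TRUE);

    gst_rtsp_mount_points_add_factory(mounts, "/camera0", factory);
    g_object_unref(mounts);

    gst_rtsp_server_attach(server, NULL);
    g_print("Stream ready at rtsp://127.0.0.1:8554/camera0\n");
    g_main_loop_run(loop);
    return 0;
}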

GSoC 2018 - Batteries Included!

The GUI is open for improvement and I would love feedback on how to make it easier to use!

Once the RTSP mount points are generated, one can livestream the video feed by entering the camera's RTSP URL into VLC. This works on all devices supporting VLC. However, VLC adds two seconds of latency to the livestream to reduce jitter. I wasn't able to find a way to configure this in VLC, so an alternative way to get a lower-latency stream is to use the following gst-launch command in a terminal:

gst-launch-1.0 playbin uri=<RTSP Mount Point> latency=100

Within the scope of the GSoC timeline, I'm looking to wind down the project by working on documentation, testing, and reducing the cruft in the codebase. I'm looking forward to integrating this with the companion repository soon!

Links to the code

https://github.com/shortstheory/adaptive-streaming

https://github.com/shortstheory/APWeb

Title image from OscarLiang

GSoC 2018 - New Beginnings
05 Jun 2018

I'm really excited to say that I'll be working with Ardupilot for the better part of the next two months! Although this is the second time I'm making a foray into Open Source Development, the project at hand this time is quite different from what I had worked on in my first GSoC project.

Ardupilot is open-source autopilot software for several types of semi-autonomous robotic vehicles, including multicopters, fixed-wing aircraft, and even marine vehicles such as boats and submarines. As the name suggests, Ardupilot was formerly based on the Arduino platform with the APM2.x flight controllers, which boasted an ATMega2560 processor. Since then, Ardupilot has moved on to officially supporting much more advanced boards with significantly better processors and more robust hardware stacks. That said, these flight controllers contain application-specific embedded hardware which is unsuitable for performing intensive tasks such as real-time object detection or video processing.

GSoC 2018 - New Beginnings

CC Setup with a Flight Computer

APSync is a recent Ardupilot project which aims to ameliorate the limited processing capability of the flight controllers by augmenting them with so-called companion computers (CCs). As of writing, APSync officially supports the Raspberry Pi 3B(+) and the NVidia Jetson line of embedded systems. One of the more popular use cases for APSync is to enable out-of-the-box live video streaming from a vehicle to a laptop. This works by using the CC's onboard WiFi chip as a hotspot and streaming the video over it with GStreamer. However, the current implementation has some shortcomings:

  • Only one video output can be unicasted from the vehicle
  • The livestreamed video progressively deteriorates as the WiFi link between the laptop and the CC becomes weaker

This is where my GSoC project comes in: tackling the above issues to provide a good streaming experience from an Ardupilot vehicle. The former problem entails rewriting the video streaming code to allow multiple video streams to be sent at the same time. The latter is quite a bit more interesting, as it involves several computer networking and hardware engineering issues. "Solve" is a subjective term here, as there isn't any way to significantly boost the range of the CC's WiFi hotspot without some messy hardware modifications.

What can be done is to degrade the video quality as gracefully as possible. It's much better to have a smooth, low-quality video stream than a high-quality one laden with jitter and latency. At the same time, streaming only low-quality video when the WiFi link and the CC's processor allow for better quality is wasteful. To "solve" this, we need some kind of dynamically adaptive streaming mechanism which changes the quality of the streamed video according to the strength of the WiFi connection.

My first thought was to use something along the lines of YouTube's DASH (Dynamic Adaptive Streaming over HTTP), which automatically scales the video quality according to the available bandwidth. However, DASH works in a fundamentally different way from what is required for adaptive livestreaming. DASH relies on having the same video pre-encoded in several different resolutions and bitrates. The server estimates the bandwidth of its connection to the client and then chooses one of the pre-encoded video chunks to send, typically the one it guesses can deliver the best possible quality without buffering.

YouTube's powerful servers have no trouble encoding a video several times, but this approach is far too intensive to be carried out on a rather anemic Raspberry Pi. Furthermore, DASH relies on QoS (short for Quality of Service, which covers parameters like bitrate, jitter, and packet loss) feedback derived from TCP ACK messages. This causes more issues, as we need to stream the video down using RTP over UDP instead of TCP. The main draw of UDP for livestreaming is that it performs better than TCP on low-bandwidth connections due to its smaller overhead. Unlike TCP, which guarantees message delivery through ACKs, UDP is purely best effort and has no concept of ACKs at the transport layer. This means we need some kind of ACK mechanism at the application layer to measure the QoS.

Enter RTCP. This is the official sibling protocol to RTP which, among other things, reports packet loss, the highest sequence number received, and jitter. In other words, it's everything but the kitchen sink for QoS reporting for multimedia over UDP! What's more, GStreamer natively integrates RTCP report handling. This is the approach I'll be using for getting estimated bandwidth reports from each receiver.
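As a rough sketch of what reading those reports looks like with GStreamer's RTCP helpers, the function below walks an RTCP buffer and prints the packet-loss and jitter fields from any report blocks it finds. How the buffer is obtained (for example, from a pad probe on an rtpbin RTCP pad) is omitted and depends on the pipeline; this is illustrative, not the project's actual code.

// Inspect receiver report blocks in an RTCP buffer.
#include <gst/gst.h>
#include <gst/rtp/gstrtcpbuffer.h>

static void inspect_rtcp(GstBuffer *buffer) {
    GstRTCPBuffer rtcp = GST_RTCP_BUFFER_INIT;
    if (!gst_rtcp_buffer_map(buffer, GST_MAP_READ, &rtcp))
        return;

    GstRTCPPacket packet;
    gboolean more = gst_rtcp_buffer_get_first_packet(&rtcp, &packet);
    while (more) {
        GstRTCPType type = gst_rtcp_packet_get_type(&packet);
        if (type == GST_RTCP_TYPE_RR || type == GST_RTCP_TYPE_SR) {
            guint count = gst_rtcp_packet_get_rb_count(&packet);
            for (guint i = 0; i < count; i++) {
                guint32 ssrc, exthighestseq, jitter, lsr, dlsr;
                guint8 fractionlost;
                gint32 packetslost;
                gst_rtcp_packet_get_rb(&packet, i, &ssrc, &fractionlost,
                                       &packetslost, &exthighestseq,
                                       &jitter, &lsr, &dlsr);
                g_print("ssrc %u: %d packets lost, jitter %u\n",
                        ssrc, packetslost, jitter);
            }
        }
        more = gst_rtcp_packet_move_to_next(&packet);
    }
    gst_rtcp_buffer_unmap(&rtcp);
}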

I'll be sharing my experiences with the H.264 video encoders and hardware in my next post.

Be The Opposite
28 Dec 2017

"The Opposite" (S05E21) was a watershed episode for Seinfeld. Not only did the series' direction change hands from Tom Cherones to Andy Ackerman after it, but the plots also became more focussed on the personality flaws in the characters rather than the bizarre situations they got themselves into. "The Opposite" was probably the only episode in which these two tropes converged, making for something quite unlike any other Seinfeld episode.

But that's not what I'm going to write about in this blog post. "The Opposite" has personal meaning for other reasons. As a short summary of the episode's A-plot: George, a stout, balding, middle-aged man with no prospects in either his career or with the opposite sex, decides to act contrary to every instinct he's ever had. Within less than a day of doing so, he ends up with a new partner, a job with the New York Yankees, and finally the means to move out of his much beleaguered parents' house.

Though contrived for comic effect, this episode holds something very profound under its veil of light-hearted comedy. It goes to show that even when it's painfully obvious that you're doing something wrong, it can be impossible to get out of the self-induced loop of listening to your instincts.

Jerry also drives this home with an understatedly profound remark in this episode, "If every instinct you have is wrong...then the opposite would have to be right!".

Perhaps it's a primal urge to follow your instincts; after all, your instincts are a product of the past experiences which have kept you alive. On the other hand, I feel a lot of it also has to do with getting comfortable enough to challenge your own perceptions. When you've been doing something wrong unconsciously, it becomes much harder to accept later on that you were doing it wrong. Your brain accustoms itself to responding to a stimulus in a certain way and, left unchecked, this becomes a part of your personality. This is why I think shifting out of self-harmful behavior is so hard.

I've been a victim of falling into the trap of my own instincts ever since I joined college. Through a lot of time spent brooding and many moments of soul-searching in my daily diary entries, I knew exactly what I was unhappy with in life and what I had to do to fix it. Most of my problems could be traced to a lack of self-esteem, but I was more willing to wallow in pity than to do anything about it. It was only a few months ago that I decided to improve things by increasing physical activity and social exposure. There were many times I felt a familiar nagging doubt, but this too subsided as soon as I saw that things can actually get better by not listening to yourself.

There is a poignant quote I've saved from Anne Lamott's Bird by Bird about not listening to the radio station in your head - the radio station KFKD, or K-Fucked:

"If you are not careful, station KFKD will play in your head twenty-four hours a day, nonstop, in stereo. Out of the right speaker in your inner ear will come the endless stream of self-aggrandizement, the recitation of one’s specialness, of how much more open and gifted and brilliant and knowing and misunderstood and humble one is. Out of the left speaker will be the rap songs of self-loathing, the lists of all the things one doesn’t do well, of all the mistakes one has made today and over an entire lifetime, the doubt, the assertion that everything that one touches turns to shit, that one doesn’t do relationships well, that one is in every way a fraud, incapable of selfless love, that one has no talent or insight, and on and on and on. You might as well have heavy-metal music piped in through headphones while you’re trying to get your work done."

Maybe the best way out lies in our mistakes, or rather, the opposite of them.


ThinkPad 13: Thoughts after 18+ months
10 Dec 2017

It's been over a year since I switched laptops and I want to share my thoughts on my current one - the Lenovo ThinkPad 13 Gen 1. Before getting this laptop, I had been using a Lenovo IdeaPad G580 which I had bought back in 2013. While that laptop suited my needs at the time, the unwieldy 15.6" display, sub-standard 2-hour battery life, and 2.8kg weight just weren't cutting it for college. I decided to replace it with a smaller laptop with good battery life. I was pretty satisfied with the performance of the IdeaPad, so anything as good as or better than its i5-3230M was enough for me.

The first laptop which I looked at was the Asus UX305UA which ticked most of my boxes. However, some reviews had stated that its build quality was questionable so I decided to hold out for a better laptop. I later stumbled upon the Lenovo ThinkPad T460(s) which looked perfect for my needs but was just a touch out of my budget. While I was considering this, news began to surface about a 13.3" ThinkPad releasing soon, later to be known as the ThinkPad 13.

It wasn't an easy decision between the T460s and the 13. The T460s was definitely a more charming laptop, but there were just too many drawbacks for it to be a serious contender. I found several negative reviews regarding the battery life and the 1080p display lottery. The ThinkPad 13 wasn't without issues of its own, but there wasn't a single review that hadn't sung praises of its keyboard, battery life, and overall value proposition. Plus it came with a USB Type-C port, dual upgradeable RAM slots, and was at least $200 cheaper than a comparably specced T460s.

I got the silver version of the ThinkPad 13. Die-hard ThinkPad fans may condemn a non-black ThinkPad, but I think it looks great. Plus, the silver version came with the free 1080p IPS display upgrade. It also has a better choice of materials with an aluminium lid in place of the ABS plastic lid found on the black version of the ThinkPad 13.

I configured my laptop with the following specs for $720 (₹46500):

  • Intel Core i5-6300U Processor (3MB Cache, up to 3.00GHz)
  • Windows 10 Home 64 English
  • 8GB DDR4-2133 SODIMM
  • Intel HD Graphics 520
  • KYB SR ENG
  • 3cell 42Wh
  • UltraNav (TrackPoint and ClickPad) without Fingerprint Reader
  • Software TPM Enabled
  • 720p HD Camera
  • 256 GB Solid State Drive, SATA3 OPAL2.0 - Capable
  • 45W AC Adapter - US(2pin)
  • Intel Dual Band Wireless-AC(2x2) 8260, Bluetooth Version 4.1 vPro
  • 13.3" FHD (1920 x 1080) IPS Anti-Glare, 220 nits

I don't think the Windows 10 version is available in India presently. I believe a watered down Chromebook version is available on Amazon.

Since I was saving a fair bit of money by not buying the T460s, I maxed out the CPU with the i5-6300U though I don't ever see myself using vPro. The other upgrades were the RAM and SSD to 8GB and 256GB respectively.

The ThinkPad 13 Gen 2 and Gen 1 share the same chassis, so most of the parts of this review regarding build quality, I/O, and keyboard apply for the Gen 2 as well.

Build Quality

Pretty decent. At 1.4kg it's not the smallest or lightest 13.3" laptop available, but it's small enough to not matter. Given the number of I/O ports onboard, it's a very acceptable compromise.

You can immediately tell it isn't made from the fancy composites of the X1 Carbon or the magnesium alloy of the T/X/P series ThinkPads. Yet the plastic frame of the laptop is good quality and easily holds its own. Compared to other ThinkPads I've tried, it feels considerably better built than the E470 and the X1 Yoga and is on par with, if not better than, the T440p. The P50 is more solid, though it must be kept in mind that the P50 is a much thicker laptop than the 13. The 13's build quality is on another level compared to the Lenovo IdeaPad G580 I had been using before this. Even after a year of heavy use, creaking on twisting the frame is non-existent and nothing feels like it's on the verge of falling apart like it did with the IdeaPad.

The hinges are nice and tight and the lid gives a satisfying thump on closing. Speaking of which, the aluminium lid is solid and protects the display nicely. However, it does give the laptop a heterogeneous look and feel with the rest of the frame being made of plastic.

One aside here, the 13 does have unique touches not found on higher end models. For example, the "i's" in the ThinkPad logo in both the lid and the palm-rest glow and the ThinkPad logo has a really nice brushed finish to it on the lid.

Opening the laptop for servicing and upgrading is not a good experience. The bottom cover is held by 10 captive screws and is incredibly fiddly to pry open. I ended up scratching off some of its paint the two times I tried opening the base. That said, upgradability is much better than what you can find on comparable ultrabooks. Both DDR4 RAM slots, the M.2 SATA SSD (no NVMe on the Gen 1, unfortunately), the Wi-Fi card, and the battery can be easily replaced.

The touchpad has its share of mechanical issues too. It is a clickpad design, similar to that found on MacBooks. Sometimes dust and debris would find their way into the gap between the base of the laptop and the touchpad, causing the right button to physically stop clicking. It's a simple problem to fix - all it needs is a swab with a piece of thick paper under the touchpad. Nevertheless, I expected a more robust design. At least this problem isn't exclusive to the 13, as it seems to happen to the T450s as well.

Lastly, the plastic construction does have some drawbacks when it comes to scratches. A small section of the lid joining it to the hinges is plastic and is much more scratched up than the aluminium part. After a year, the base unit also had some paint rubbing off at the edges. I would suggest buying a carrying sleeve to prevent this from happening. The silver palmrests have also taken on a slightly darker shade than the rest of the laptop.

Fortunately, all the damage is only cosmetic and the laptop has held up pretty well after being jostled around campus for more than a year.

I/O

Connectivity is a strong point for the ThinkPad 13. It's impressive to see how many ports there are on a laptop of this size! You get 3 USB-A ports running at USB 3.0 speeds, a full-size HDMI port, an SD card reader into which the card goes all the way, a headphone jack, and a USB-C port supporting charging and 2160p/60fps video out. There's also Lenovo's proprietary OneLink+ docking solution, though I haven't tried this out yet.

On the subject of the USB-C port, a minor nitpick is that it's USB 3.1 Gen1 and doesn't support Thunderbolt. I still much prefer it to the option of having a full size Ethernet port as on the T460s. The USB-C port is so much more flexible.

I also discovered that all the USB ports support fast (faster?) charging. My phone charges as fast from the laptop's USB port as it does from a 5V/1.5A charger.

Wireless connectivity is serviced by the Intel 8260 Wi-Fi card, the same one used in all the top-spec 2016 ThinkPads. Speed and connection quality are good and it supports 802.11ac.

Display

You can't really go wrong with a 1080p IPS display on a 13.3" laptop.

The LG 1080p IPS display on ThinkPad 13 comes with a matte coating. Even for an IPS display, the viewing angles are excellent. 13.3" and 1080p is a sweet spot for sharpness and UIs scale fine in both KDE Neon and Windows 10. Thanks to the matte coating, the screen brightness is perfectly sufficient 90% of the time when I use it. I've hardly run into any instances where I felt the need to crank the screen brightness any higher.

Colour space coverage is reportedly below average for an IPS screen and it really shows when put side by side with a MacBook Pro or a Dell Inspiron 7460. That said, for coding and web-browsing, it's hard to ask for more.

Battery Life

The ThinkPad 13 is powered by a 42 Wh battery. It has degraded slightly from when I got the laptop - initially the battery was able to hold 46 Wh of charge, but now it usually holds 40-41.5 Wh when full. I usually have several Firefox tabs open alongside IDEs like VS Code, Atom, or Android Studio on KDE Neon. The laptop is very energy efficient, and power draw rarely climbs above 6.5W at 30% brightness after enabling TLP. My old laptop used to idle at 17W, and when people claim that Intel has gimped its CPUs since the Ivy Bridge era, I think they're disregarding the huge leaps in power efficiency.

In more than a year of owning it, I haven't once drained the battery. I've even gotten through 9-5 days at a summer internship solely off battery power. I typically get around 6-8 hours of battery life depending on the usage. I can't speak of how good or bad battery life is on Windows 10 as I've never used it for anything other than gaming, which brings me to my next section.

Performance

Subjectively, it's quite snappy. A lot of the performance improvement compared to my previous laptop can be chalked up to the SSD. The one in the ThinkPad 13 I got is the LiteOn L8H-256V2G, and it gives about 540MB/s in sequential reads and 300MB/s in sequential writes. It's common knowledge how much faster SSDs are than HDDs, but I hadn't expected the difference in swapping performance to be as enormous as it is. I can hardly tell when Linux starts filling up the swap space on the SSD as the degradation in performance is minimal.

The i5-6300U is sufficient for my needs, and thermals in the ThinkPad 13 are good enough that the laptop doesn't need to throttle even under prolonged loads. The processor cools passively most of the time and the fan only kicks in under load. Even so, I'm hard pressed to hear the fan below 4000RPM. The CPU can boost to 2.9 GHz on both cores, but I've rarely seen it climb over 2.8 GHz. I'm not sure if this is due to the 15W TDP of the SoC or the laptop's conservative cooling policy, as core temperatures never climb over 75C in Prime95.

As far as programming goes, it's not struggled with anything I've thrown at it - from building large C++ codebases to running instances of IntelliJ and Android Studio (with the x86 Android emulator) together. The CPU is definitely faster than the i5-3230M in my old laptop.

Plus, for the first time, gaming isn't a terrible experience on integrated graphics if your expectations are low enough. I can get 30fps in Skyrim Special Edition at 720p and Medium graphics presets with 2X AA. DiRT3 and Mirror's Edge gave about 60fps at 720p with low settings. Sonic Mania works perfectly at 1080p 60fps. On the other hand, it struggled with returning a framerate greater than 30fps on Sonic Generations. Bear in mind that all this is with single channel 8GB RAM. Performance could be better with dual channel RAM.

Load times in all games are much shorter than they would be on a gaming console such as the PS4, thanks to the SSD.

Overall, as long as you don't mind running old games at 720p with some sacrifice in visual quality, the integrated Intel HD 520 should be adequate. At least it's good at smooth 4K video playback.

Keyboard

Really good! For years, I used to think that the AccuType keyboard on my Lenovo G580 was unbeatable, but that changed as soon as I laid my hands on the ThinkPad 13's keyboard. The keys are perfectly spaced, have amazingly deep travel, and have great texture. I really like the layout too - PgUp/PgDn + Ctrl makes it very easy to switch tabs, and Fn and the left Ctrl can be swapped in the BIOS. I don't miss keyboard backlighting, but I would really appreciate something like the ThinkLight found on older ThinkPads.

It turns out that the keyboard on the 13 is exactly the same as the one on the T460s. I learned this by ruining the stock keyboard when doing my weekly round of sterilising all my electronic devices. AliExpress didn't have the silver 13 keyboard but the 13's FRU manual showed that the T460s keyboard (albeit black instead of silver) could be directly dropped in. After a month of waiting to get it shipped from China, it was a simple 10 minute job to replace the keyboard. Props to Lenovo for reusing FRUs and keeping the 13 so serviceable.

I'm pretty happy with the touchpad as well. It has a smooth surface and has a very satisfying physical click to it. Gestures work well and it's supported out of the box on Linux. The TrackPoint and click buttons have become indispensable for programming. However, I'm disappointed about how fast the TrackPoint's coating wears off as it becomes really difficult to use the shallow TrackPoint once this happens. ThinkPads really should ship with some replacement caps.

Linux Support

I've been using an Ubuntu derivative distro on my laptop ever since I got it. In the first few months, I used Kubuntu 16.04 and then later on I switched to KDE Neon (which, incidentally, is based on Ubuntu 16.04). Linux support is remarkable and everything "just works" right out of the box. Older versions of the 4.x Linux kernel had some issues with the touchpad "desyncing" every 15 minutes, but this has by and large been fixed over the last few months. Thanks to the excellent Mesa drivers, 3D performance of the Intel HD 520 is very stable and actually works better than my friend's NVidia Quadro GPU did with Nouveau drivers (not in terms of frame rate, but stability). The Trackpoint works well and all the function keys work perfectly. Also, as mentioned earlier, battery life under Linux is great. I'm curious to see if the OneLink+ works and if the USB-C can output video on Linux.

In my opinion, there's nothing you're missing by running Linux on the 13 instead of Windows when it comes to hardware support.

Conclusion

The ThinkPad 13 is a nice laptop, but it's not for everyone. It fits my workflow perfectly with the good keyboard, display, and build quality. However, not everyone needs as many I/O ports as the 13 has and they would probably be better served by the numerous ultrabooks with better displays and bigger batteries. On the other hand, some might jump for a higher-end ThinkPad model like the T460s/T470s with better build quality and connectivity options. That said, the ThinkPad 13 does a lot of things right and fills a niche in the market for small laptops with good keyboards and several ports. A lot of compromises made in the ThinkPad 13 are reasonable and if you can live with these compromises, it definitely is worthy of consideration for your next laptop.

Akademy 2017
17 Aug 2017

Last month, I attended KDE's annual conference, Akademy 2017. This year, it was held in Almeria, a small Andalusian city on the south-east coast of Spain.

The name of the conference is no misspelling, it's part of KDE's age old tradition of naming everything to do with KDE with a 'k'.

It was a collection of amazing, life-changing experiences. It was my first solo trip abroad and it taught me so much about travel, KDE, and getting around a city with only a handful of broken phrases in the local language.

Travel

My trip began from the recently renamed Kempegowda International Airport, Bangalore. Though small for an international airport, its size works to its advantage as it is very easy to get around. Check-in and immigration were a breeze and I had a couple of hours to check out the loyalty card lounge, where I sipped soda water thinking about what the next seven days had in store.

The first leg of my trip was on an Etihad A320 to Abu Dhabi, a four-hour flight departing at 2210 on 20 July. The A320 isn't particularly unique equipment, but then again, it was a rather short leg. The crew onboard that flight seemed to be a mix of Asian and European staff.

Economy class in Etihad was much better than any other Economy class product I'd seen before. Ample legroom, very clean and comfortable seats, and an excellent IFE system. I was content looking at the HUD-style map visualisation which showed the AoA, vertical speed, and airspeed of the airplane.

On the way, I scribbled a quick diary entry and started reading part 1 of Sanderson's Stormlight Archive - 'The Way of Kings'.

Descent to Abu Dhabi airport started about 3:30 into the flight. Amber city lights illuminated the desert night sky. Even though it was past midnight, the plane's IFE reported an outside temperature of 35C. Disembarking from the plane, the muggy atmosphere hit me after spending four hours in an air-conditioned composite tube.

The airport was dominated by Etihad aircraft - mainly Airbus A330s, Boeing 787-8s, and Boeing 777-300ERs. There were also a number of other airlines part of the Etihad Airways Partners Alliance such as Alitalia, Jet Airways, and some Air Berlin equipment. As it was a relatively tight connection, I didn't stop to admire the birds for too long. I traversed a long terminal to reach the boarding gate for the connecting flight to Madrid.

The flight to Madrid was another Etihad-operated flight, only this time on the A320's larger sibling - the A330-200. This plane was markedly older than the A320 I had been on for the first leg of the trip. Fortunately, I got the port side window seat in a 2-5-2 configuration. The plane had a long take-off roll and took off a few minutes after 2am. Once we reached cruising altitude, I opened the window shade. The clear night sky was full of stars and I must have spent at least five minutes with my face glued to the window.

I tried to sleep, preparing myself for the long day ahead. Soon after I woke up, the plane landed at Madrid Barajas Airport and taxied for nearly half an hour to reach the terminal. After clearing immigration, I picked up my suitcase and waited for a bus which would take me to my next stop - the Madrid Atocha railway station. Located in the heart of the city, Atocha is one of Madrid's largest train stations and connects it to the rest of Spain. My train to Almeria was later that day - at 3:15 in the afternoon.

On reaching Atocha, I got my first good look at Madrid.

Akademy 2017

My facial expression was quite similar

Akademy 2017

I was struck by how orderly everything was, starting with the traffic. Cars gave the right of way to pedestrians. People walked on zebra crossings and cyclists stuck to the defined cycling lanes. A trivial detail, but it was a world apart from Bangalore. Shining examples of Baroque and Gothic architecture were scattered among newer establishments.

Having a few hours to kill before I had to catch my train, I roamed around Buen Retiro Park, one of Spain's largest public parks. It was a beautiful day, bright and sunny with the warmth balanced out by a light breeze.

My heavy suitcase compelled me to walk slowly, which let me take in as much as I could. Retiro Park is a popular stomping ground for joggers, skaters, and cyclists alike. Despite it being 11am on a weekday, I saw plenty of people jogging through the park. After this, I trudged through some quaint cobbled neighbourhoods with narrow roads and old apartment buildings dotted with small shops on the ground floor.

Maybe it was the sleep-deprivation or dehydration after a long flight, but everything felt so surreal! I had to pinch myself a few times - to believe that I had come thousands of miles from home and was actually travelling on my own in a foreign land.

I returned to Atocha and waited for my train. By this time, I came to realise that language was going to be a problem for this trip as very few people spoke English and my Spanish was limited to a few basic phrases - notably 'No hables Espanol' and 'Buenos Dias' :P Nevertheless, the kind folks at the station helped me find the train platform.

Trains in Spain are operated by state-owned train companies. In my case, I would be travelling on a Renfe train going to Almeria. The coaches are arranged in a 2-2 seating configuration, quite similar to those in airplanes, albeit with more legroom and charging ports. The speed of these trains is comparable to fast trains in India, with a top speed of about 160km/h. The 700km journey was scheduled to take about 7 hours. There was plenty of scenery on the way, with sloping mountain ranges and deserted valleys.

Akademy 2017

Big windows encouraged sightseeing

After seven hours, I reached the Almeria railway station at 10pm. According to Google Maps, the hostel which KDE had booked for all the attendees was only 800m away - well within walking distance. However, unbeknownst to me, I started walking in the opposite direction (my phone doesn't have a compass!). This kept increasing the Google Maps ETA, and only when I was 2km off track did I realise something was very wrong. Fortunately, I managed to get a taxi to take me to Residencia Civitas - a university hostel where all the Akademy attendees would be staying for the week.

After checking in to Civitas, I made my way to the double room. Judging from the baggage and the shoes in the corner, someone had moved in here before I did. About half an hour later, I found out who - Rahul Yadav, a fourth year student at DTU, Delhi. Exhausted after an eventful day of travel, I switched off the lights and went to sleep.

The Conference

The next day, I got to see the other Akademy attendees over breakfast at Civitas. In all, there were about 60-70 attendees, which I was told was slightly fewer than previous years.

The conference was held at University of Almería, located a few kilometres from the hostel. KDE had hired a public bus for transport to and from the hostel for all the days of the conference. The University was a stone's throw from the Andalusian coastline. After being seated in one of the larger lecture halls, Akademy 2017 was underway.

Akademy 2017

Konqi! And KDE Neon!

The keynote talk was by Robert Kayne of Metabrainz, about the story of how MusicBrainz was formed out of the ashes of CDDB. The talk set the tone for the first day of Akademy.

The coffee break after the first two talks was much needed. I was grappling with sleep deprivation and jet lag from the last two days and needed all the caffeine and sugar I could get to keep myself going for the rest of the day. Over coffee, I caught up with some KDE developers I met at QtCon.

Throughout the day, there were a lot of good talks, notably 'A bit on functional programming', and five quick lightning talks on a variety of topics. Soon after this, it was time for my very own talk - 'An Introduction to the KIO Library'.

The audience for my talk consisted of developers with several times my experience. Much to my delight, the maintainer of the KIO Library, David Faure was in the audience as well!

Here's where I learned another thing about giving presentations - they never quite go as well as they seem to when rehearsed alone. I ended up speaking faster than I planned to, leaving more than enough time for a QA round. Just as I had been wary of, I was asked some questions about the low-level implementation of KIO, which thankfully David fielded for me. I was perspiring after the presentation, and it wasn't the temperature which was the problem 😅 A thumbs up from David afterwards gave me some confidence that I had done alright.

Following this, I enjoyed David Edmundson's talk about binding properties in QML. The next presentation I attended is where things ended up getting heated. Paul Brown went into detail about everything wrong with Kirigami's TechBase page. This drew some, for lack of a better word, passionate people to retaliate. Though it was only supposed to be a 10-minute lightning talk, the debate raged on for half an hour between the two schools of thought on how TechBase documentation should be written. The only thing which brought the discussion to an end was the bus back to Civitas leaving at 8pm sharp.

Still having a bit of energy left after the conference, I was ready to explore this Andalusian city. One thing which worked out nicely on this trip was the late sunset in Spain at this time of the year. It is as bright as day even at around 9pm and the light only starts waning at around 9:30pm. This gave Rahul and me plenty of time to head to the beach, which was about a 20-minute walk from the hostel.

Here, it struck me how much I loved the way of life here.

Unlike Madrid, Almeria is not known as a tourist destination, so most of the people living there were locals. In a span of about half an hour I watched how an evening unfolds in this city. As the sun started dipping below the horizon, families with kids, college couples, and high-school friends trickled from the beach to the boardwalk for dinner. A typical evening looked delightfully simple and laid-back in this peaceful city.

The boardwalk had plenty of variety on offer - from seafood to Italian cuisine. One place caught my eye, a small cafe serving Doner Kebab called 'Taj Mahal'. After a couple of days of eating nothing but bland sandwiches, Rahul and I were game for anything with a hint of spice to it. As I had done in Madrid, I tried ordering Doner Kebab using a mixture of broken Spanish and improvised sign language, only to receive a reply from the owner in Hindi! It turned out that the owner of the restaurant was Pakistani and had migrated to Spain seven years ago. Rahul made a point of asking for more chilli - and the Doner kebabs we got were not lacking in spice. I had more chilli in that one kebab than I normally would have in a week. At least it was a change from Spanish food, which I wasn't all that fond of.

Akademy 2017

View from the boardwalk

Akademy 2017

The next day was similar to the first, only a lot more fun. I spent a lot of time interacting with the people from KDE India. I also got to know my GSoC mentor, Boudhayan Gupta (@BaloneyGeek). The talks for this day were as good as the ones the day before, and I got to learn about Neon Docker images, the story behind KDE's Slimbook laptop, and things to look forward to in C++17/20.

The talks were wrapped up with the Akademy Awards 2017.

Akademy 2017

David Faure and Kevin Ottens

There were still 3 hours of sunlight left after the conference and, not being ones to waste it, we headed straight for the beach. Boudhayan and I made a treacherous excursion out to a rocky pier covered with moss and glistening with seawater. My well-worn sandals were the only thing keeping me from slipping onto a bunch of sharply angled stones jutting out from the waves. Against my better judgement, I managed to reach the end of the pier, only to see a couple of crabs take an interest in us. With the tide rising and the sun falling, we couldn't afford to stay much longer, so we headed back to the beach the way we came. Not long after, I couldn't help myself and headed into the water with an enthusiasm I had only known as a child. Probably more so for BaloneyGeek though, who went in headfirst with his three-week-old Moto G5+ in his pocket (spoiler: the phone was irrevocably damaged by half a minute of immersion in saltwater). In the midst of this, we found a bunch of KDE folks hanging out on the beach with several boxes of pizza and bottles of beer. Free food!

Exhausted but exhilarated, we headed back to Civitas to end another very memorable day in Almeria.

Akademy 2017

Estacion Intermodal, a hub for public transport in Almeria

With the talks completed, Akademy 2017 moved on to its second leg, which consisted more of BoFs (Birds of a Feather) and workshops.

The QML workshop organised by Anu was timely, as my relationship with QML has been hot and cold. I would always go in circles with the QML Getting Started tutorials, as there aren't as many examples of how to use QtQuick 2.x as there are for, say, Qt Widgets. I understood how to integrate JavaScript with the QML GUI and I will probably get around to making a project with the toolkit when I get the time. Paul Brown held a BoF about writing good user documentation and deconstructed some of the more pretentious descriptions of software, with suggestions on how to avoid falling into the same pitfalls. I sat in on a few more BoFs after this, but most of it went over my head as I wasn't contributing to the projects discussed there.

Feeling a bit weary of the beach, Rahul and I decided to explore the inner parts of the city instead. We planned to go to the Alcazaba of Almeria, a thousand-year-old fortress in the city. On the way, we found a small froyo shop and ordered a scoop with chocolate sauce and lemon sauce. Best couple of euros spent ever! I loved the tart flavour of the froyo and how it complemented the toppings with its texture.

This gastronomic digression aside, we scaled a part of the fort only to find it locked off with a massive iron door. I got the impression that the fort was rarely ever open to begin with. With darkness fast approaching, we found ourselves in a dodgy neighbourhood and we tried to get out as fast as we could without drawing too much attention to ourselves. This brought an end to my fourth night in Almeria.

Akademy 2017

View from Alcazaba

The BoFs continued throughout the 25th, the last full day of Akademy 2017. I participated in the GSoC BoF where KDE's plans for future GSoCs, SoKs, and GCIs were discussed (isn't that a lot of acronyms!). Finally, this was one topic where I could contribute to the discussion. If there was any takeaway from the discussion for GSoC aspirants, it is to start as early as you can!

I sat in on some other BoFs as well, but most of the topics discussed were out of my scope. The Mycroft and VDG BoF did have some interesting exchanges of ideas for future projects that I might consider working on if I get free time in the future.

Rahul was out in the city that day, so I had the evening to explore Almeria all by myself.

I fired up Google Maps to see if there was anything of interest nearby. To the west of the hostel was a canal I hadn't seen previously, so I thought it would be worth a trip. Unfortunately, because of my poor navigation skills and my phone's lack of a compass, I ended up circling a flyover for a while before ditching the plan. I decided to go to the beach as a reference point and explore from there.

What was supposed to be a reference point ended up becoming the destination. There was still plenty of sunlight and the water wasn't too cold. I put one toe in the water, and then a foot.

And then, I ran.

Running barefoot along the coastline was one of the best memories I have of the trip. For once, I didn't think twice about what I was doing. It was pure liberation. I didn't feel the exertion or the pebbles pounding my feet.

Akademy 2017

Almeria's Beaches

The end of the coastline had a small fort and a dirt trail which I would've very much wanted to cycle on. After watching the sun sink into the sea, I ran till the other end of the boardwalk to find an Italian restaurant with vegetarian spinach spaghetti. Served with a piece of freshly baked bread, dinner was delicious and capped off yet another amazing day in Almeria.

Akademy 2017

Dinner time!

Day Trip

The 26th was the final day of the conference. Unlike the other days, the conference was only for half a day with the rest of the day kept aside for a day trip. I cannot comment on how the conference went on this day as I had to rush back to the hostel to retrieve my passport, which was necessary to attend the day trip.

Right around 2 in the afternoon we boarded the bus for the day trip. Our first stop was the Plataforma Solar de Almería, a solar energy research plant in Almeria. It houses some large heliostats for focussing sunlight at a point on a tower. This can be used for heating water and producing electricity.

There was another facility used for testing the tolerance of spacecraft heat shields by subjecting them to temperatures in excess of 2000C by focussing sunlight.

Akademy 2017

Heliostats focus at the top of the tower

The next stop was at the San José village. Though not too far from Almeria, the village is frequented by tourists much more than Almeria is and has a very different vibe. The village is known for its beaches, pristine clear waters, and white buildings. I was told that the village was used in the shooting of some films such as The Good, The Bad, and The Ugly.

Akademy 2017

Our final stop for the day was at the Rodalquilar Gold Mine. Lost to time, the mine had been shut down in 1966 due to the environmental hazards of using cyanide in the process to sediment gold. The mine wouldn't have looked out of place in a video-game or an action movie, and indeed, it was used in the filming of Indiana Jones and the Last Crusade. There was a short trek from the base of the mine to a trail which wrapped around a hill. After descending from the top we headed back to the hostel.

Akademy 2017

This concluded my stay in Almeria.

Madrid

After checking out of the hostel early the next morning, I caught a train to Madrid. I had a day in the city before my flight to Bangalore the next day.

I reached Atocha at about 2 in the afternoon and checked in to a hotel. I spent the entire evening exploring Madrid on foot and an electric bicycle through the BiciMAD rental service.

Photo Dump

Akademy 2017

The Return

My flight back home was the following morning, on 28 July. The first leg of the return was, yet again, on an Etihad aircraft bound for Abu Dhabi. This time it was an A330-300. It was an emotional 8-hour flight - with the memories of Akademy still fresh in my mind. To top it off, I finished EarthBound (excellent game!) during the flight.

Descent into Abu Dhabi started about half an hour before landing. This time though, I got to see the bizarre Terminal 1 dome of the Abu Dhabi airport. The Middle East has always been a mystical place for me. The prices of food and drink in the terminal were hard to stomach - 500mL of water was an outrageous 8 UAE Dirhams (₹140)! Thankfully it wasn't a very long layover, so I didn't have to spend too much.

Akademy 2017

Note to self: may cause tripping if stared at for too long

The next leg was a direct flight to Bangalore, on another Etihad A330. Compared to all the travel I had done in the last two days, the four hour flight almost felt too short. I managed to finish 'Your Lie in April' on this leg.

I had mixed emotions on landing in Bangalore - I was glad to have reached home, but a bit sad that I had to return to college in only two days.

Akademy 2017 was an amazing trip and I am very grateful to KDE for letting me present at Akademy and for giving me the means of reaching there. I hope I can make more trips like these in the future!

On College Students
20 Jul 2017

As a school kid, I was always fascinated by meeting college students from top tier colleges. There was something surreal in the way they talked and how confidently they stood, with T-shirts proudly proclaiming the name of their alma mater. I used to have the impression that no matter what came in the future, there was a certain justification of their abilities and intelligence. Perhaps society had conditioned me to think as such. Maybe their apparent confidence was just a projection of how I thought it would feel to be in a top tier college.

It struck me just recently that everyone in my college is of the same surreal breed I used to look up to. But now, the magic is lost on me. It's no longer the case that simply being admitted to a Tier-1 college gives one opportunities otherwise impossible. At the same time, the name of the college on your degree matters little if you haven't done enough legwork in college to justify being called an "Engineer".

KIO Stash - Shipped!
04 Jul 2017

It's been almost a year since I finished my GSoC project for implementing discontinuous file selections as a KIOSlave.

The ioslave is now officially shipped in the KDE ExtraGear Utils software package and it can be downloaded from here: https://download.kde.org/stable/kio-stash/kio-stash-1.0.tar.xz.mirrorlist

Introduction

Selecting multiple files in any file manager for copying and pasting has never been a pleasant experience, especially if the files are in a non-continuous order. Often, when selecting files using Ctrl+A or the selection tool, we find that we need only a subset of the files we have selected. This leads to the unwieldy operation of removing files from our selection. Of course, the common workaround is to create a new folder and put all the items in it prior to copying, but this is a very inefficient and slow process if large files need to be copied. Moreover, Ctrl+Click requires fine motor skills to avoid losing the entire selection of files.

This is an original project with a novel solution to this problem. My solution is to add a virtual folder to all KIO applications, where links to files and folders can be temporarily saved for a session. The files and folders are "staged" in this virtual folder. Files can be added to it using all the regular file management operations such as Move, Copy, and Paste, or by drag and drop. Hence, complex file operations such as moving files across many devices can be made easy by staging the operation before performing it.

Project Overview

This project consists of the following modules. As there is no existing implementation in KIO for managing virtual directories, all the following modules were written completely from scratch.

KIO Slave

The KIO slave is the backbone of the project. This KIO slave is responsible for interfacing with the GUI of a KDE application and provides the methods for various operations such as copying, deleting, and renaming files. All operations on the KIO slave are applied on a virtual stash filesystem (explained below). These operations are applied through inter-process communication using Qt's D-Bus API.

The advantage of the KIO slave is that it provides a consistent experience throughout the entire KDE suite of applications. Hence, this feature would work with all KIO compatible applications.

Stash File System

The Stash File System (SFS) is used for virtually staging all the files and directories added to the ioslave. When a file is copied to the SFS, a new File Node is created for it under the folder to which it is copied. On copying a folder, a new Directory Node is created on the SFS with all the files and directories under it copied recursively, as dictated by KIO. The SFS is a very important feature of the project as it allows the user to create folders and move items on the stash ioslave without touching the physical file system at all. Once a selection is curated on the ioslave, it can be seamlessly copied to the physical filesystem.

The SFS is implemented using a QHash, with the node's location on the SFS as the key and a StashNodeData object as the value. The StashNodeData object holds all the properties (such as the file name, source, and, for directories, the children files) of a given node in the SFS.

Memory use of the SFS is nominal on a per file basis - each file staged on the SFS requires roughly 300 bytes of memory.
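
To make the structure concrete, here is a minimal, self-contained sketch of such a node table using plain Qt containers. It is not the actual kio-stash source; the field names inside StashNodeData and the example paths are assumptions based on the description above.

// Illustrative sketch of the Stash File System's node table, based on the
// description above - not the actual kio-stash source.
#include <QHash>
#include <QString>
#include <QStringList>
#include <QDebug>

enum class NodeType { FileNode, DirectoryNode };

struct StashNodeData
{
    QString displayName;  // name shown in the virtual folder
    QString source;       // path of the original file on disk (empty for directories)
    NodeType type;
    QStringList children; // stash paths of child nodes, for DirectoryNodes
};

int main()
{
    // Key: location of the node inside the SFS; value: its properties.
    QHash<QString, StashNodeData> fileSystem;

    fileSystem.insert(QStringLiteral("/docs"),
                      {QStringLiteral("docs"), QString(), NodeType::DirectoryNode, {}});
    fileSystem.insert(QStringLiteral("/docs/report.pdf"),
                      {QStringLiteral("report.pdf"), QStringLiteral("/home/user/report.pdf"),
                       NodeType::FileNode, {}});
    fileSystem[QStringLiteral("/docs")].children << QStringLiteral("/docs/report.pdf");

    // "Deleting" a stashed file only removes the virtual node; the source is untouched.
    fileSystem.remove(QStringLiteral("/docs/report.pdf"));
    qDebug() << "Nodes left in the stash:" << fileSystem.keys();
    return 0;
}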

Stash Daemon

The Stash File System runs in the KDE Daemon (kded5) container process. An object of the SFS is created on startup when the daemon is initialized. The daemon responds to calls from the ioslave communicated over the session bus and creates and removes nodes in the SFS.
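
A rough sketch of what such a call from the slave side could look like with Qt's D-Bus API is shown below. The service name, object path, interface, and method here are placeholders I've made up for illustration; they are not the names used by the real stash daemon.

// Sketch of how the ioslave could reach the stash daemon over the session bus.
// The service, object path, interface, and method names below are placeholders
// for illustration, not the ones used by kio-stash.
#include <QCoreApplication>
#include <QDBusConnection>
#include <QDBusInterface>
#include <QDBusReply>
#include <QDebug>

int main(int argc, char *argv[])
{
    QCoreApplication app(argc, argv);

    QDBusInterface daemon(QStringLiteral("org.kde.kded5"),          // service (assumed)
                          QStringLiteral("/modules/stashnotifier"), // object path (assumed)
                          QStringLiteral("org.kde.StashNotifier"),  // interface (assumed)
                          QDBusConnection::sessionBus());

    // Ask the daemon to create a file node in the SFS and wait for its reply.
    QDBusReply<bool> reply = daemon.call(QStringLiteral("addPath"),
                                         QStringLiteral("/home/user/report.pdf"), // source file
                                         QStringLiteral("/docs/report.pdf"));     // stash destination
    if (reply.isValid() && reply.value())
        qDebug() << "Node created in the stash daemon";
    else
        qDebug() << "D-Bus call failed:" << reply.error().message();
    return 0;
}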

Installation

Make sure you have KF5 backports with all the KDE dependency libraries installed before you build!

Download the tarball to your favorite folder, extract, and run:

mkdir build
cd build
cmake -DCMAKE_INSTALL_PREFIX=/usr -DKDE_INSTALL_USE_QT_SYS_PATHS=TRUE ..
make
sudo make install
kdeinit5

Result

Open Dolphin and set the path as stash:/. This directory is completely 'virtual' and anything added to it will not consume any extra disk space. All basic file operations such as copy, paste, and move should work.

Folders can be created, curated, and renamed on this virtual folder itself. However, as this is a virtual directory, files cannot be created on it.

Copying from remote locations such as mtp:/ may not work, however. Please report any bugs you find at http://bugs.kde.org/.

]]>
<![CDATA[Quadcopters - A Hitchhiker's Guide to the Sky]]>

We have liftoff!

liftoff

Cue me trying to not get my fingers cut off

The quadcopter has been built! I had some problems during the building process, so this is a guide which will hopefully make things easier for people to start off with making their own quadcopters.

I suggest going

]]>
http://localhost:2368/2017/06/13/quadcopters-a-hitchhikers-guide-to-the-sky/5b7262b9231bbd18297eafc1Tue, 13 Jun 2017 04:00:00 GMTQuadcopters - A Hitchhiker's Guide to the Sky

We have liftoff!

Quadcopters - A Hitchhiker's Guide to the Sky

Cue me trying to not get my fingers cut off

The quadcopter has been built! I had some problems during the building process, so this is a guide which will hopefully make things easier for people to start off with making their own quadcopters.

I suggest going through all these websites thoroughly before purchasing a single part for your quadcopter. It's not an exhaustive list, so I'll keep updating it as I find more useful links:

If you live in India, the following sites are good places for shopping for parts:

RCBazaar and RCDhamaka have stores in Bangalore. Robu.in is based in Pune.

Hardware Assembly

For clarity, the components I have used in my build are:

  • 1045 Propellers x4
  • REES52 DJI F450 Frame
  • REES52 CC3D F1 Flight Computer
  • REES52 1000KV A2212 30A Brushless Motor x4
  • REES52 SimonK 30A Electronic Speed Controllers x4
  • APM 4-axis Power Distribution Board Type-B1
  • SunRobotics 2200mAh 3S 35C LiPo Battery
  • FlySky FSCT6B Computer Transmitter

Most of these parts can be picked straight off Amazon. REES52 has a good bundle for a pair of propellers, 1000 KV motor, and ESC which I would recommend buying to keep costs low.

Frame

The first thing you would want to do is start off with the frame of the quadcopter. If you go for a DJI F450 frame (or one of its hundred clones), you will get a box with four arms (technically called booms) for mounting the motors and ESCs, a base which doubles as a power distribution board, and a board to hold the top of the copter together. It's pretty simple to set up and all you need is two Allen keys to screw everything together.

I suggest labeling each boom with a number and the direction in which its motor is supposed to spin. The below diagram is a good starting point:

Quadcopters - A Hitchhiker's Guide to the Sky
Speaking of labeling, label everything. It only takes a few seconds and it can save many hours of frustration later on. Of particular note is the handedness (is there a better term?) of the propeller. Left-handed propellers turn anticlockwise and right-handed propellers turn clockwise. Mounting a propeller the wrong way will cause it to produce thrust in the opposite direction.

Motors

Once you've set up the frame, the next thing you should do is take a look at the motors. These are brushless motors and they are very different from brushed motors. For starters, they have three wires instead of the two on brushed motors. Another peculiarity is that they are out-runner motors, which means the case of the motor rotates with the propeller rather than the motor shaft alone.

Quadcopters - A Hitchhiker's Guide to the Sky
Despite being a lot more power efficient than brushed motors, these motors can generate a great deal of heat when running at full power. I would not recommend testing them at full power on the ground as there won't be any airflow to cool the motor down. This can permanently damage your motor, and if you're unlucky, your ESC as well.

Anyway, the motors can be screwed directly into the booms of the quadcopter. Most motors come with a few accessories - an adapter for the base to mount it on to differently keyed frames and a propeller shaft with a nose tip. Don't worry if the propeller shaft seems to wobble or come loose. It's designed to only be fully secured when the propellers are mounted on the motor.

The direction of rotation is determined by the ESC, so we don't need to worry about that at the moment.

Electronic Speed Controllers (ESCs)

The ESCs link the flight computer (FC) with the motors. The input to the ESC is a PWM signal fed over the BEC connector. Each ESC has its own 8-bit SoC which sends signals to the brushless motor to turn it at the desired RPM. In the FC settings, you can adjust the ESC update frequency, which is usually between 50Hz and 490Hz.

A brief digression on the BEC - the thinner wires at the power input side of the ESC. Known as the Battery Eliminator Circuit, it steps down the voltage of the battery to a comfortable 5V for powering components such as the FC or any other devices you may want to connect to the aircraft. The FC only needs power from one BEC and it may even be harmful to connect more than one BEC to the FC. There are some suggestions here on how to take care of this problem.

The ESCs regulate the amount of power going to the motor depending on throttle input and can get very hot as well. Mounting the ESCs under the booms gives them sufficient airflow to cool down properly. You want to make sure that the side with the three female bullet connectors is facing the male bullet connectors of the motor. The other side, with the male connector and the BEC, goes to the power distribution board of the quadcopter.

Mount the ESCs on to the booms with rubber bands, or better, zip-ties.

As with all electronics with large capacitors, ESCs can have spectacular explosions when things go wrong. Short circuits or using the wrong polarity on the power input side of the ESC can cause it to get damaged in a matter of seconds.

On the other hand, things are a lot more flexible on the power output side of the ESC. The three output connectors can be connected to the motor in any order; swapping any two of the three wires reverses the direction of rotation.

Quadcopters - A Hitchhiker's Guide to the Sky

Power Distribution Board

The frame of your quadcopter probably will have a power distribution board of its own with solder points. However, I used a separate power distribution board with the T-type connectors for the ESCs and the battery. It also has ports for the BECs and corresponding signal wires with a single 5V DC output.

This went right under the base of the quadcopter. It's a bit of a tight fit to get all the ESC wires under the base. Again, use zip-ties to secure it to the base.

The ground clearance is quite low with the power distribution board attached. You can buy a landing gear or make your own to increase this.

I made a DIY landing gear for my own quadcopter using a badminton shuttlecock tube. It absorbs shocks nicely and gives me just enough ground clearance to mount more components.

Quadcopters - A Hitchhiker's Guide to the Sky

Flight Computer and Battery

Now for the good stuff: go ahead and mount the CC3D on the top frame of the quadcopter. Most CC3Ds come with some plastic adapters and a sticky sheet for sticking the CC3D onto the quadcopter frame. Take note of the arrow on the CC3D and make sure it is facing the direction in which you want your quadcopter to fly forward. Make sure you leave enough room for keeping the mini USB port and the servo header pins accessible!

After binding the transmitter to the receiver, plug it into the receiver port of the CC3D. If you use the FlySky CT6B transmitter/receiver, the receiver probably won't work when directly connected to the CC3D on USB power. If so, you can use an Arduino's 5V and GND to power the receiver for testing it. When flying the quadcopter on Li-Po power, the CC3D will be powered by the ESC BECs which will give it enough headroom to power the receiver.

The receiver can be mounted pretty much anywhere as it is quite small. I mounted the receiver on my quadcopter's boom.

The final step is to add the Li-Po battery to the frame. Some prefer to mount the battery to the base of the quadcopter, but I found mounting the battery right under the top frame (under the CC3D) from E-W was much better for the quadcopter's stability. My guess is that this keeps the vertical CoG of the aircraft closer to the line of application of force, reducing the torque when it makes a maneuver.

Again, the place where you mount your battery is dependent on the size of the battery. Secure the battery with as many zip-ties as you want. There's no such thing as using too many zip-ties to secure components on quadcopters.

Putting It Together

ESC Calibration

Before you start configuring your CC3D, it's a good idea to make sure your ESCs and motors are working properly. For this, connect the ESC directly to the battery and plug the ESC's BEC straight into the 3rd channel (the throttle channel) of your receiver. Keep the transmitter switched on with zero throttle input. On connecting the battery, you will hear a calibration beep from the motor. Put the throttle to max power and hold till you hear another calibration beep. After bringing the throttle back to zero input, you should now be able to drive the motor by varying the throttle input.

In case the motor spins the wrong way, just swap the red and yellow wires of the motor.

Cable Management

This isn't really cable management, but I couldn't think of a better title for this section 😅

Connect the ESCs' BECs to the CC3D. The order will have to be changed later when setting up the CC3D so don't worry about that for now. I looped the cables through the holes in the top of the frame so they weren't hanging off the side of the quadcopter. The white wire should face upwards.

The receiver is quite straightforward to set up. The first cable has three leads: one for signal and two for power. This goes into the Channel 1 input of the receiver. The rest of the cables have only one signal lead each; connect them in the same order as on the receiver port side of the leads.

The male connectors for the ESC can go right into the power distribution board.

Software Configuration

The CC3D is a nifty little flight controller. It can be flashed with different FC firmwares and can be extended with GPS and telemetry capabilities. For now, we will work on a much more humble task - flashing it with an FC firmware.

As the CC3D has been around for a while, it has a good amount of community support in terms of open-source FC firmwares. Two popular ones are Cleanflight and LibrePilot, the latter built out of the ashes of the now-defunct OpenPilot.

The firmware gives the CC3D the brains to control the aircraft and make decisions based on the accelerometer and gyroscope readings. It runs a sophisticated PID control loop to do so.
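
For a flavour of what that control loop does, here is a bare-bones, single-axis PID update in C++. This is a textbook illustration rather than code from Cleanflight or LibrePilot, and the gains are made up.

// Textbook single-axis PID update - an illustration of the idea, not flight
// controller code. The gains and loop timing here are made up.
struct Pid
{
    float kp, ki, kd;  // tuning gains
    float integral;    // accumulated error
    float lastError;   // error from the previous update

    // setpoint: desired rotation rate (from the sticks), measured: gyro reading
    float update(float setpoint, float measured, float dt)
    {
        const float error = setpoint - measured;
        integral += error * dt;
        const float derivative = (error - lastError) / dt;
        lastError = error;
        return kp * error + ki * integral + kd * derivative; // correction applied to the motors
    }
};

int main()
{
    Pid rollRate{0.8f, 0.05f, 0.02f, 0.0f, 0.0f};
    // Called every loop iteration (e.g. every 2 ms) with fresh gyro data; the
    // output nudges the motor commands to close the error.
    float correction = rollRate.update(30.0f /* deg/s wanted */, 24.5f /* deg/s measured */, 0.002f);
    (void)correction;
    return 0;
}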

Both firmwares come with cross-platform desktop apps for configuring the FC. Cleanflight uses a shiny Chrome web-app and LibrePilot has a more traditional Qt desktop app. I liked Cleanflight at first, but flashing it on the CC3D is a mess and I couldn't get it to work properly. On the other hand, LibrePilot is a joy to flash on the CC3D - the desktop app even auto-updates the CC3D's firmware if it is out of date. Being unable to flash Cleanflight, I set up the quadcopter in LibrePilot instead.

LibrePilot has a pretty easy 'Vehicle Setup Wizard' which, if followed correctly, will get your quadcopter to airworthy shape 95% of the time. You would want to keep the props off during the whole procedure, especially during ESC calibration.

Following this, you can mount the propellers by selecting the right adapter from the adapter kit each prop comes with. Stick it into the reverse side of the propeller and mount the propeller such that the side with the pitch and diameter printed on it is facing upwards. You can then mount the nose tip which comes with the motor using a thin screwdriver or Allen key. All the thrust of the quadcopter is generated by the propellers, so you'll want to make sure that this is quite tight. This should make the propeller shaft impossible to pull upwards, and if it isn't, you would want to return the motor before you have a catastrophic accident. The last thing you want is a propeller dislodging itself from your quadcopter mid-flight.

Fly!

Or at least try to if you're a beginner :P If you're using LibrePilot, the default setting has stability assist which makes flying easier. For the more experienced (or gung ho) type of pilot, you can change the settings to use Rate/Acro mode to get full manual control of the aircraft.

Another thing worth looking at is tuning the PID values of the FC for better performance. However, LibrePilot's default values are good enough so this isn't necessary.

And that's it for this tutorial. I'll be writing more as I get more experienced with flying my quadcopter.

]]>
<![CDATA[The Y2038 Glitch]]>

I had written this article a bit more than a year ago for a college magazine. This topic has been done to death in a Q/A format, so if you're looking for something new about Y2038, you probably won't find it here

Time has always been a finicky thing

]]>
http://localhost:2368/2017/06/09/the-y2038-glitch/5b726258231bbd18297eafbcFri, 09 Jun 2017 10:00:00 GMT

I had written this article a bit more than a year ago for a college magazine. This topic has been done to death in a Q/A format, so if you're looking for something new about Y2038, you probably won't find it here.

Time has always been a finicky thing to deal with. Our perception of time without any stimulus is limited to less than an hour. We would be severely crippled without our abundance of electronic devices synchronized by internet atomic clocks. Unfortunately, these electronic devices we're so reliant on are heading towards a time-based computer disaster on the same scale as this millennium's Y2K bug - known as the Y2038 problem.

How do computers measure time and what is the Y2038 bug?

All UNIX derivative systems (including your iPhone or Android smartphone) measure time by counting the number of seconds that have elapsed since 00:00 UTC, 1 January 1970. This day and time is known as the UNIX epoch. While this method of measuring time has served us well for a number of years, it faces a severe limitation - the time_t variable (an int variable, a part of a program used to hold integral values) which UNIX uses to store the number of seconds since the epoch is only a 32-bit data type on older 32-bit computers. This means the time_t variable can only count up to 2³¹ - 1 seconds before the counter overflows.

Note: 32-bit systems can hold a 64-bit int by splitting it into two words, each 32 bits long. However, the default int size on 32-bit systems is only 32 bits and a 64-bit int has to be coded for separately.

So how long do we have before this overflow?

Exactly 2³¹ - 1 seconds after 00:00 UTC, 1 January 1970, which is 03:14:07 UTC, 19 January 2038.

So what will happen?

The overflow in the integer value will cause time_t to wrap around to -(2³¹) seconds, which corresponds to 20:45:52 UTC, 13 December 1901. This is actually a much more serious situation than just your computer showing a funny date. It has disastrous consequences for systems with 32-bit CPUs, as BIOS software, file systems, databases, network security certificates and a wide variety of embedded hardware will fail to work. With huge critical machinery such as old electrical power stations controlled by 32-bit computers, an unmitigated catastrophe is unavoidable.
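
If you are on a machine with a 64-bit time_t (any modern Linux desktop, say), you can see both dates for yourself with a few lines of C++:

// Prints the last second a signed 32-bit time_t can represent and the value it
// wraps around to. Assumes a platform with a 64-bit time_t, e.g. a modern
// Linux desktop.
#include <cstdint>
#include <ctime>
#include <iostream>

int main()
{
    std::time_t lastSecond = INT32_MAX; //  2^31 - 1 seconds after the epoch
    std::time_t wrapped    = INT32_MIN; // -2^31 seconds, i.e. before the epoch
    std::cout << "32-bit time_t ends at:  " << std::asctime(std::gmtime(&lastSecond));
    std::cout << "...and wraps around to: " << std::asctime(std::gmtime(&wrapped));
    // Expected output:
    // Tue Jan 19 03:14:07 2038
    // Fri Dec 13 20:45:52 1901
    return 0;
}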

Oh no! Does that mean my laptop and smartphone will stop working as well?

It depends. The 32-bit time_t variable has been deprecated and replaced by a 64-bit time_t variable, which will tide us over for roughly the next 292 billion years. 64-bit CPUs are the standard nowadays and most computers running a 64-bit OS on a 64-bit CPU will not face any such consequences from the Y2038 bug as they use the 64-bit time_t variable. However, smartphones have only recently shifted to 64-bit processors, and 32-bit smartphones and computers will be affected if their software is not updated to use a 64-bit time_t. That is, of course, only if you are still using them 23 years from today :P

But how did this even happen? Why couldn't we just use a 64-bit time_t variable in the first place? It's only 4 bytes bigger!

Short answer: Legacy.

Long answer: The UNIX kernel was created at AT&T's Bell Labs in the 1970s. There were many competent programmers working on the project, such as Dennis Ritchie of C fame. Initially, Bell engineers used a different method for counting time, but they found that the counter would only work for about 2.5 years. As UNIX was planned to be a long-term project, they changed the time counter to a 4-byte integer. This, of course, was also limited to a very finite amount of time. However, in the 1970s the consensus was that computers weren't going to stick around for that much longer and a 60-odd year window was "good enough".

Couldn't they have just used a 64-bit int anyway?

Not really. Computer resources were scarce in the 1970s and 8 bytes of memory to store time_t was more than engineers were willing to give.

Wait a second! Why doesn't time_t use an unsigned int? Surely UNIX wasn't programmed expecting us to go back in time!

The time_t variable uses a signed int to account for dates before the UNIX epoch. Dates before 1 January 1970 are represented as a negative number of seconds, which is why an overflow would take the computer's date back to 1901.

But this is 2017! Can't we just upgrade the time_t variable to 64-bits?

Yes we can. In all new software and 64-bit operating systems, the time_t variable has been changed to 8 bytes. The problem is that legacy software built with older compilers may not be straightforward to recompile against a newer 8-byte time_t. The problem doesn't stop there. Many embedded systems and microcontrollers (including the popular Arduino ATmega based platform) use 8-bit and 16-bit CPUs which cannot natively handle an 8-byte integer with their small word lengths. Some hardware applications may use proprietary firmware which won't receive updates.

How can we fix the Y2038 problem?

There isn't a one-size-fits-all solution for this since the problem affects several computer architectures, hardware, and software. While it might be easier to just replace all old computers with new 64-bit capable ones, there are still many areas where old 32-bit code is prevalent. Furthermore, the cost of upgrading all the affected hardware is simply exorbitant.

If we look at the lessons learned from the past, the Y2K bug was rectified because of the media hype pressuring businesses to update their software to accommodate 4-digit years. Although the Y2038 bug is much more critical and harder to understand than the Y2K bug, there is hope that the same pressure will bring change.

But for now, only time will tell.

Credits: xkcd

]]>
<![CDATA[Quadcopters - The Beginning!]]>

I've been gravitating more towards working on electronics projects. It's a nice change from pecking the keyboard all day and it has plenty of interesting challenges of its own. The biggest difference I found is that you just can't take your hardware working correctly for granted. Things break and catch

]]>
http://localhost:2368/2017/05/26/quadcopters-the-beginning/5b72639a231bbd18297eafc7Fri, 26 May 2017 04:00:00 GMT

I've been gravitating more towards working on electronics projects. It's a nice change from pecking the keyboard all day and it has plenty of interesting challenges of its own. The biggest difference I found is that you just can't take your hardware working correctly for granted. Things break and catch fire all too easily if you're not careful with what you're doing.

My plan is to make a quadcopter from scratch (a rather loose term) and give it some ability to navigate unsupervised. This approach is (aptly?) called Simultaneous Localisation and Mapping, or SLAM. SLAM is by no means a cheap affair, be it in terms of hardware or software complexity. Using SLAM outdoors normally requires some kind of 3D localiser (such as a depth camera or LIDAR), GPS, and a fairly beefy computer to map the environment.

Of course, I'm setting my expectations low, and I will be very happy if I can get something out of this which is 10% as good as a human flying quadcopter by hand.

It all begins with the hardware and I'm going with this parts list:

  • 1045 Propellers x4
  • REES52 DJI F450 Frame
  • REES52 CC3D F1 Flight Computer
  • REES52 1000KV A2212 30A Brushless Motor x4
  • REES52 SimonK 30A Electronic Speed Controllers
  • SunRobotics 2200mAh 3S 35C LiPo Battery
  • FlySky FSCT6B Computer Transmitter

The amount of embedded systems tech in these components is astounding. Each motor is driven by an ESC, which is essentially a full-fledged 8-bit SoC. The CC3D Flight Computer is an engineering marvel in its own right - it comes with a 32-bit/72MHz flashable microcontroller, an MPU6000 gyroscope/accelerometer for the IMU, and has expansion ports for GPS and Telemetry data. Even the battery is remarkable - it's capable of discharging 850W at peak performance.

What I'm interested in, however, is controlling the CC3D in real-time using an Arduino. Typically, the CC3D is connected to a radio receiver which is wirelessly linked to a transmitter remote operated by hand. The receiver is interfaced to the CC3D by outputting a PWM signal on each of the 6 channels when the transmitter is operated. Each channel is usually dedicated to only one axis of movement. For example, there are four dedicated channels for throttle control, yaw, roll, and pitch.

The idea behind this is that one day, I will have a lightweight computer (anyone willing to donate an NUC?) on the quadcopter doing SLAM in real-time. The computer will send throttle, pitch, yaw, and roll commands to the Arduino. The Arduino in turn will feed the CC3D with control signals to execute the maneuver plan. It all may seem like a Rube Goldberg machine, but it's a lot easier than building my own flight computer to do the same.

To kick things off, I had to find the PWM frequency the receiver uses for controlling the ESC. To do this, I connected a pin from the output channel of the receiver to an ad-hoc Arduino Leonardo oscilloscope. Using the Processing library, we can get a 0-5V reading of the value on the A0 analog input of the Arduino.
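
The Arduino side of that ad-hoc oscilloscope only needs a few lines; something along these lines streams raw readings for the Processing sketch to plot. The pin and baud rate are simply what I would assume for this setup.

// Arduino side of the ad-hoc oscilloscope: stream raw A0 readings over serial
// for a plotter (e.g. a Processing sketch) to draw. Pin and baud rate are just
// what I'd assume for this setup.
const int kProbePin = A0;

void setup() {
  Serial.begin(115200);
}

void loop() {
  // analogRead() returns 0-1023 for 0-5 V on the Leonardo.
  Serial.println(analogRead(kProbePin));
}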

Capture2
Not a bad start! The peaks are a pretty clear sign that it's a PWM signal with a low duty cycle. Using a ruler and some more code, I found that the peak-to-peak time is about 20ms, which works out to a PWM frequency of 50Hz. This is in line with some websites speculating that it works similarly to servo control.

The length of time at the peak gives us the signal we need to send using the Arduino. With some more analysis of the waveform, it looked like the pulses were about 1-2ms long. A bit more research yielded that this is very similar to the waveform used for controlling servos. Fortunately, the Arduino has a Servo library meant just for this purpose. Using the writeMicroseconds() function, I got a pretty similar waveform:

Capture7
I would go as far as to say that this is an even cleaner signal than what the receiver outputs. I needed a 5V->3.3V voltage divider for the output, so I used 3 identical resistors in series to do the job.

The next thing was to get the CC3D to accept the input. For this it requires manual calibration of the transmitter. In LibrePilot, the procedure is quite straightforward - you need to shift the sticks through their minimum, maximum, and neutral outputs to calibrate the CC3D for your transmitter. To do this, I mounted a push button on the Arduino which cycles through 3 states each time it is pressed - 1100µs, 1500µs, and 1900µs. Not very elegant. Then again, if it's stupid but works, it ain't stupid.
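
The sketch looked roughly like the following; this is an illustration of the idea rather than the exact file in the repository linked below, and the pin choices are assumptions.

// Rough sketch of the three-state throttle idea: each button press cycles the
// pulse width between 1100, 1500, and 1900 us. Not the exact code from the
// repository linked below; the pin choices are assumptions.
#include <Servo.h>

const int kButtonPin = 2;  // push button to ground, using the internal pull-up
const int kSignalPin = 9;  // goes to the channel input being calibrated
const int kPulseWidths[] = {1100, 1500, 1900}; // microseconds

Servo channel;
int state = 0;
bool lastPressed = false;

void setup() {
  pinMode(kButtonPin, INPUT_PULLUP);
  channel.attach(kSignalPin);
  channel.writeMicroseconds(kPulseWidths[state]);
}

void loop() {
  bool pressed = (digitalRead(kButtonPin) == LOW);
  if (pressed && !lastPressed) {  // react only to the press, not the hold
    state = (state + 1) % 3;
    channel.writeMicroseconds(kPulseWidths[state]);
    delay(50);                    // crude debounce
  }
  lastPressed = pressed;
}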

I will be building the rest of the quadcopter once the parts arrive. You can find the code for the Arduino part here: https://github.com/shortstheory/quadcopter-experiments/blob/master/tristate.ino

]]>
<![CDATA[Reading List]]>

Recently, I've been trying to read more long form literature. It's been an interesting experience to wane myself off of social media and that experience certainly deserves a post of its own soon. Until then, here are some of my thoughts on the books I finished reading this year:

The

]]>
http://localhost:2368/2017/03/12/reading-list/5b726ff8231bbd18297eafd3Sun, 12 Mar 2017 18:20:00 GMT

Recently, I've been trying to read more long form literature. It's been an interesting experience to wean myself off of social media and that experience certainly deserves a post of its own soon. Until then, here are some of my thoughts on the books I finished reading this year:

The Checklist Manifesto - Atul Gawande

The title gives away the premise of the book - which is about the art of managing and creating efficient checklists. This may not exactly sound like a jaw-clenching thriller, but Atul Gawande manages to keep the pages turning with interesting anecdotes about maintaining checklists. A surgeon by profession, Gawande recounts instances of how his carefully curated surgery checklist saved lives in the operating theater. As an aviation geek, I enjoyed the story of how creating a pre-flight checklist for the Boeing 299 (now known as the Boeing B-17) saved Boeing from the brink of bankruptcy in the fiercely competitive aviation market of the 1940s. There are narratives about the flip side too. For example, verbose checklists with too many steps for the user to follow may do more harm than having no checklist at all.

For the most part, the book reads as a collection of anecdotes of varying quality. Pointers on creating a good checklist are scattered too thinly throughout the book to be useful. That said, my opinion of this book is probably not the best one to go by, as I finished it over a period of months which broke its continuity for me.

Freakonomics: A Rogue Economist Explores the Hidden Side of Everything - Levitt and Dubner

I've seen this book practically everywhere. I've heard praises sung about it at length from friends. I've even been listening to the Freakonomics Radio podcast for quite a while. But it was only last month that I picked up this book. The cover makes bold claims such as "...Explores The Hidden Side of Everything" along with some terms of endearment from Malcolm Gladwell. Coming to the content, the book covers a range of bizarre topics such as 'What do schoolteachers and sumo wrestlers have in common?' and 'Why Do Drug Dealers Still Live with Their Moms?'. Which is great, because the book draws convincing conclusions on these two topics. However, the same cannot be said for some of the other chapters, such as 'What Makes a Perfect Parent?' and the discussion of how a child's name is detrimental to their future prospects. I feel the book doesn't spend enough time establishing how a certain cause leads to an effect in these chapters. The authors do include a pretty long list of citations for each fact at the end of the book (nearly as long as the content itself!), so their claims might be substantiated after all.

Russian Roulette - Anthony Horowitz

It took me forever to find this book. I used to be an avid fan of Alex Rider in school, but I lost track of the series in my dry spell of reading in 11th and 12th. Russian Roulette is something of a spin-off book, set somewhere in the middle of Stormbreaker in the Alex Rider timeline. The book is all about Yassen, a key character in the Alex Rider series. Like most Anthony Horowitz books, it's not a particularly well-written book. Rather, it keeps one engaged with a fast (but somewhat predictable) plot. I liked some of the parts from Yassen's past in Russia, though the rest read like a rehash of Scorpia. Still, I finished this book much faster than I usually take to finish a book of its length.

Bird by Bird - Anne Lamott

In this book, Anne Lamott speaks about the joys and frustrations of the solitary world of a writer. From the outset, I loved this book. Lamott's writing style is hilarious and is loaded with dry wit and sarcasm. As with The Checklist Manifesto, the book follows tangents with no central plot to speak of. On my first read of Bird by Bird, I couldn't help feeling that this book would read better as a blog than a novel. Though it is of a much higher quality than a typical blog, the rant-filled content would slide perfectly into that format.

Lamott also peppers the book with some very quotable passages. I won't spoil it here, but it's a must read for anyone in the publishing industry or the writing business.

]]>
<![CDATA[Blog Migration Complete!]]>

Now I finally have my very own domain name! The old Blogger site is still available if anyone wants to see it but for all future intents and purposes, this will be the place where new blog posts will be put up. While I could configure all traffic to the

]]>
http://localhost:2368/2017/02/05/blog-migration-complete/5b71d40cdd62db5a3cfa038bSun, 05 Feb 2017 18:20:00 GMT

Now I finally have my very own domain name! The old Blogger site is still available if anyone wants to see it but for all future intents and purposes, this will be the place where new blog posts will be put up. While I could configure all traffic to the Blogger site to redirect to this one, the old site has grown on me so much over the years that I feel that it would be a shame to hide it that way.

I had to migrate from Blogger because it is more than evident that Blogger has been getting less love from Google than it deserves. Take, for instance, the Blogger web-based post editor - a shining example of an undeveloped relic among Google's products. Posts never autosaved or had any manual version control, and a few misplaced keypresses could cause you to lose all your writing progress. Inserting images is a chore and putting up more complicated parts of text such as code blocks and sub-headings is even worse.

Of course, as with most things in software development, there are workarounds for everything. The workaround which I had used for the last few blog posts was to write directly in Markdown and then convert it to HTML with pandoc to copy-paste into the Blogger editor. Even so, this is clearly sub-optimal and I was spending more time wrestling with the Blogger editor to make my pages look good than I was spending writing actual content. The Blogger theme which I was using appealed to me as a 13 year old (heck, it still appeals to me), but it was growing long in the tooth and only had basic support for responsive design and mobile devices. With all these things in mind, I started looking for a new home for my blog somewhere in the middle of last December on a break from college.

I first looked at WordPress. At first, it seemed like my search for a new blogging platform would stop here. Open-source, a great browser editor, a cohesive Android app, direct editing in Markdown, local installation for testing, and support for plugins made WordPress everything Blogger wasn't. I loved the number of themes and the customisation WordPress provided. But the love was short lived, and it ended when I started researching hosting options for WordPress. Most solutions required me to rent a web server on a monthly basis and I had no idea what tier of server to get as my blog had only recently seen a huge surge in pageviews. Not to mention, the cost of maintaining a website with such a setup was by no means cheap. This is when I started asking my geeky friends about how they maintained their own personal websites.

The talks were very helpful; they made me realise that a dynamic blogging solution such as WordPress was overkill for a humble blog of fewer than 30 published posts like this one. Having a static site made so much more sense. I could write directly in Markdown, in the text editor of my choice. Not to mention, it made the workflow of writing a blog post just like writing code: make, git commit, and git push straight to GitHub Pages. GitHub Pages, by the way, offers completely free hosting for static sites. This means that the only thing I would need to pay for would be the custom domain name, a nominal ₹700 ($10) a year, half of what I would be paying for a single month of paid hosting. Plus, it would let me get my hands dirty with a bit of web development, something which I had pushed back for a long time.

I decided to start this - what I knew was going to be a painful task - at about 11pm on a night I just knew I wouldn't be able to sleep.

The first step was to convert all my Blogger posts to Markdown. There were some tools online but all of them messed up the conversion pretty badly. After some more digging, I ended up using Aaron Swartz's html2text Python library, which did a better job than the other solutions at generating some useful Markdown. I still needed to edit every generated Markdown file by hand to make it something I would be happy with using on my site. I then had to export all the images I had on my Blogger site. This led to a few more laborious hours of saving each image on the site by hand (Right Click -> View as Image -> Save As). It did cross my mind to automate everything with a script, but it was going to take more time to automate everything and check if the automation was working than it was to do the grunt work of pulling the images. With all the resources safely on my laptop and backed up on my Dropbox, I took the next step of looking at static site generators to convert my lovingly handmade Markdown files to HTML.

GitHub Pages seemed to heavily advocate Jekyll so I went with it first. With some tinkering to get the Ruby dependencies installed and the posts adapted for Jekyll with the Front Matter content, I managed to get a pretty presentable blog running on localhost:4000/ at 5am that day. With a quick push to my github.io site, I decided to call it a night and slept off a sleep-deprived session of hacking.

The next few days I played with some more Jekyll themes and found that there were many things I didn't like about it. For one thing, it was written in Ruby which I have no experience with. Themes didn't look easy to work with and there was no native support for tags (there is a workaround for this, but due to my lack of Ruby-fu, it all looked terribly arcane to me). I then put the blog migration on the back burner for a while to work on projects at my college's Automation & Robotics Club.

A few weeks later, I took a look at the blog project with some new perspective. I started by poking around for alternatives to Jekyll. There was one such alternative which ticked all my boxes - a static site generator called Pelican. As WordPress looked inherently superior to Blogger, Pelican looked inherently superior to Jekyll for what I wanted to do with it. For example, it had built-in support for tags, had a theming engine, supported Markdown and reStructuredText, and had several easy to install plugins. Above all, Pelican is written in Python which made it so much easier for me to mess around with it. There were some more modifications to make to the Markdown files (particularly with the post metadata), but it was so trivial that it didn't pain one bit to modify all the files. Not too long after I settled on Pelican, I found a theme which made my blog look exactly how I wanted it to look. The Pure Single theme also has nifty support for custom sidebar images.

There was some initial trouble with setting a blog subfolder in the site and getting images to work on some auto-generated sites (such as the Tags and Categories pages). It later turned out that it was some problem with localhost/ not finding the paths correctly to the images and the site was totally fine when published to the GitHub Pages site. After only three days of using Pelican, I had something which I was willing to show off. The next step was much more straightforward for a change - registering a domain name. I looked into a few options such as GoDaddy, Hover, and Namecheap. Namecheap had positive reviews (unlike GoDaddy) and was the cheapest of the lot. The site configuration to serve pages from GitHub's servers was not more than a 10 minute procedure, and I finally had the site you are reading this article on right now.

There will be a lot more changes coming up on this blog, some of them aesthetic and some functional. I'm also probably going to change the name of this blog sometime soon, to something which is more reflective of my current sensibilities.

]]>
<![CDATA[What I've been upto]]>

This semester has gone a little too fast for my liking. Though there were plenty of good memories and takeaways from this semester, it has been by far my most stressful time in this college yet. Between taking on a number of technical projects and trying to rescue a sharply

]]>
http://localhost:2368/2016/12/19/what-ive-been-upto/5b726fa4231bbd18297eafcfMon, 19 Dec 2016 05:39:00 GMT

This semester has gone a little too fast for my liking. Though there were plenty of good memories and takeaways from this semester, it has been by far my most stressful time in this college yet. Between taking on a number of technical projects and trying to rescue a sharply falling SG due to a poor T1, there has been very little free time this sem. Above all, this sem has shown me just how limited a resource brain power is. Thinking about solving problems wears me out much more easily than it used to.

That's not to say what I've been doing is all drudgery. Some interesting projects that I've been working on:

GSoC 2016 Project - kio-stash

Yup, this project has been in the pipeline for months. While it (mostly) works on a clean install of KDE, it has some bugs with copying with mtp:/ device slaves and isn't very well integrated with Dolphin yet. It is in my best interest to have this shipped with KDE Frameworks as soon as possible, so I'm looking into patching Dolphin with better, more specific action support for my project.

IEEE Signal Processing Cup 2017 - Real-Time Beat Tracking Challenge

The premise of this project doesn't seem very complicated - a team just needs to submit an algorithm which can successfully detect the beats in a song in real-time on an embedded device. Heck, I had already implemented the same on an Arduino with a microphone for rather fancy lighting in my hostel room. Unfortunately, as I've learned the hard way, anything that looks simple is deceptive and this project is a perfect example. For one thing, a song by itself in the amplitude-time domain is nearly impossible to extract any data from, so it requires a (Fast) Fourier Transform to extract frequency bin amplitudes before anything useful can be done with it. Next, beat onset detection requires generating a tempogram to find the BPM of the song and, from that, the approximate beat positions. There are many interesting approaches and different algorithms to solve this problem. Fortunately, I'm not doing this alone and the seniors I'm working with have a better understanding of the more difficult mathematical components of this project, leaving me free to code in C++ and deploy it on actual hardware.

foss@bphc

This project is a society on campus I started with three programmers [far](https://github.com/Nischay-Pro/) more competent than myself. It all started in September this year when we had the two-day long Wikimedia Hackathon on campus. Though my contribution to Wikimedia was small, I got a lot out of the hackathon by meeting the people from Amrita University who came to organise the event. Listening to what Tony and Srijan had to say about how much working for foss@amrita had benefited them made me realise how much room for improvement there was in BPHC. As far as this society goes, things are still in their infancy, yet I feel there is just enough critical mass of talent and enthusiasm in open source development to push this society forward. We even have our own Constitution!

Making a personal website?

While Blogspot has been a faithful home for this blog for over half a decade, it is time to move to a modern solution for hosting. Plus setting up a website will teach me a bit about web development too, and I've needed an excuse to learn it for a while. Surely it can't be all that hard, right?

Burnout

While taking on this slew of technical projects seemed like a good idea at the time, it has pushed me closer to burning out. It is the same feeling I had at this time last year, albeit for different reasons.

Some of it has carried over from studying harder at the end of this sem than I had done before. To my surprise, studying under pressure was actually more effective than studying without it. In some ways, it was a throwback to studying for the barrage of exams nearly a couple of years ago. It has made me question whether any of my education was as much for the knowledge as it was for the desperation of avoiding failure. I still haven't figured it out, but at least I feel like I'm not as averse to putting in the hours for getting a decent CGPA as I was last semester.

There have been compromises in other spheres of college life as well; investing so much time in studying or doing projects has made me cut down my participation in other clubs, and I've quit nearly all the non-technical clubs I joined last year. There's more than that too - I feel it's much harder to be fully immersed in anything anymore.

Maybe it's just something else which requires more soul searching.

]]>
<![CDATA[GSoC Report - Wrapping up GSoC 2016]]>

That’s it. After a combined total of 217 git commits, 6,202 lines of code added, and 4,167 lines of code deleted, GSoC 2016 is finally over.

Screenshot_20160830_014801
These twelve weeks of programming have been a very enriching experience for me and making this project has taught me a

]]>
http://localhost:2368/2016/08/30/gsoc-report-wrapping-up-gsoc-2016/5b726e90231bbd18297eafcbMon, 29 Aug 2016 20:15:00 GMTGSoC Report - Wrapping up GSoC 2016

That’s it. After a combined total of 217 git commits, 6,202 lines of code added, and 4,167 lines of code deleted, GSoC 2016 is finally over.

GSoC Report - Wrapping up GSoC 2016
These twelve weeks of programming have been a very enriching experience for me and making this project has taught me a lot about production quality software development. Little did I know that a [small project](https://github.com/shortstheory/filetray-very-early-alpha-idea-thingy) I had put together in a 6 hour session of messing around with Qt would lead to something as big as this!

There have been many memorable moments throughout my coding period for the GSoC - such as the first time I got an ioslave to install correctly, writing its "Hello World" equivalent, and getting a basic implementation of the project up and running through a series of dirty hacks on Dolphin's code. There were also times when I was so frustrated with debugging this project that I wanted to do nothing but smash my laptop's progressively failing display panel with the largest hammer I could find. The great mentorship from my GSoC mentor and the premise of the GSoC program itself kept me going. This also taught me an important lesson with regards to software development - no one starting out gets it right on their first try. It feels like, after a long run of not quite getting the results I wanted, the GSoC is the thing which finally worked out for me, as everything just fell into place.

There’s a technical digression here, which you can feel free to skip through if you don’t want to get into the details of the project.

Following up from the previous blog post: with the core features of the application complete, I had moved on to unit testing my project. For this project, unit testing involved writing test cases for each and every component of the application to find and fix any bugs. Despite the innocuous name, unit testing this project was a much bigger challenge than I expected. For one thing, the ioslave in my project is merely the controller in the MVC system formed by the virtual Stash File System, the Dolphin file manager, and the KIO slave itself. Besides, most of the ioslave's functions have a void return type, so simply feeding the slave's functions arguments and checking their output was not an option either.

This led me to use an approach, which my mentor aptly called “black box testing”.

In this approach, one writes unit tests that perform a specific action and then check whether the effects of that action are as expected. In this case, the ioslave was tested by giving it a test file and then applying some of the ioslave's functions such as copy, rename, delete, and stat. From there, a bunch of QVERIFY calls check whether the ioslave has completed the operation successfully. Needless to say, this approach is far more convoluted to write unit tests for, as it requires checking each and every test file for its properties in every test case. Fortunately, the QTestLib API is pretty well documented so it wasn't difficult to get started with writing unit tests. I also had a template of what a good test suite should look like thanks to David Faure's excellent work on implementing automated unit testing for the Trash ioslave. With these two tools in hand, I started off with writing unit tests shortly before the second year of college started.
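
The pattern, in miniature, looks something like the test below. It uses plain QFile operations as a stand-in for the real KIO job calls, so the class and file names here are illustrative rather than lifted from the actual test suite.

// Illustration of the "black box" pattern with QTestLib: perform an action,
// then QVERIFY its observable effects. QFile::copy stands in here for the KIO
// job that the real test suite drives, and the class/file names are made up.
#include <QtTest>
#include <QFile>
#include <QFileInfo>
#include <QTemporaryDir>

class BlackBoxTest : public QObject
{
    Q_OBJECT
private Q_SLOTS:
    void copyLeavesSourceIntact()
    {
        QTemporaryDir dir;
        QVERIFY(dir.isValid());

        const QString source = dir.filePath(QStringLiteral("original.txt"));
        const QString copy = dir.filePath(QStringLiteral("stashed.txt"));

        // Arrange: create a test file to operate on.
        QFile file(source);
        QVERIFY(file.open(QIODevice::WriteOnly));
        file.write("hello");
        file.close();

        // Act: the operation under test (a KIO copy job in the real suite).
        QVERIFY(QFile::copy(source, copy));

        // Assert: check the effects of the action, not a return value.
        QVERIFY(QFile::exists(copy));
        QVERIFY(QFile::exists(source)); // the source must be untouched
        QCOMPARE(QFileInfo(copy).size(), QFileInfo(source).size());
    }
};

QTEST_GUILESS_MAIN(BlackBoxTest)
#include "blackboxtest.moc" // the file name here is assumed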

As expected, writing black box unit tests was a PITA in its own right. The first time I ran my unit tests, I came up with a dismal score of 6 passed out of the 17 I had written. This led me to go back and check whether my unit tests were testing correctly at all. It turned out that I had made so many mistakes with writing the unit tests that an entire rewrite of the test suite wasn't unwarranted.

With the rewrite of the test cases completed, I ran the test suite again. The results were a bit better - 13 of the 17 test cases passed, but 4 failing test cases were reason enough for the project to be unshippable. Looking into the issue a bit deeper, I found that all the D-Bus calls to my ioslave for copy and move operations were not working correctly! Given that I had spent so much time on making sure the ioslave was robust, this was a mixed surprise. Finally, after a week of rewriting and, to an extent, refactoring the rename and copy functions of the ioslave, I got the best terminal output I ever wanted from this project.

GSoC Report - Wrapping up GSoC 2016

Definitely the highest point of the GSoC for me. From there on out, it was a matter of putting the code on the slow burner for cleaning up any leftover debug statements and for writing documentation for other obscure sections. With a net total of nearly 2000 lines of code, it far surpasses any other project I’ve done in terms of size and quality of code written.

At some points in the project, I felt that the stipend was far too generous, for many people working on KDE voluntarily produce projects much larger than mine. In the end, I feel the best way to repay the generosity is to continue with open source development - just as the GSoC intended. Prior to the GSoC, open source was simply an interesting concept to me, but contributing a couple of thousand lines of code to an open source codebase has made me realise just how powerful open source is. There were no restrictions on when I had to work (usually my productivity was at its peak at outlandish late night hours), on the machine I used for coding (a trusty IdeaPad, replaced with a much nicer ThinkPad), or on the place where I felt most comfortable coding from (a toss up between my much used study table and the living room). In many ways, working from home was probably the best environment I could ask for when it came to working on this project. Hacking on an open source project gave me a sense of gratification that solving a problem in competitive programming never could.

The Google Summer of Code may be over, but my journey with open source development has just begun. Here’s to even bigger and better projects in the future!

]]>
<![CDATA[GSoC Update: Tinkering with KIO]]>

I'm a lot closer to finishing the project now. Thanks to some great support from my GSoC mentor, my project has turned out better than what I had written about in my proposal! Working together, we've made a lot of changes to the project.

For starters, we've changed the name

]]>
http://localhost:2368/2016/07/21/gsoc-update-tinkering-with-kio/5b7261fc231bbd18297eafb8Thu, 21 Jul 2016 09:48:00 GMTGSoC Update: Tinkering with KIO

I'm a lot closer to finishing the project now. Thanks to some great support from my GSoC mentor, my project has turned out better than what I had written about in my proposal! Working together, we've made a lot of changes to the project.

For starters, we've changed the name of the ioslave from "File Tray" to "staging" to "stash". I wasn't a big fan of the name change, but I see the utility in shaving off a couple of characters in the name of what I hope will be a widely used feature.

Secondly, the ioslave is now completely independent from Dolphin, or any KIO application for that matter. This means it works exactly the same way across the entire suite of KIO apps. Given that at one point we were planning to make the ioslave fully functional only with Dolphin, this is a major plus point for the project.

Next, the backend for storing stashed files and folders has undergone a complete overhaul. The first iteration of the project stored files and folders by saving the URLs of stashed items in a QList in a custom "stash" daemon running on top of kded5. Although this was a neat little solution which worked well for most intents and purposes, it had some disadvantages. For one, you couldn't delete and move files around on the ioslave without affecting the source because they were all linked to their original directories. Moreover, with the way 'mkdir' works in KIO, this solution would never work without each application being specially configured to use the ioslave which would entail a lot of groundwork laying out QDBus calls to the stash daemon. With these problems looming large, somewhere around the midterm evaluation week, I got a message from my mentor about ramping up the project using a "StashFileSystem", a virtual file system in Qt that he had written just for this project.

The virtual file system is a clever way to approach this, as it solves both of the problems with the previous approach right off the bat - mkdir can be mapped to a virtual directory, and volatile edits to folders are now possible without touching the source directory. It does have its drawbacks too - as it needs to stage every file in the source directory, it requires a lot more memory than the previous approach. Plus, it would still be at the whims of kded5 if a contained process went bad and crashed the daemon.

Nevertheless, the benefits in this case far outweighed the potential cons and I got to implementing it in my ioslave and stash daemon. Using this virtual file system also meant remapping all the SlaveBase functions to corresponding calls to the stash daemon, which amounted to a complete rewrite of my code. For instance, my GitHub log for the week of implementing the virtual file system showed a sombre 449++/419--. This isn't to say it wasn't productive though - to my surprise, the virtual file system actually worked better than I hoped it would! Memory utilisation is low at a nominal ~300 bytes per stashed file and the performance in my manual testing has been looking pretty good.

With the ioslave and other modules of the application largely completed, the current phase of the project involves integrating the feature neatly with Dolphin and for writing a couple of unit tests along the way. I'm looking forward to a good finish with this project.

You can find the source for it here: https://github.com/KDE/kio-stash (did I mention it's now hosted on a [KDE repo](https://quickgit.kde.org/?p=kio-stash.git)? ;) )

]]>