Android 4.4 introduced ART, the Android Runtime. The big difference between this runtime and Dalvik is speed, and lots of it! KitKat also aims to be more efficient on devices with less RAM, and these days the Galaxy Nexus qualifies. Before starting out with the CyanogenMod 11 nightlies I was using an app to set oom_adj in an effort to keep my work email from getting terminated whenever I opened Firefox on my device. With 4.4 this is no longer necessary, and coupled with ART it feels like I have a brand new phone again.
For example, scrolling and switching between apps is noticeably smoother. Application startup is faster. Apps react faster and more fluidly than before. The battery seems to be lasting longer too, although I had already made great strides there by disabling Exchange services. Using the Samsung 2100mAh battery, the phone is generally at around 50% charge with plenty of use by the end of the day. This is with my day starting at 4:30am and ending between 8:30pm and 10:00pm. Considering this thing used to gulp down a 3500mAh battery in less than a day, I call that progress.
Big thanks to CM for continuing to support the Galaxy Nexus with new releases. 10.3 was already performing nicely, but 11 is really great on memory-constrained devices!
The Nexus 4 has been revealed and will be out in November. Quad core, twice as much RAM as my Nexus. Want! But I’m not going to be buying it, of course. It’ll be interesting to see how long it takes for my phone to get Android 4.2.
Speaking of upgrades, Android 4.1.2 was released and promptly downloaded to my Nexus. My impression is that it keeps more app memory free than 4.1.1. The changelog shows a lot of fixes and minor changes. The upgrade was trouble-free.
The longest I ever kept a phone was four years, an old Motorola W755 flip-phone. A truly basic phone, it was built like a tank. Eventually I opted to upgrade the device…thereby beginning the era of “shit that doesn’t work quite right.” One phone constantly took pictures of my pocket. Another wouldn’t work with the bluetooth in my car. I kept going back to the old W755, until the battery began to quit. So I kept using the pocket-picture phone, until work decided to change their cell phone policy. Now I wouldn’t need to carry a work-issued BlackBerry *and* a personal phone: I could simply configure my personal device to receive work email and be reimbursed. I wound up finagling it such that I ported my number to Google Voice and obtained a SIM card on the work plan, so now I have no cellular bill at all.
The phone I opted for, after flirting with getting a really cheap phone, was the Samsung Galaxy Nexus. After installing a 3850mAh battery, I can’t find anything I dislike about this device. It’s fast, runs the latest Android OS, works with my car stereo, and doesn’t take pictures when it shouldn’t. It’s SIM-unlocked, giving me my choice of carriers should I need to switch later. It has NFC; I don’t expect to use this for a while, but if the technology catches on I’ll be ready. It has also turned me into a phone junkie, and I’m always lurking on Android forums and Engadget. But then I started to think…could I make *this* phone last four years? Hell, how long can a smartphone be kept up-to-date and usable with the latest software?
So I am going to see just how long I can keep my Nexus. It’s a good candidate for longevity, given that it’s a “Google” phone running pure Android, and it’s popular with Android developers. The hardware itself isn’t that old, but it is lacking in LTE. Until Nexus ‘next’ comes out, it’s pretty much a given that my phone will receive Android updates quickly. I need to establish some ground rules for when my phone is to be retired, so here goes.
The phone is considered DEAD when:
* Any hardware failure outside of warranty that cannot be fixed for $150 or less.
* The following apps cannot be kept at their latest version: Google+, Maps, Navigator, Drive, Facebook, primary banking, security token, work email.
* The phone cannot be upgraded to the latest major Android release within two months, either stock or custom (I’ll note when stock OTA upgrades are no longer available).
* Data becomes unusable on my carrier’s network (e.g., 3G is retired. But really, 2G is still hanging around…).
* I can’t maintain a full day’s moderate usage without charging on a new 3700+mAh battery — including “I can’t find a new battery.”
I bought the phone August 8. So now…we wait a while.
The old file server was a beast. Two 2.4GHz dual-core Opterons, the old 90nm chips. 5GB (don’t ask) of RAM. And…2.7TB of storage, with a big pile of 250GB and 500GB drives mashed together into several arrays that were dumped into a volume group. The whole thing was encrypted just for kicks. There was also an Nvidia 7800GT board hooked up to two wide-screen displays, a PCHDTV card, a USB hub…
All told, it consumed 280W of power at idle. That’s with the CPUs sitting at 1GHz each, the video card in low-power mode, etc. Was it fast? Very. But I never really got around to using the machine as intended, since it had to stay up all the time: it’s a file server. The torrents run on it, mp3s stream off it, videos and data are there, etc. The idea was to use it as a fileserver, and the leftover capacity (of which, let’s be honest, there is way too much) would drive a 3-display FlightGear setup. The stable release of Debian is great for a fileserver. Not so much if you want to run the latest iteration of a game under heavy development. Inevitably, dependencies crop up which are not available in stable. So that never got off the ground (see what I did there?) and it was just sucking down a lot of power for no real reason.
I replaced it…with this.
It’s a Sun Blade 100 with 1GB of RAM and the drive cage from kos-mos sitting atop it. It’s ugly but well thought out. It uses 90W of power, and that’s with three 2TB drives, two 250GB drives used for backups, and the little 20GB IDE drive the system boots off of. With the CPU at 500MHz, it’s not going to break any speed records. The important things were that it have ECC memory (it does) and that it be fast enough to stream at full speed over the wireless setup I have (it is), while using as little power as possible. It runs Debian, naturally, and aside from a few issues using the onboard sungem NIC (replaced with a generic rtl8139 board) it has been issue-free under quite heavy loads. All the PCI slots are full: two SATA cards and the NIC.
This system is just fast enough. There are two easily reachable limits: the PCI bus, which is only 33MHz and 32 bits wide, and the CPU’s capacity to Do Stuff. The bus, with my configuration, seems to tap out a bit shy of 25MB/sec of actual throughput, roughly judged by watching the speed of the RAID-1 resync on the two 250GB drives. At the same time, the RAID-5 XOR operations pretty much max out the CPU at about 20-ish MB/sec (missing the SSE/MMX-optimized routines here!). Throw in software and hardware interrupts and additional traffic from the NIC…and it’s just right. It handles multiple download streams, torrents, aMule, etc without a hiccup. It stays responsive when the arrays are doing monthly parity checks. It streams HD video with all this going on. But a gigabit NIC would be pointless, as would a SATA-II controller; I tried the latter and observed identical performance to the old SATA-I cards. The additional overhead of encryption was also out of the question, but that’s fine…it was more of a ‘because I can’ thing with kos-mos. It’s not a laptop.
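For context, classic 32-bit/33MHz PCI tops out around 133MB/sec on paper, so seeing ~25MB/sec of real disk traffic on a shared, interrupt-laden bus isn’t crazy. A quick back-of-the-envelope sketch (the efficiency figure is just illustrative arithmetic, not a measurement):

```python
# Theoretical peak of a 32-bit, 33 MHz PCI bus vs. the observed throughput.
bus_width_bytes = 4          # 32 bits wide
clock_hz = 33_000_000        # 33 MHz (nominally 33.33 MHz, rounded down here)
peak_mb_s = bus_width_bytes * clock_hz / 1_000_000

observed_mb_s = 25           # rough figure from watching the RAID-1 resync
efficiency = observed_mb_s / peak_mb_s

print(f"theoretical peak: {peak_mb_s:.0f} MB/sec")   # theoretical peak: 132 MB/sec
print(f"bus utilization: {efficiency:.0%}")          # bus utilization: 19%
```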
So here’s to hopefully years of faithful service, and thanks to the Debian team for such a solid SPARC port! And the eventual creation of the FlightGear box…
The other day, I wiped the media PC which was running Windows 7 and installed Ubuntu 10.10. I was tired of everything “just working” and not having to screw around with anything just to watch a Flash video or 1080p movies.
I went with the AMD64 release because I feel like less of a man running a 32-bit OS on 64-bit hardware. Unlike when you put 64-bit Windows on a box, with Linux this typically means your entire userland is 64-bit: all applications including, notably, the web browser. This wouldn’t be a problem except for the Adobe Flash plugin. There is now a 64-bit ‘preview’…again…since it obviously takes close to a decade to get it ready for release. This actually works decently until you try to watch video full-screen. Luxury, right? Wait, what’s this? There’s a 10.2 release candidate! Oh, gee, they only support 32-bit Linux. There’s a shocker.
The irony here is that the 32-bit release-candidate plugin runs faster under nspluginwrapper than the ‘native’ 64-bit plugin. I can actually watch videos fullscreen and such, mostly glitch-free.
But I digress. The biggest change for me since the last time I used plain ol’ Debian unstable on the media PC is that Ubuntu has switched to Pulseaudio as its default audio framework. The last time I encountered it, it couldn’t even play MP3s without weird pauses, static, skipping, and high latency. There were still things to contend with here today, but it’s a lot better than last time.
- Glitchy audio, latency, etc. This is pretty much gone. The login sound stuttered all over the place, but I disable that anyway and everything else works well. Simultaneous MP3 player, Flash, and VLC work nicely.
- IEC958 output works. GIANT CAVEAT: for stereo output. This is still a huge improvement from before, when I was unable to get it to work at all without resorting to ALSA directly.
- VLC/mplayer ac3/dts passthrough…sorta works. I’ve switched to VLC for my media player, and passthrough works with audio output set to ALSA, “Use S/PDIF when available” checked, and the default non-IEC958 device selected. It automagically sends ac3/dts unmodified if no other sound application is using Pulseaudio. Otherwise, it degrades to stereo and plays fine that way. More on this behavior later.
- Mixing multiple apps is trouble free. This is sort of “the point” to Pulseaudio. ALSA can do this too but Pulseaudio is apparently more flexible. For example, applications that support it receive individual volume controls.
- Configuring sound cards is no longer black magic…or it wouldn’t be, if it actually worked right for me. It shows me all the outputs and speaker configurations my card supports, but none aside from stereo and possibly 4.0 surround worked properly. This will be fantastic once it matures. ALSA can be very unfriendly if the default settings do not work for you.
The marriage of ALSA and Pulseaudio seems a lot more harmonious than it was before. ALSA is still your bare-metal interface to the sound card. However, by default in Ubuntu 10.10 it is tied to the Pulseaudio daemon. The applications all send their sound to the daemon, which does its thing, including mixing if necessary, before sending it on to ALSA, where it hopefully becomes the expected sound from your speakers. Selecting ALSA as an output option in programs which support it, such as VLC and Audacious, no longer results in a message stating that the sound device could not be opened (except for the passthrough note above); it gets routed through Pulseaudio anyway.
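The glue for that last part is the ALSA `pulse` plugin from the alsa-plugins package: the default ALSA device is redirected into the Pulseaudio daemon. The configuration amounts to something along these lines (a simplified sketch, not the exact file Ubuntu installs):

```
# Route the default ALSA PCM device and mixer through Pulseaudio,
# so apps that "speak ALSA" end up in the daemon's mixer anyway.
pcm.!default {
    type pulse
}
ctl.!default {
    type pulse
}
```

This is also why picking “ALSA” in a player no longer bypasses Pulseaudio: unless you name a hardware device like `hw:0` explicitly, the default device is the daemon.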
It’s not perfect. Would I prefer straight ALSA? At this point, probably…but I’m used to dealing with it. For regular users I believe this new configuration is much nicer, particularly for those who don’t want to dig around on the command line to get sound to work the way they want. I plan to leave it in place on the media PC and see how it progresses. I look forward to the day when surround sound works properly on my setup, though as I only use the S/PDIF for output this is lower priority for me. I’m less optimistic about the ac3/dts passthrough…but who knows? The rest of my machines running straight Debian will continue to use ALSA for the foreseeable future, but I was pleasantly surprised with Pulseaudio this time around on Ubuntu.
As an aside, in the course of trying to get passthrough to work I did consider getting rid of Pulseaudio. Here is the most concise, least intrusive guide I could find to disabling it instead of removing it: https://help.ubuntu.com/community/UbuntuStudioPreparation#Disabling%20PulseAudio
After I gave up on the Making Home Affordable (ha!) program, I pretty much gave up on refinancing entirely. I guess one benefit of the awful economy, though, is that interest rates remained low and actually continued to drop substantially. When the spread between what I was paying and what was available went over two points…I could no longer ignore it.
The first task was getting quotes. I didn’t want to pay points, and it was important that closing costs be very reasonable. I didn’t want to refinance at all unless cash flow improved markedly and the savings paid for the costs of obtaining the loan within a year. I used Zillow’s rate quote tool. It gave me better rates than the banks I checked, and it was easy to understand what the costs were going to be. I picked the quote with the best 5-year cost and the lowest closing costs and set the process in motion.
I hate to spend money. But here, I thought, is a no-brainer. I’m dropping my rate more than two points. I’m saving almost $300(!) per month. It’ll pay for itself in less than a year and really help with cash flow. In this market, screw equity…as long as I don’t increase my debt I want more money to stay in my pocket.
So the house needs to be appraised. Anyone who has refinanced traditionally lately probably knows where this is headed already! I prep the house, clean from top to bottom. A string trimmer and leaf blower are purchased so the back yard can be navigated without a machete. Tons of work, money spent, and the guy is in and out in like ten minutes. Ok. Now we wait.
Naturally the appraisal comes in later than promised. There was also an unpleasant surprise within: it came in low. Unreasonably low in my opinion, as it amounted to a 17% drop. Foreclosures and short sales are the new normal though, so it is what it is. This brought the loan-to-value (LTV) to 93%.
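Loan-to-value is just the outstanding balance divided by the appraised value. With made-up round numbers (the post doesn’t state the actual balance or appraisal, so these are purely illustrative), the effect of a 17% appraisal drop looks like this:

```python
# Hypothetical figures for illustration only; the real balance and
# appraisal amounts aren't given in the post.
loan_balance = 186_000
expected_value = 241_000                         # hoped-for appraisal
appraised_value = expected_value * (1 - 0.17)    # came in 17% lower

ltv = loan_balance / appraised_value
print(f"LTV: {ltv:.0%}")   # LTV: 93%
```

Crossing 90% LTV is what triggers the cascade described below: pricier rate float-downs and much pricier mortgage insurance.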
This caused a number of things to happen. First, the interest rate. I had been able to float the rate down to 4%. With the low appraisal, that was no longer ‘free’ and I would pay $500 to save about $10 per month. No thanks. So back up it goes to 4.125%. Next, mortgage insurance (PMI). This jumped from $25 on the old mortgage, to $45 anticipating a higher appraisal, to $87 with the new appraisal. Ok, still saving lots of money. A couple of days go by, and the insurer declines the loan for, in the broker’s words, ‘no apparent reason.’ Another company has to be used, and the PMI is now $113. Per month.
Now I’m going to vent a little. My PMI is over five times what it was on the old note, and I am financing the same house with less money and better credit. The reason really has nothing to do with me: it’s the appraisal. Included in the comparables were a foreclosed property and a house down the street listed for a surprisingly low amount. Two days after the appraisal, another house on the street went up for sale, smaller than mine, listed for what I figured my appraisal would come in at.
I’m still saving over $200 per month, which is incredible considering the PMI increase. I’m just miffed that a low-ball appraisal is basically costing me $80 a month.
Using a pair of ISA IDE controllers for a mirror and an ISA NIC is a recipe for sloooooow.
md2 : active raid1 sda3 sdb3
425600 blocks [2/1] [_U]
[====>................] recovery = 24.3% (104064/425600) finish=10.5min speed=508K/sec
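That `finish=` estimate is just the remaining blocks divided by the current resync speed; reproducing md’s arithmetic from the numbers above:

```python
# Reproduce the finish estimate from the /proc/mdstat snippet above.
# md reports blocks in 1KB units and speed in KB/sec.
total_blocks = 425_600
done_blocks = 104_064
speed_kb_s = 508

remaining_kb = total_blocks - done_blocks
finish_min = remaining_kb / speed_kb_s / 60
print(f"finish={finish_min:.1f}min")   # finish=10.5min
```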
Nearly completed, anyway. The way it wound up is that the old WRT54GL with DD-WRT still holds the internet connection. The WRT610N is bridged to it over the LAN, and both the 2.4GHz and 5GHz networks are in use there. The final result is three wireless APs. The G network contains devices that only want to use the internet, like my roommate’s stuff, my work phone, etc. The 2.4GHz N network supports the living room media such as the PC and consoles. Finally, the 5GHz N network contains the stuff in the basement, including the file server.
In the long run, it cost more than I expected to get everything the way I wanted. However, I’m really pleased with the throughput I’m getting on all the various devices. For example, from the basement to the second floor is now 7-8MB/sec! With the use of wireless bridges I was able to avoid putting wireless adapters in everything, so the machines still connect over plain old ethernet.
The only thing I have left to do really, besides cleaning up all the cabling I hastily laid out, is to reflash the WRT610N with DD-WRT once the warranty expires in June. Having freed up the powerline adapters, I’m thinking of doing some VLANs for certain equipment and setting up an old PC in the garage for use when I’m working on the cars. I’m surprised how frequently I need to look something up, which means I have to wash my hands, make sure my shoes aren’t dirty, etc, etc, before heading back inside to take a look.
Almost forgot to mention this. Obviously, US Bank never got back to me about a Home Affordable Refinance. Funny, I’m pretty sure that if I missed a payment I wouldn’t even have to ask them to call me! Anyway, with rates low again I looked into a conventional refinance. I obtained a few quotes; none of them made me want to seal the deal.
First, it seems impossible to drop private mortgage insurance (PMI) because the value of the house has probably decreased. This is actually what makes the refinance really questionable. I can drop my rate nearly 2 points, but mortgage insurance premiums have gone up drastically since I financed the new house in August 2008.
Second, if you have PMI, you can’t drop escrows. That was something else I wanted to be rid of, since the escrow balance is always off and I have that nagging doubt about whether the tax and insurance bills will be paid on time. I would rather pay them myself.
Third, if you don’t want to pay points, the rates are higher. This is obvious of course, but many advertised rates assume points. It was harder than it should have been to find rates that truly came without points, high origination fees, etc.
Put all of this together and it worked out to basically $5000-$6000 out of pocket (or tacked onto the loan) to save $100 a month or less on the payment. Even assuming the cost turned out to be only $4000, it would take 40 months to recoup the expense. Not worth it. Yeah, if I knew for a fact I would be in the house until it was paid off, or ten years or whatever, maybe I would do it. But for me, money in the bank now is worth more than that money in the future.
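The break-even math here is simple: divide the up-front cost by the monthly savings. A one-liner makes the comparison with Home Affordable-level costs concrete:

```python
# Break-even time for a refinance: up-front cost / monthly savings.
def breakeven_months(cost, monthly_savings):
    return cost / monthly_savings

print(breakeven_months(4000, 100))   # 40.0 months: not worth it
print(breakeven_months(3000, 100))   # 30.0 months at Home Affordable-style costs
```

Even the ignored opportunity cost of the cash argues the same way: money spent on closing costs now is worth more than the same money dribbled back over three-plus years.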
I wish the Home Affordable thing had worked out. I have seen that these cost between $2000-$3000 when all is said and done, in which case it would definitely have made more sense to go ahead and refinance. Thanks for nothing, US Bank. Good to see my timely payments are reciprocated.
After deciding I was probably never actually going to get around to running CAT-5 in the new house, at least not in the near term, it was time to do something about the abysmal transfer rates offered by the powerline networking I had been using. This uses the electrical wiring in your home to transmit data. There are similar systems for phoneline and coaxial cable.
Powerline networking promises speeds “up to 85 megabit”; some products promise even higher speeds today. I imagine it’s possible to get decent performance in a small home, between outlets in the same room, or on quiet circuits. With the setup I had, the breaker box in the basement effectively became the “switch” for the powerline network. I had three adapters: one upstairs in the computer room with the router and desktop/gaming machine, one downstairs in the living room for the media PC and consoles, and one in the basement for the servers. Between any two points I never saw speeds over 10 megabit, and it was usually 4-5 megabit, even lower from the living room to the router. Connectivity to the basement was always dependable since it was right underneath the breaker box. The living room PC, however, frequently lost its connection to the internet, which could only be cured by going upstairs, unplugging the powerline adapter, and plugging it back in.
More issues were caused by other electrical devices in the house: mainly lighting, but also cell phone chargers, the microwave, etc. If I turned on the porch light, for example, ping from the living room to the router went from 10ms to bouncing between 30-200+ms with packet loss! Certain cell phone chargers plugged into a particular outlet in the computer room caused connectivity issues. Turning on the desktop fluorescent light would wreck transfer speed to the basement. Etc, etc, etc.
Without running CAT-5 in the walls, the only other real solution is wireless. I have a WRT54GL running DD-WRT upstairs, and with my laptop I noted I still had a good signal in the basement. Rather than buying a bunch of wireless cards, I purchased a Wireless N “gaming adapter”, commonly used to connect Xbox 360 consoles to wireless since Microsoft’s proprietary adapter is ridiculously expensive, like most other 360 accessories. After configuring it for my network, I plugged it into the switch downstairs and, just like that, the machines were on the wireless.
Transfer speed is much improved from upstairs to the basement, fast enough to stream HD video. Removing the third powerline adapter more than doubled transfer speed from the living room to the upstairs router as well. I’m getting 2.5-2.8MB/sec, which is way faster…and that’s just wireless G with WPA2, down two floors and on the other side of the house.
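As a sanity check, 2.5-2.8MB/sec works out to roughly 20-22 megabit, which is about as good as 802.11g gets in practice once protocol overhead eats into the nominal 54 megabit:

```python
# Convert observed MB/sec into megabits and compare against 802.11g's
# nominal 54 megabit signaling rate.
def mbyte_to_mbit(mb_per_sec):
    return mb_per_sec * 8

low, high = mbyte_to_mbit(2.5), mbyte_to_mbit(2.8)
print(f"{low:.0f}-{high:.1f} megabit of a nominal 54")
```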
Next is a rework of the wireless network. I ordered a WRT610N to replace the upstairs router, which should dramatically increase the speed of the adapter in the basement. I plan to use the old router as a wireless bridge in the living room, to avoid purchasing another gaming adapter and to provide service to wireless G devices, keeping the 610N wireless N only.