
Saturday, 12 April 2014

Hurt Locker P2P Lawsuits Move On To Canada

By now, US torrent users are used to the nagging worry that a copyright holder could seek damages against them. Now these mass lawsuits appear to be making the journey to Canada, where Voltage Pictures is seeking the identities of users it claims have pirated the film Hurt Locker. Major ISPs have been subpoenaed, but the number of defendants is not yet known.

Back in early 2010, Voltage Pictures, the makers of the Oscar-winning film Hurt Locker, began suing to uncover the people behind thousands of IP addresses. Voltage Pictures contends that these IPs were spotted downloading the film via torrents. In most cases, judges granted the subpoenas and ISPs had to divulge customer details. Instead of suing outright, Voltage had its legal counsel extract settlements from the defendants.

Now the same scheme is playing out up north. A court in Montreal gave the thumbs up to Voltage and its legal counsel two weeks ago. Today is the deadline for three Canadian ISPs to hand over the records. It’s unclear if the same settlement shakedown is going to happen in Canada, but we have to imagine it will.

Friday, 11 April 2014

extroot


Extroot is a mechanism to move our installation onto an external device such as a USB flash drive or an external hard disk. It's very important, because the MR3020 simply doesn't have enough internal flash to install things such as LuCI (the web interface). After flashing the firmware, we access the MR3020 via telnet or PuTTY. Be sure you understand basic Linux command-line usage before you do this.


how to install extroot


This is my 16GB SanDisk Cruzer, partitioned with MiniTool Partition Wizard Home Edition: an 8GB FAT32 partition (so I can still save data whenever I plug it into a Windows machine), a 100MB Linux swap partition for some additional virtual memory, and the rest of the drive formatted as ext2.


extroot on the TP-Link MR3020


Now proceed to our device and plug in the USB flash drive.

Install the required packages by typing opkg install [package_name]:
  • block-mount
  • kmod-usb-storage
  • kmod-fs-ext4





Now edit the fstab configuration:

vi /etc/config/fstab



config global automount
        option from_fstab 1
        option anon_mount 1

config global autoswap
        option from_fstab 1
        option anon_swap 0

config mount
        option target   /home
        option device   /dev/sda1
        option fstype   ext4
        option options  rw,sync
        option enabled  0
        option enabled_fsck 0

# our 100MB swap partition, enabled
config swap
        option device   /dev/sda2
        option enabled  1

# the ext2 partition we created, mounted read-write with sync on /mnt/usb;
# is_rootfs makes it the external root
config mount
        option device   /dev/sda3
        option target   /mnt/usb
        option enabled_fsck 0
        option options  rw,sync
        option enabled  1
        option is_rootfs 1

After that, press Esc, then type :wq to save the fstab configuration and quit.



Now restart your TP-Link MR3020, either by unplugging the power source or by typing reboot in the console.


Log back in to the console and type df. It will show something like:

rootfs                 6927587      8194   6561947   0% /
/dev/root                 2880      2824        56  98% /rom
tmpfs                    14580        80     14500   1% /tmp
tmpfs                      512         0       512   0% /dev
/dev/sda3              6927587      8194   6561947   0% /overlay
overlayfs:/overlay     6927587      8194   6561947   0% /

The overlayfs line at the bottom was not there before we edited fstab: /dev/sda3 is now mounted as /overlay and backs the root filesystem.








The next step is to copy the existing overlay onto the USB storage:

tar -C /overlay -cvf - . | tar -C /mnt/usb/ -xf -

This pivots the whole /overlay onto the USB storage, which is already mounted from /dev/sda3 at /mnt/usb.




Done. From now on, all further installations will go to the external storage.





ATmega328 is no longer blank; it's loaded with the Arduino bootloader

Blank ATmega328 flashed with the Arduino bootloader

Flashed ATmega circuit built and tested with sketches (this board is named "pixelduino") :-)

My recent experiment with Arduino was flashing a blank ATmega328 with the Arduino bootloader, and it was successful. I have an Arduino UNO (SMD version) loaded with the ArduinoISP sketch to make it act as an ISP programmer.
The Arduino UNO was connected to the blank ATmega328 as an ISP programmer. After flashing, the chip was loaded with the Blink sketch as a test, and it works well.
So now I can avoid using a full Arduino board in some of my final projects.

Thursday, 10 April 2014

Meet 蓝博文 (Bowen Na)

From Bay Area

As of last night at 10:27pm, I am a father. XiaoQin and I picked the name because it works both in Mandarin and English (using the Pinyin Anglicization). The labor was an 18-hour process, but resulted in a C-section because the umbilical cord was wrapped around little Bowen's neck, and this was the safest way to get the baby out. Mother and baby are recovering and bonding just fine.

I've posted more than one picture on Facebook, but this sleep-deprived dad is unable to get Two-Factor Authentication to work on PicasaWeb's plugin for Lightroom, so Google+ users will just have to live with this one picture posted to an account without Two-Factor Auth turned on.

Given that this state of affairs will persist for the foreseeable future, and I don't see myself spamming my blog with baby pictures, just ask me on Facebook to be added to the group devoted entirely to baby-spam.

Wednesday, 09 April 2014

Nvidia announces next-generation 64-bit Tegra K1 SoC with 192 GPU cores

Nvidia unveiled the next-generation version of its Tegra system-on-a-chip tonight at its CES press conference. The new Tegra K1 has two important selling points. The first is that it uses a GPU with 192 CUDA cores based on Nvidia's Kepler GPU architecture, the same used in the desktop GeForce GT 600- and 700-series GPUs. Secondly, some versions of the chip will be the first to ship with Nvidia's custom "Denver" ARM CPU, a 64-bit architecture that supports the ARMv8 instruction set.




Combining its desktop and mobile GPU architectures has been on Nvidia's roadmap for some time now, as we saw at the company's GPU Technology Conference in March of 2013. The difference is that now we have some idea of just how powerful that GPU will be: at 192 CUDA cores, the Tegra K1 has roughly the same raw processing horsepower as a GeForce GT 630 or 635, a low-end dedicated GPU from early last year. Memory bandwidth and throttling will also affect performance, but this gives us a decent idea of where Tegra K1 sits relative to Nvidia's desktop cards.

Nvidia CEO Jen-Hsun Huang spent a fair chunk of time talking about the benefits of using the same GPU architecture across both PCs and mobile devices. Since the Tegra K1 supports the same API levels and hardware features as a full GeForce GPU, game and middleware developers will theoretically have an easier time porting their engines from desktops and game consoles to phones and tablets. Nvidia's current Tegra GPUs don't support newer APIs like OpenGL ES 3.0, so support for the full version of OpenGL 4.4 is a nice leap forward.

The CPU situation is more complicated. There will be two different, pin-compatible versions of the Tegra K1, differentiated by their CPUs. One will use four ARM Cortex A15 cores (plus one power-saving "shadow" core) running at up to 2.3GHz. That's not much different from the CPU configuration used in the current Tegra 4. Only the higher-end version will use the new 64-bit Denver architecture, in a dual-core configuration running at up to 2.5GHz.

Other details about the chip, including the manufacturing process, weren't discussed. In one slide comparing the K1's performance to that of the Xbox 360 and PlayStation 3, Huang noted that the Tegra K1 used just five watts of power, but it's not clear under what conditions we can expect that kind of power draw.

The Tegra K1 is the latest ARM CPU architecture to go 64-bit, but it's not the first: Apple's 64-bit A7 is already shipping in the latest iPhones and iPads, and Qualcomm will be bringing the 64-bit ARM Cortex A53 architecture to market in the mid-range Snapdragon 410. In the server room, AMD plans to bring its first ARM-based Opteron chips to market this year, based on the 64-bit Cortex A57 architecture. 2014 is poised to be the year when 64-bit goes mainstream in ARM devices.

Huang didn't mention when either version of K1 would be available in shipping devices, but AnandTech reports that the A15 version will ship in the first half of this year and the Denver version in the second half. It's a fair bet that we'll learn more at this year's GPU Technology Conference in March.

Tuesday, 08 April 2014

Measuring RAM Speed

Memory RAM Speed - Access Time, Megahertz (MHz), Bytes Per Second

Prior to SDRAM, memory speed was expressed in nanoseconds (ns). This measured the amount of time it takes the module to deliver requested data, so the lower the nanosecond rating, the faster the module. Typical speeds were 90, 80, 70 and 60ns. Older 486 machines may have 80 or 90ns modules; more recent Pentiums will have 60 or 70ns.


MHz Speed    Total Clock Cycles per Second    Nanoseconds per Clock Cycle
66           66,000,000                       15
100          100,000,000                      10
133          133,000,000                      8

(Divide 1 billion by the clock cycles per second to get the nanoseconds per clock cycle.)

Often, the last digit of a memory part number represents the speed; for example, -6 = 60ns.

SDRAM speed is measured in megahertz (MHz). Speed markings on the memory chips may still specify nanoseconds, but in this case the number represents the nanoseconds between clock cycles. To add to the confusion, the markings on the chips don't match the MHz value; the chart above converts between the two.
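The MHz-to-nanoseconds conversion described above is easy to compute; here's a quick sketch in Python (the function name is my own):

```python
# Nanoseconds per clock cycle: one billion divided by clock cycles per second.
def mhz_to_ns(mhz):
    """Convert a clock speed in MHz to nanoseconds between clock cycles."""
    return round(1_000_000_000 / (mhz * 1_000_000))

for mhz in (66, 100, 133):
    print(mhz, "MHz ->", mhz_to_ns(mhz), "ns")
# 66 MHz -> 15 ns, 100 MHz -> 10 ns, 133 MHz -> 8 ns
```

Note that the chart's values are rounded: a 133MHz clock actually has about 7.5ns between cycles, which the chip markings round up to 8.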

To calculate bytes per second you need to know the bus width and bus speed of your PC. The first thing to remember is that 8 bits = 1 byte. If you have a 64-bit bus, then 8 bytes of information can be transferred at one time (64 bits / 8 bits per byte = 8 bytes).

If your bus speed is 100MHz (100 million clock cycles per second) and the bus width is 1 byte, the speed is 100MB per second. With a 64-bit width, the speed is 800MB per second (64 / 8 * 100,000,000).

Rambus module throughput is measured in megabytes per second. Rambus modules run at either 300 or 400MHz, and because they send two pieces of information every clock cycle, you get an effective 600 or 800MHz. They have a 16-bit bus width, or 2 bytes (16 / 8). A 400MHz module's speed is 1,600MB (1.6GB) per second: (400,000,000 * 2) * 2. The 300MHz module provides 1.2GB per second.
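The bandwidth arithmetic in the last few paragraphs can be checked with a short Python sketch (the function name is mine; the factor of 2 transfers per clock covers Rambus's double data rate):

```python
# bytes/sec = (bus width in bits / 8) * clock rate * transfers per clock.
def bytes_per_second(bus_bits, clock_hz, transfers_per_clock=1):
    """Peak memory bus throughput in bytes per second."""
    return (bus_bits // 8) * clock_hz * transfers_per_clock

print(bytes_per_second(64, 100_000_000))     # 800000000  -> 800MB/s SDRAM bus
print(bytes_per_second(16, 400_000_000, 2))  # 1600000000 -> 1.6GB/s Rambus
print(bytes_per_second(16, 300_000_000, 2))  # 1200000000 -> 1.2GB/s Rambus
```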


Monday, 07 April 2014

Microsoft Wheel Optical Mouse

Microsoft Wheel Optical Mouse
Cable - Optical - 3 Programmable Buttons - 1 Scroll Wheel - USB, PS/2

Price: $19.95


Click here to buy from Amazon


Sunday, 06 April 2014

Long-Term Review: Schlage Keypad Locks

It's been three years and a bit since I installed the Schlage keypad locks in the house, and sure enough, the battery on the front door has gone out, which reminded me both to change it and that I needed to write a follow-up review.

Changing the battery turned out to be fairly straightforward. You unscrew the back, and the cover pops off. The battery sits in a bracket and is a standard 9V battery, which is cheap to get from Amazon. When removing the cover, make a note of its orientation and make sure the handle is in the correct orientation when replacing it; otherwise it just won't go back in. Obviously, I use the front door of the house a lot more than the back door, so the battery in the back door is still going strong.

As for the product, I like it so much that I replaced the rental unit's lock with a Schlage unit, my parents' house also now sports one, and my wife's house also has them. I cannot recommend them highly enough, especially if you own rental property --- no more re-keying your unit between tenants, and even better, your tenant will never call you up in the middle of the night after they've locked themselves out, because they can't. It's also great if you're in the habit of exchanging your home with someone else on HomeExchange, or renting out your home on AirBnB. You set up a code, give it to your exchangees or renters, and delete the code when you get back. You can also set up specific codes for house-cleaners and other trusted personnel, and delete those if you ever switch providers.

Home ownership is in general a pain, but being able to replace the standard keyed locks with one of these is definitely a bright spot. Highly recommended.


Saturday, 05 April 2014

Microsoft Reveals SQL Server 2012 Licensing Model News

Licensing costs for SQL Server 2012 won't substantially change except for Client Access Licensing (CAL), which will be higher.

Microsoft unveiled a new licensing and pricing model for its upcoming SQL Server 2012 product family, which is expected in the first half of next year. The new licensing model is based on an organization's computing power, number of users and use of virtualization. Licensing costs won't substantially change compared with SQL Server 2008 R2, except for Client Access Licensing (CAL) costs, which will be about 25 percent higher.

Microsoft SQL Server 2012 (formerly code-named "Denali") promises self-service business intelligence features and other new capabilities when commercially launched. However, organizations still have to figure out complicated licensing considerations and costs. Microsoft attempted to kick-start that effort by publishing its "SQL Server 2012 Licensing Datasheet" document last week, which can be downloaded here.
The company expects to release SQL Server 2012 in the first half of next year. Rob Horwitz, research chair at the independent consultancy Directions on Microsoft, thinks the product may appear sometime in the second quarter.

Edition Changes

SQL Server 2012 will be available in three editions: Enterprise, Business Intelligence and Standard. The Enterprise edition is an all-inclusive product in terms of its features, and Microsoft is positioning it for "mission critical applications and large scale data warehousing" uses. The Business Intelligence edition is a new product offering; it adds BI features while also including all of the features in the Standard edition. Microsoft recommends the Standard edition for "basic database, reporting and analytics capabilities," according to its white paper.

Microsoft rolled much of the SQL Server 2008 R2 Datacenter edition licensing rights into the SQL Server 2012 Enterprise edition, so the old Datacenter edition will disappear as a top product-line offering. Microsoft will offer a Web edition of SQL Server 2012, but only to organizations signing a Service Provider License Agreement. Developer, Express and Compact editions will still be available after the SQL Server 2012 product is released, Microsoft indicated.

Licensing Changes

The biggest licensing change for SQL Server 2012 is Microsoft's shift from counting processors to counting cores (see table). The licensing describes four cores per physical processor as the minimum licensing basis.
SQL Server 2012 Licensing Options. "*Requires CALs, which are sold separately."

Organizations using virtualization with SQL Server 2012 have two licensing options: they can license virtual machines based on core licenses, or based on server plus CALs. Four cores per virtual machine is the minimum licensing requirement. Maximum virtualization (that is, no limit on the number of virtual machines) is available only with the Enterprise edition of SQL Server 2012, with Software Assurance required.

Licensing Costs

The licensing costs stayed the same, decreased or increased; it all depends on how you look at it. Horwitz shared his views in an e-mail, where he laid out the changes in bullet points:

  • "The price of the SQL Server CAL does go up, about 25%."
  • "The per-server license for Standard Edition remains the same price as before."
  • "The per-server license for BI server is the same price as the server license for SQL Server 2008 R2 Enterprise…though this isn't an apples to apples comparison given the difference in SKU features."
  • "The per-core price for SQL 2012 Standard and Enterprise edition is one quarter the price of per-proc licenses for equivalent editions of SQL 2008 R2. So effectively, if you have more than 4 cores per physical processor in the server, your licensing fee goes up."

Paul DeGroot, another Microsoft software licensing expert who now serves as principal consultant of the independent consultancy Pica Communications after working for Directions on Microsoft, offered other insights into Microsoft SQL Server 2012 licensing costs. DeGroot noted that the CAL price increased substantially from $164 to $209 and speculated that Microsoft felt that raising the price of the CALs would have less of an impact on customers than raising server licensing costs.
Still, other price changes were somewhat neutral, he contended. "Overall, I'd say they [the prices] stayed the same or went down, with the reservation that the change from per proc to per core is significant, but may not have a huge impact on a lot of customers, since quad-core procs are probably a common choice for running high-end editions of SQL Server," DeGroot said in an e-mail. He estimated that the price would remain much the same for organizations "so as long as you're using quad-core procs."

Cost considerations largely killed the Datacenter edition of SQL Server 2008 R2, DeGroot contended. "That cost $54,990 per proc, or twice the per proc price of SQL 2008 R2 Enterprise," DeGroot said, adding that "reading between the lines, I'd say that SQL Server 2008 R2 Datacenter sold poorly, and that's not surprising." With the SQL Server 2012 Enterprise edition, "customers will get Datacenter power at half the price that Datacenter was," he explained.
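The per-proc to per-core arithmetic above can be made concrete with a small sketch. The only dollar figure quoted directly is Datacenter's $54,990 per processor; the old Enterprise per-proc price and the new per-core price below are derived from the article's statements (Datacenter cost twice Enterprise per-proc; the 2012 per-core price is one quarter the old per-proc price), so treat them as an illustration, not official pricing:

```python
# Worked example of the per-proc -> per-core shift described above.
old_per_proc = 54990 / 2          # SQL 2008 R2 Enterprise, derived per-proc price
new_per_core = old_per_proc / 4   # SQL 2012 Enterprise, derived per-core price

def license_cost_2012(processors, cores_per_proc):
    """Core-based licensing with the four-cores-per-processor minimum."""
    billable_cores = processors * max(cores_per_proc, 4)
    return billable_cores * new_per_core

# A quad-core processor costs the same as the old per-proc license...
print(license_cost_2012(1, 4))   # 27495.0
# ...but an 8-core processor doubles the fee.
print(license_cost_2012(1, 8))   # 54990.0
```

This is exactly Horwitz's point: with quad-core processors nothing changes, and every core beyond four per socket raises the bill.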

USB: UNIVERSAL SERIAL BUS



Just about any computer that you buy today comes with one or more Universal Serial Bus connectors on the back. These USB connectors let you attach everything from mice to printers to your computer quickly and easily. The operating system supports USB as well, so the installation of the device drivers is quick and easy, too. Compared to other ways of connecting devices to your computer (including parallel ports, serial ports and special cards that you install inside the computer's case), USB devices are incredibly simple!

In this article, we will look at USB ports from both a user and a technical standpoint. You will learn why the USB system is so flexible and how it is able to support so many devices so easily -- it's truly an amazing system!

Anyone who has been around computers for more than two or three years knows the problem that the Universal Serial Bus is trying to solve -- in the past, connecting devices to computers has been a real headache!

  • Printers connected to parallel printer ports, and most computers only came with one. Things like Zip drives, which need a high-speed connection into the computer, would use the parallel port as well, often with limited success and not much speed.
  • Modems used the serial port, but so did some printers and a variety of odd things like Palm Pilots and digital cameras. Most computers have at most two serial ports, and they are very slow in most cases.
  • Devices that needed faster connections came with their own cards, which had to fit in a card slot inside the computer's case. Unfortunately, the number of card slots is limited, and you needed a Ph.D. to install the software for some of the cards.

The goal of USB is to end all of these headaches. The Universal Serial Bus gives you a single, standardized, easy-to-use way to connect up to 127 devices to a computer.

Just about every peripheral made now comes in a USB version. A sample list of USB devices that you can buy today includes:

  • Printers
  • Scanners
  • Mice
  • Joysticks
  • Flight yokes
  • Digital cameras
  • Webcams
  • Scientific data acquisition devices
  • Modems
  • Speakers
  • Telephones
  • Video phones
  • Storage devices such as Zip drives
  • Network connections

In the next section, well look at the USB cables and connectors that allow your computer to communicate with these devices.

USB Hubs

Most computers that you buy today come with one or two USB sockets. With so many USB devices on the market today, you can run out of sockets very quickly. For example, on the computer that I am typing on right now, I have a USB printer, a USB scanner, a USB Webcam and a USB network connection. My computer has only one USB connector on it, so the obvious question is, "How do you hook up all the devices?"

The easy solution to the problem is to buy an inexpensive USB hub. The USB standard supports up to 127 devices, and USB hubs are a part of the standard.

USB Cables and Connectors

Connecting a USB device to a computer is simple: find a USB socket on the back of your machine and plug the USB connector into it.

The rectangular socket is a typical USB socket on the back of a PC.

If it is a new device, the operating system auto-detects it and asks for the driver disk. If the device has already been installed, the computer activates it and starts talking to it. USB devices can be connected and disconnected at any time.



A typical USB connector, called an "A" connection



A typical USB four-port hub accepts 4 "A" connections.

A hub typically has four ports, but may have many more. You plug the hub into your computer, and then plug your devices (or other hubs) into the hub. By chaining hubs together, you can build up dozens of available USB ports on a single computer.

Hubs can be powered or unpowered. As you will see on the next page, the USB standard allows devices to draw their power from their USB connection. Obviously, a high-power device like a printer or scanner will have its own power supply, but low-power devices like mice and digital cameras get their power from the bus in order to simplify them. The power (up to 500 milliamps at 5 volts) comes from the computer. If you have lots of self-powered devices (like printers and scanners), then your hub does not need to be powered -- none of the devices connecting to the hub needs additional power, so the computer can handle it. If you have lots of unpowered devices like mice and cameras, you probably need a powered hub. The hub has its own transformer and supplies power to the bus so that the devices do not overload the computer's supply.
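A rough way to think about the powered-vs-unpowered decision is to add up what your bus-powered devices draw against one port's 500 mA budget. Here's a Python sketch of that reasoning (the device current figures are illustrative assumptions, not measured values):

```python
# A single USB port supplies up to 500 mA at 5 V, so bus-powered devices
# behind an unpowered hub must together stay under that budget.
PORT_BUDGET_MA = 500

def needs_powered_hub(device_currents_ma):
    """True if the bus-powered devices exceed one port's 500 mA budget."""
    return sum(device_currents_ma) > PORT_BUDGET_MA

# A printer and scanner are self-powered, so they draw ~0 mA from the bus.
print(needs_powered_hub([0, 0, 100]))      # False: an unpowered hub is fine
# Several unpowered devices (mouse, webcam, card reader) add up quickly.
print(needs_powered_hub([100, 250, 200]))  # True: get a powered hub
```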

The USB Process

When the host powers up, it queries all of the devices connected to the bus and assigns each one an address. This process is called enumeration -- devices are also enumerated when they connect to the bus. The host also finds out from each device what type of data transfer it wishes to perform:

  • Interrupt - A device like a mouse or a keyboard, which will be sending very little data, would choose the interrupt mode.
  • Bulk - A device like a printer, which receives data in one big packet, uses the bulk transfer mode. A block of data is sent to the printer (in 64-byte chunks) and verified to make sure it is correct.
  • Isochronous - A streaming device (such as speakers) uses the isochronous mode. Data streams between the device and the host in real-time, and there is no error correction.

The host can also send commands or query parameters with control packets.

As devices are enumerated, the host keeps track of the total bandwidth that all of the isochronous and interrupt devices are requesting. Together they can consume up to 90 percent of the 480 Mbps of bandwidth that is available. After 90 percent is used up, the host denies access to any other isochronous or interrupt devices. Control packets and packets for bulk transfers use whatever bandwidth is left over (at least 10 percent).

The Universal Serial Bus divides the available bandwidth into frames, and the host controls the frames. Frames contain 1,500 bytes, and a new frame starts every millisecond. During a frame, isochronous and interrupt devices get a slot so they are guaranteed the bandwidth they need. Bulk and control transfers use whatever space is left. The technical links at the end of the article contain lots of detail if you would like to learn more.
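The admission-control behavior described above can be sketched as a toy model in Python. This is only an illustration of the 90 percent rule, not actual host-controller logic; the class and method names are mine, while the 480 Mbps bus rate and 90 percent cap come from the text:

```python
# Periodic traffic (isochronous + interrupt) may reserve at most 90% of the
# 480 Mbps bus; bulk and control traffic share whatever is left over.
BUS_MBPS = 480.0
PERIODIC_CAP = 0.9 * BUS_MBPS  # 432 Mbps

class Host:
    def __init__(self):
        self.reserved_mbps = 0.0

    def enumerate(self, device_name, transfer_type, mbps_requested):
        """Admit a device at enumeration time; periodic devices must fit
        inside the 90% reservation cap or they are denied."""
        if transfer_type in ("isochronous", "interrupt"):
            if self.reserved_mbps + mbps_requested > PERIODIC_CAP:
                return False  # host denies access
            self.reserved_mbps += mbps_requested
        # Bulk and control devices are always admitted; they get no
        # guaranteed slot and use leftover space in each frame.
        return True

host = Host()
print(host.enumerate("webcam", "isochronous", 400.0))   # True: fits under 432
print(host.enumerate("speakers", "isochronous", 50.0))  # False: would exceed the cap
print(host.enumerate("printer", "bulk", 100.0))         # True: bulk uses leftovers
```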

USB Features

The Universal Serial Bus has the following features:

  • The computer acts as the host.
  • Up to 127 devices can connect to the host, either directly or by way of USB hubs.
  • Individual USB cables can run as long as 5 meters; with hubs, devices can be up to 30 meters (six cables worth) away from the host.
  • With USB 2.0, the bus has a maximum data rate of 480 megabits per second.
  • A USB cable has two wires for power (+5 volts and ground) and a twisted pair of wires to carry the data.
  • On the power wires, the computer can supply up to 500 milliamps of power at 5 volts.
  • Low-power devices (such as mice) can draw their power directly from the bus. High-power devices (such as printers) have their own power supplies and draw minimal power from the bus. Hubs can have their own power supplies to provide power to devices connected to the hub.
  • USB devices are hot-swappable, meaning you can plug them into the bus and unplug them any time.
  • Many USB devices can be put to sleep by the host computer when the computer enters a power-saving mode.
  • The devices connected to a USB port rely on the USB cable to carry power and data.






Inside a USB cable: There are two wires for power -- +5 volts (red) and ground (brown) -- and a twisted pair (yellow and blue) of wires to carry the data. The cable is also shielded.

USB 2.0

The standard for USB version 2.0 was released in April 2000 and serves as an upgrade for USB 1.1.

USB 2.0 (High-speed USB) provides additional bandwidth for multimedia and storage applications and has a data transmission speed 40 times faster than USB 1.1. To allow a smooth transition for both consumers and manufacturers, USB 2.0 has full forward and backward compatibility with original USB devices and works with cables and connectors made for original USB, too.

Supporting three speed modes (1.5, 12 and 480 megabits per second), USB 2.0 supports low-bandwidth devices such as keyboards and mice, as well as high-bandwidth ones like high-resolution Webcams, scanners, printers and high-capacity storage systems. The deployment of USB 2.0 has allowed PC industry leaders to forge ahead with the development of next-generation PC peripherals to complement existing high-performance PCs. The transmission speed of USB 2.0 also facilitates the development of next-generation PCs and applications. In addition to improving functionality and encouraging innovation, USB 2.0 increases the productivity of user applications and allows the user to run multiple PC applications at once or several high-performance peripherals simultaneously.


Friday, 04 April 2014

Review: Ghost Spin

I was all set to buy Ghost Spin, after enjoying Moriarty's previous books Spin State and Spin Control, but the Amazon reviews put me off, so I waited for the library copy. I shouldn't have waited, because the reviews are wrong and Ghost Spin is one of the best novels I've read all year.

It picks up after Spin State and Spin Control, but is a far more ambitious novel. The themes in this novel include the nature of identity (Are you your memories? Are you still you, if you can be replicated multiple times but the different versions of you have different experiences?), the nature of love and consciousness, as well as how we would treat AIs if emergent AIs truly did exist.

The novel starts with Catherine Li's AI husband, Cohen, deliberately committing suicide. His remains are (in accordance with AI traditions) immediately auctioned off. As his widow, Catherine sets off immediately to try to recover and reconstruct her husband, but the path to doing so is filled with obstacles, and she ends up scatter-casting herself through human space as well.

What makes the novel work for a computer scientist are the references scattered throughout that are accurate and interesting. Moriarty clearly does her homework: references to Ada Lovelace, Alan Turing, and Lewis Carroll are all well made and taken in context. Her extrapolation of how an emergent AI would work, and how an AI could die or evolve, is fascinating. For instance, something no other AI-oriented novel ever covers: if your memory were perfect and you were unable to truly forget, wouldn't that drive you crazy? Her characters are also worth caring about, even though some of them do despicable things. One of the main characters in the book (Captain Llewellyn) ends up having to share his brain/body with an AI, and the themes emerge most thoroughly in the conversations he has with himself.

Where the novel fails is in plotting. I really liked the book for the first 20 minutes after putting it down, but then realized that the plot didn't make a lot of sense in retrospect. For Cohen to commit suicide doesn't make sense to me, even at the end of the novel. The big reveals, however, are very fair --- you get plenty of foreshadowing and all the clues needed to put the reveal together yourself.

This novel is not an action-packed one, especially in comparison with Spin State. A lot of the book consists of conversations characters have with each other or even with themselves. And the novel does have the one obvious failure. But the writing, the milieu, and the thorough exploration of fascinating AI themes are more than enough to let me overlook it. If you're a computer scientist who enjoys fiction, this could very much be the perfect novel for you. If not, be prepared for a massive info dump and not quite enough context to fully understand what's going on.

Highly recommended.

Thursday, 03 April 2014

Review: Orange Internet Max (France)

As a very cheap person and proud of it, I rarely run my Nexus One in data mode when I'm at home: I'm usually within wifi range, and I refuse to pay the exorbitant $3/day or $35/month prices that local US providers charge. However, when traveling, I value data plans highly and would be willing to pay that price.

Last year, I had trouble getting even regular voice SIM cards, let alone Internet-capable ones. This year, however, we started our trip in Paris, albeit on a weekend. On Monday, I went to an Orange store and got a prepaid SIM card. It cost EUR 9.95. I bought a 10 EUR refill right away so I could subscribe to the Internet Max plan (which was 9 EUR, but the SIM card only came with 5 EUR of credit, and the minimum refill was 10 EUR). It's an unlimited data subscription plan that's good for a month and automatically turns off if you don't have enough credit to resubscribe!

The worst part of the experience is the part where Orange tries to pretend to be Apple. You walk into the store and are greeted by a pretty woman dressed in an Orange uniform, who will put your name in a queue (driven by an iPad) so you can browse the store until a customer service rep is ready to talk to you. Unfortunately, they got this Apple-emulation strategy wrong: they had too many pretty women and not enough customer service reps, so I ended up cooling my heels for at least 25 minutes before being able to complete an incredibly simple transaction. I would have preferred standing in line like at a normal store.

What an awesome plan it is. Most of the time, the speed is fine, and much faster than the iPhone 4 that I got as part of the home exchange program we participated in. And of course, any Android phone runs circles around the iPhone as a matter of practicality. Being able to get turn-by-turn navigation saved our bacon several times while driving (or walking!) around France. We were also able to tether the phone to the laptop whenever we were at a hotel without internet. Try that with your post-paid plan in the USA for less than $25/month!

The best part is that while Orange will try to charge you separately for e-mail, if you're using an Android phone there's no need to pay for the e-mail plan. That's because the Gmail app on Android uses HTTP requests, so it looks like browser traffic to Orange, rather than the IMAP/POP traffic that Apple products use.

As an aside, after using an iPhone side by side with a two-year-old Nexus One running Android 2.3 (no, I haven't bothered to upgrade the default OS yet, and probably won't --- I'm cheap with my time as well as money), it's no contest. I'd rather have a two-year-old Android phone than an iPhone when I'm in a foreign country and in need of navigation, search, and phone calls.

Recommended. An Orange store should be the first thing you look for when you land in France.

Rabu, 02 April 2014

Dead Island Riptide secured the top of the UK charts

Dead Island: Riptide, the newest zombie game from Deep Silver, held the top of the UK chart in its second week.

Injustice: Gods Among Us reached number two, with Tomb Raider in third, FIFA 13 in fourth, and Dragon's Dogma: Dark Arisen in fifth.

The new entry in the charts was Soul Sacrifice, an RPG for the PS Vita.

The Top 20 UK chart for the week:

1. Dead Island: Riptide
2. Injustice: Gods Among Us
3. Tomb Raider
4. FIFA 13
5. Dragon's Dogma: Dark Arisen
6. Call of Duty: Black Ops II
7. BioShock Infinite
8. LEGO City Undercover: The Chase Begins
9. Luigi's Mansion 2
10. Star Trek
11. Assassin's Creed III
12. Far Cry 3
13. Defiance
14. The Elder Scrolls V: Skyrim
15. LEGO Batman 2: DC Super Heroes
16. God of War: Ascension
17. Need for Speed: Most Wanted
18. Sonic & All-Stars Racing Transformed
19. Grand Theft Auto: Episodes From Liberty City
20. Grand Theft Auto IV

Source: GameSpot

Selasa, 01 April 2014

Adobe’s controversial decision to dump its Flash plugin for mobiles

Web developers are angry with Adobe.

Emotions are running high in the world of professional web design and development, with words such as “shambles” and “betrayal” in common use. This anger is directed at Adobe’s controversial decision to dump its Flash plugin for mobiles. Fellow RWC columnist Kevin Partner summed up these feelings in a recent blog post:

“The irony is that it isn’t Steve Jobs’ famous hatred of Flash that has caused this turnaround – the true villain of the piece is Adobe itself. By abandoning development of Flash for mobile, it eliminates Flash as an option for most websites... Farewell Adobe. Delete.”

After years as a Flash-based developer, Kevin will save himself a lot of money by ditching it for the open standards HTML, CSS and JavaScript. This is serious stuff: designers and developers who built careers around Flash have had their skills rendered worthless, and livelihoods are on the line. Unlike Adobe, these folk still believe in Flash.

Certainly, Flash can be abused in irritating banner ads; it annoyingly asks to be updated twice a week; and (like JavaScript) can crash the machine if poorly coded. However, the widespread prejudice that it’s an unnecessary hindrance to the smooth running of a browser-based web is wrong. Flash has enriched the web enormously with vector graphics, bitmaps, audio, video, interactivity, communications, programmability and now even 3D – today’s web would be unrecognisably poorer without it, and so will that of tomorrow.

The fact that Flash delivers all of this functionality via a browser plugin is actually its greatest strength. Adobe’s Allan Padgett discovered how to make Acrobat Reader render PDFs directly inside Netscape, showed this to Jim Clark and the Netscape Plugin Application Programming Interface (NPAPI) was born, enabling browsers to reserve onscreen space for content rendered by any compliant plugin.

Soon all browsers supported the cross-platform NPAPI, and player-based delivery became the norm: add a new function to the plugin and it’s immediately available to all web users regardless of CPU, OS and browser (and even backwards-compatible with the oldest NPAPI-compliant browsers). Contrast this with the glacial pace of HTML/CSS/JavaScript development, where designers can deploy a new capability only after the slowest browsers and users have finally caught up.

Ironically, the biggest beneficiary of this plugin revolution wasn’t Adobe, with its ability to render PDFs in the browser, but Macromedia, which could render Shockwave Flash (SWF). The reason for this isn’t widely understood: NPAPI doesn’t just support rendering chunks of static content; it can also stream content through a persistent connection.

Macromedia broke free from HTML’s static page-based handling and brought the web to life with streamed content and, better still, such content was automatically protected because it was rendered on the fly and couldn’t be saved. Flash became central to professional web design and the natural extension for HTML. Usage exploded and the Flash player became the one plugin everyone installed. With penetration approaching 100%, developers could assume its presence – almost unnoticed it became “the world’s most pervasive software platform”, with greater reach than any individual browser or operating system.

Macromedia realised that it owned the universal online runtime, and decided to bring to the web the sort of interactive computing experience you could only then get on a desktop PC. In 2002, it released a white paper by Jeremy Allaire that floated the idea of the Rich Internet Application (RIA). Flash would no longer merely extend HTML pages by embedding multimedia content: the player would become the “rich client” supporting standalone, browser-hosted applications that enabled users to do stuff, not just see stuff.

An RIA could be anything from flipping an online magazine page to a virtual shopping mall, from videoconferencing to word processing. This up-shift from add-on to rich client was a major undertaking, and extending the web into a ubiquitous computing platform would step on some important and sensitive toes. Macromedia needed far more serious backing and, after failed talks with Microsoft, in 2005 it was acquired by long-standing rival Adobe.

Adobe extended Flash’s capabilities into its Creative Suite, the open source Flex framework and Flash Builder IDE. In 2008 it launched the Adobe Integrated Runtime (AIR), which made it possible to run Flash-based RIAs offline on a desktop PC.

Nothing but the web

After Adobe, the company most interested in making the web into a universal computing platform was Google. Working well with plugins – with improved security, standardised rendering and separate execution – was central to its own Chrome browser, and Google made these features available to other browser developers via its Pepper Plugin API (PPAPI). It even merged the Flash player directly into the Chrome runtime. The web itself was going to become everyone’s computing platform, and today’s heavyweight operating systems such as Windows and Mac OS would effectively become redundant; with data and applications handled in the cloud, the OS would be needed only for loading the browser and the rich client.

In late 2009, Google announced plans for Chrome OS – a stripped-down, web-only, cloud-focused operating system aimed at netbooks, desktop PCs and a new class of handheld, touchscreen devices called tablets. By the time Google finally released its promised “Chromebook” in June 2011, the response was muted. The advantages were clear enough: low cost and maintenance, fast boot-up, security, and ubiquitous access to your cloud-based content, which was easy to share with collaborators and automatically backed up. The problems were just as clear: who in their right mind would choose ugly-looking and underpowered web applications over smooth, fast native desktop apps?

But this is to miss the significance of that rich client: if I were to use a Chromebook, I wouldn’t use an HTML-based application such as Google Docs; I’d use a Flash-based RIA such as Adobe’s own free Acrobat.com suite (Buzzword word processor, Presentation graphics, Table spreadsheet, Forms Central and ConnectNow web conferencing). Acrobat.com is far from perfect, but it demonstrates the potential for streaming advanced applications. But what about applications with serious data and processing requirements? Surely you can’t do something such as an online Photoshop in Flash? Well, yes and no: check out the Photoshop Express Editor from Photoshop.com (it’s free) and you’ll find it surprisingly powerful for consumer-level photo editing. I can readily imagine a future version becoming Pro capable.

Never forget that your local device doesn’t have to do all the processing; all the heavy lifting can be handled remotely, and all the rich client need do is to stream onscreen activity from that server via a live connection. Do you really believe that your PC can render a 3D animation faster than Google’s, or Adobe’s, or any other cloud-based provider’s server farm? This idea of Software-as-a-Service (SaaS), where application providers carefully balance server-based number crunching against rich client-side rendering was the long-term dream of the rich cloud. Even the most demanding supercomputer applications are thinkable as RIAs.

The web needn’t be a lowest-common-denominator experience, which is the case with current HTML and JavaScript-based web apps. The fundamental difference between a thin client and a rich client is that the latter promises the best of both worlds: server-based processing power delivered via a lightweight, design-rich front-end. It makes as much sense to share multimedia data, applications and processing power via the rich cloud as it did to share paged static information via HTML.

This revolution looked unstoppable, because another great advantage of plugin-based computing is that it’s immune to sabotage by competitors. When Netscape and Java first raised the possibility of the web as a platform, Microsoft famously responded with Internet Explorer and its strategy of “embrace, extend and extinguish”. That wasn’t possible with Flash because Microsoft couldn’t do anything to break Adobe’s self-contained, universal plugin – if it couldn’t stop Flash, its only option was to provide a superior alternative.

The result was Silverlight, a beautiful system built around the truly rich (and open!) mark-up language XAML, which enabled Microsoft’s army of Windows desktop developers to seamlessly translate their .NET programming skills into the development of powerful RIAs for cross-platform, browser-based delivery. Forget about Acrobat.com and imagine a cloud-based, Silverlight-delivered version of Office, and SaaS versions of every other Windows application. Silverlight revealed another strength of plugin-based delivery: truly open competition. Content producers could choose between Flash or Silverlight, according to their respective design and development strengths, but their end users didn’t need to make any choices at all, since both plugins (and any others) could happily co-exist.

With two competing rich clients, the stage was set for the web to reach its full potential, and Google, Adobe and Microsoft were all fully signed up to this vision. So were all the other major OEMs such as RIM, Samsung and HTC, via the industry-wide Open Screen Project that committed members to support Adobe’s new Flash mobile player and AIR with all their future mobile devices...

The death of the rich client

One man had other plans. Steve Jobs wasn’t ready to see his mobile devices turn into vehicles for rich cross-platform computing, capable of supplying the same sort of rich content and applications as his own native iOS apps and, hence, depriving his platform of its USP and exclusivity (and his App Store of its 30% margin). In the longer term, why would Jobs want the open web to turn into a rich and open cloud? What was in it for Apple?

Unlike Microsoft, Apple makes computer hardware, so what would be the future for Apple if the cross-platform web became the computing platform of the future? Steve Jobs’ visceral hatred of Flash wasn’t at all because it was “yesterday’s technology”, but precisely because it was “tomorrow’s technology” – and it could destroy his empire. Having watched his first lucrative walled garden demolished by Adobe – when it made its Mac-only publishing and graphics applications available under Windows – Steve Jobs knew that he needed to act fast.

When Jobs posted his carefully crafted “Thoughts on Flash” and made it clear that the iOS platform and the iPad were never going to support plugins, it must have felt like sweet revenge. At a stroke he killed off Google’s chances of turning the Chromebook into a serious competitor, and rendered worthless all the money and effort Microsoft had put into developing its cross-platform Silverlight. I can’t help imagining that, as he hit Send, he too may have been saying “Farewell Adobe. Delete.”

But isn’t this getting a teeny-bit paranoid? After all, it’s the man’s right not to support plugins if he doesn’t want to. Why personalise things this way? More to the point, if the future was so much brighter with Flash and Silverlight, why didn’t those other companies just stick with their vision, out-compete Apple and let the free market decide the matter? When even Adobe gave up so quickly, surely that demonstrates that Steve Jobs was right?

I still think of my PC as belonging to me rather than to Steve Ballmer, and I think that all interested parties should be able to – and be encouraged to – make software to run on it, and that it should then be up to me to decide which of that software I run. Flash-blocking by individuals shows freedom of choice in action, but a blanket ban by a device manufacturer shows the exact opposite. This must be one of the most extraordinarily anti-competitive acts in history.

As for the argument about “letting the market decide”, I wish it were that simple. I did hold out hopes that the lack of Flash support may hit Apple commercially, and that it would be forced to rejoin the cross-platform consensus on which the web was built. I even think Adobe might have continued with the mobile Flash player if Microsoft had announced support for both it and Silverlight within Metro. However, I can also understand how Microsoft looked at Apple’s business model and realised that killing off both rich clients in Metro and opening its own app store made far more financial sense.

What about Android? Why didn’t Adobe keep the faith there and at least keep the Flash flame burning? It’s this betrayal that has made developers such as Kevin Partner so angry – but sadly, I think that Adobe was right. When it comes down to it, the web is universal or it’s nothing. If Apple unilaterally decided to drop JPEG support, we’d all have to shift over to GIFs. Now that it’s dropped support for SWF, we have to shift to HTML5 and SVG and the video formats, where we can. Ignoring iOS simply isn’t an option. Killing off Flash in the browser is the last thing Adobe wanted to do, but it’s right to recognise the inevitable, and responsible to take the lead.

The universal cross-platform web plugin model had many strengths, but Jobs uncovered its hidden and catastrophic flaw: as soon as one platform refuses to support it, the plugin and its functionality are immediately rendered useless for everyone. This idea of not supporting any plugins should have been unthinkable, was unthinkable; but once he’d thought of it, Jobs knew that it couldn’t fail.

I recognise that Steve Jobs did far more in his life than ban Flash, but my anger and admiration are indeed personal. There was only one man on the planet who could possibly imagine taking on the rich cloud dream and wrestling the web genie back into the bottle. There was certainly only one man who could pull it off and still be revered as a saint – imagine what would be happening now if Steve Ballmer had announced Microsoft was unilaterally ceasing to support a web technology used by more than two million designers and developers, by more than 80 of the top 100 sites, and by more than 99% of end users.

What’s happened has happened: Apple has won and the professional web community has to face the new reality. So is it farewell to Adobe?

There’s no doubt that many thousands of developers, such as Kevin, will decide that now is the time to move on. Ultimately, though, the mission of making both the web and computing experiences as rich, powerful and personal as possible will continue. Without mobile delivery, Flash SWF has limited long-term prospects in the browser; the cross-platform future belongs to HTML5 online and AIR offline. Adobe is still best placed to provide the tools (and to use SWF for continuity and backwards-compatibility). Adobe may have been mugged and hospitalised, but AIR and HTML5 are an exciting new territory in which I believe the company will survive and thrive.

What has died is a beautifully simple, all-purpose, universal rich web format for extending what HTML pages can do. With the death of both Flash and Silverlight in the browser, we’ve also lost a live, direct and untaxed connection between content producer and content consumer. Why should we pay 30% to Apple, Microsoft and Google for native apps that would actually run far more efficiently in our browser?

The biggest loss of all is that long-term dream of supercomputer cloud power delivered directly to cheap, secure, simple and personal rich clients. We may have been given “the web in our hands”, but we’ve been deprived of the rich cloud.
