Wednesday, July 23, 2014

Test Mobile App Performance with LoadRunner

Mobile users today are becoming more and more demanding, and they expect mobile apps to perform just as fast as their desktop software. Just take a look at the following insights:
  • 71% of mobile phone users expect a website to open as quickly as on their desktop PC.
  • Web app users start to abandon an app after just 3 seconds of response time.
  • 50% of app users and 74% of website users leave if response time is 5 seconds or more.
  • 1/3 of unsatisfied users go to a competitor’s site and never come back.

Performance Testing Tools for Mobile Apps

There are dozens of commercial and open source performance testing tools available for desktop browser-based apps. But when it comes to mobile applications, the options are very limited, especially for native mobile apps. There are a few open source tools for mobile web apps, such as JMeter, but hardly any for native mobile apps. Therefore, one has to turn to commercial tools such as LoadRunner, NeoLoad or Silk Performer for native mobile app performance testing.
Mobile App Testing with LoadRunner
End-to-end performance testing of mobile apps is very challenging, as it involves multiple devices and OS versions, app versions, and different servers for native and web applications. These factors are not easy to address with a single tool. However, LoadRunner can tackle all of the above challenges with minimal effort.
LoadRunner offers different sets of protocols for different kinds of applications. The following are the LoadRunner mobile protocols:
  • Mobile Applications – HTTP/HTML: This protocol records scripts for native and browser-based mobile applications (which use HTTP to communicate with the server) at the transport level.
  • Mobile TruClient: This protocol is based on Ajax TruClient technology and records browser-based mobile applications only.
The following summarizes which protocol can be used for which type of mobile application:
  • Native mobile apps: Mobile Applications – HTTP/HTML
  • Browser-based mobile apps: Mobile Applications – HTTP/HTML or Mobile TruClient

These protocols are OS-agnostic and work across different versions of iOS, Android, Windows Mobile and BlackBerry.

Script Recording with Mobile Protocols

Ajax TruClient: Script recording with the Ajax TruClient protocol is very similar to that for standard web applications. The target mobile website should support Firefox, and you can easily record user transactions on your preferred mobile device.
Mobile Applications – HTTP/HTML: With this protocol, you first need to select a recording method. The following is a summary of the different recording methods available in the Mobile Applications – HTTP/HTML protocol for recording native mobile application scripts.
  • Proxy Recording: You can record a mobile app script by configuring the app to use the VuGen proxy, provided both are connected to the same network and proxy configuration is allowed in the application under test (AUT).
  • Server Side Recording: This method is used when you don’t want to record with an actual device. The mobile app script can be recorded by installing VuGen’s Mobile Sniffer Agent on the AUT server, provided the device, server and VuGen machine are on the same network.
  • Script from Network Capture: You can simply use the Analyze Traffic feature to develop a script from a network capture file.
  • Device Emulator Recording: You can record an Android application script through an emulator using the Emulator Recording feature.
  • On Device Recording: You can also record a mobile app script on a mobile device by installing the LoadRunner Mobile Recorder on it. Later, you can open the recorded file on the VuGen machine to create the script.
Once you have recorded the script, all subsequent steps are almost the same as for a standard desktop web application.
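The "Script from Network Capture" idea above can be illustrated with a toy example. This is not LoadRunner code: it is a minimal Python sketch, using a hypothetical capture format and function names, of how raw HTTP request text pulled from a network capture might be turned into a list of script-like steps.

```python
# Illustrative sketch only: turn raw HTTP request text (e.g. extracted from
# a network capture) into a list of "script steps", loosely analogous to
# what transport-level recording produces. The capture format is assumed.

def parse_capture(capture_text):
    """Extract {method, path, host} steps from raw HTTP request text."""
    steps = []
    for line in capture_text.splitlines():
        line = line.strip()
        parts = line.split()
        if len(parts) == 3 and parts[2].startswith("HTTP/"):
            # Request line, e.g. "GET /api/items HTTP/1.1"
            steps.append({"method": parts[0], "path": parts[1]})
        elif steps and line.lower().startswith("host:"):
            steps[-1]["host"] = line.split(":", 1)[1].strip()
    return steps

sample = """\
GET /api/items HTTP/1.1
Host: m.example.com

POST /api/cart HTTP/1.1
Host: m.example.com
"""

for step in parse_capture(sample):
    print(step["method"], step["host"] + step["path"])
```

A real recorder does far more (headers, bodies, correlation), but the principle is the same: replayable steps are reconstructed from captured traffic rather than from the device.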

Wednesday, July 2, 2014

Phishing attacks: Measuring your Susceptibility

Phishing is a growing threat to organisations who have more to lose now than ever.


Phishing attacks are designed to deceive individuals into providing sensitive information such as passwords to a malicious third-party, or into performing actions such as downloading malware designed to give an attacker remote control over the victim’s computer. Worryingly, these attacks are becoming increasingly sophisticated, to the extent that often neither the individual nor the organisation to which they belong is even aware that an incident has occurred until it is too late.
Typically, these attacks take the form of an email that appears to come from a legitimate entity (for example, an online bank or email account), in order to gain the individual’s confidence, so that they then follow a link and divulge sensitive information. As an Information Security company, we have witnessed these types of breaches occurring ever more frequently, in line with the growth of online services, such as banking and social media.
Certainly, the kind of information that attackers can now intercept over the internet and company intranets makes these attacks very lucrative. Additionally, there is a low barrier to entry, as phishing attacks such as these are relatively straightforward to implement and difficult to track and prevent.

Phishing: The unknown

If a phishing attack were launched against your organisation today, would your employees be susceptible?
Within many organisations, the susceptibility of employees to phishing attacks is largely unknown. Whilst security testing is now commonplace within organisations and the adoption of common security controls is widespread, there is not a widely-adopted approach to sustainably reducing the risks from phishing threats over the long-term. Whilst policies and processes are often in place to help an organisation react to a phishing attack, the effectiveness of any internal reaction to a legitimate attack is often unmeasured, especially if the occurrence of the attack itself has remained undiscovered.
The financial cost of phishing attacks to UK-based organisations, on the other hand, is well known.
In 2012, the UK economy lost £405.8m to phishing attacks, an increase of 25% over the £304.4m lost in 2011. RSA reported that in 2012 there were, on average, more than 37,000 unique phishing attacks globally each month, compared with 21,500 per month in 2011.
Phishing attacks against organisations are rising in both number and sophistication, and as the quantity, diversity and confidentiality of data stored electronically increases, so does the risk presented by the phishing threat.
The primary issues faced by organisations include how to measure organisational susceptibility to phishing attacks, and how sustainably to reduce the risk posed by such attacks, given that they are increasing in both frequency and sophistication.

Do you really know your security posture?

Whilst a growing number of organisations now have stringent security controls, policies and procedures in place and frequently perform security assessments, these assessments often do not provide any insight into the susceptibility of an organisation or its employees to phishing attacks. Instead, security assessments usually focus on more ‘tangible’ vulnerabilities, such as security flaws within software or the misconfiguration of network infrastructure.
To gauge your current security posture in terms of the risk posed by phishing attacks, ask yourself the following questions:
  • As part of your regular security assessments, have you ever performed a controlled phishing attack?
  • Would you expect your employees to click on a malicious link within an email? Would they then go on to disclose authentication credentials or attempt to download a malicious payload?
  • How many employees in your organisation would you expect to perform those actions?
  • Which offices and departments within your organisation are most likely to be susceptible to a phishing attack?
  • Therefore, do you know where your security training budget is best spent for maximum impact and ‘quick wins’?
  • Have you ever run security awareness campaigns? If so, how effective do you think they were?
  • If there were a phishing attack, would there be an internal response, or would it go unnoticed?
  • Is the response guaranteed to go as per policy and procedure, or would a real world attack be likely to cause chaos and confusion?
  • If there were a response, would it be sufficient to mitigate the risk posed by the attack?
  • Is your organisation more or less susceptible to phishing attacks than other organisations within the same market sector?
If you were unable to answer any of the above questions, or if you answered any with uncertainty, then your organisation’s security posture could certainly be improved.
The susceptibility of an organisation, and as such the risk associated with phishing attacks, is widely considered to be difficult to measure.
In some cases, phishing attacks, as an attack vector, are even overlooked entirely. In rare cases, where controlled phishing assessments are performed to measure risk, these are performed as one-time exercises and do not provide sufficient metrics to identify weak areas of an organisation. In these cases, the assessment does not have a sustained preventive effect: employees are still likely to click malicious links within emails only a few months after the engagement. Such engagements offer little to no value.

The risk posed by Phishing to your organisation

Executed well, a phishing attack can extract far more than domain credentials from your organisation. An attacker can use phishing attacks as a base to trick employees into downloading and running malicious software, in turn providing an attacker with a long-term, often undetected foothold inside the network, sidestepping traditional security controls. Such a foothold is then often used to gain further access to corporate resources, such as file shares, from which assets can then be extracted.
A more determined attacker can go a stage further still. By enumerating the versions of client-side software, including the browser and plug-ins (Java, for example), as soon as an employee browses a malicious website after clicking a link in a phishing email, the attacker is able to identify and attempt to exploit any vulnerable client-side software accessible via the web browser. If successful, the attacker would obtain a foothold within your network without the need even to prompt for the download of malicious software.
Once a foothold is obtained, an attacker can attempt to elevate their privilege level and begin to extract confidential data from the corporate network. Such data often includes financial information, such as payroll, client information or sales figures and projections. In many cases, it would also be possible for the attacker to modify data, thus affecting its integrity.
Ultimately, the real risk to a business from a successful phishing attack is loss of both money and reputation.

Measurement and mitigation of risk

The first stage of any plan to mitigate the risk posed to an organisation by phishing attacks is to measure the current level of susceptibility by performing a controlled attack against employees. Such an attack would ideally target a subset of employees from each department within the organisation. If appropriate, employees and departments from different offices should also be included within the test, in order to allow for the identification of any trends across the entire organisation. The data returned by such an assessment is invaluable in gauging current levels of susceptibility and providing information such as:
  • Number of users who clicked a malicious link within an email
  • Number of users who entered corporate domain credentials into a phishing website
  • Number of users who attempted to download a malicious executable
  • Breakdown of susceptible employees into various demographics, such as office, department or location
  • Activity over time (were users still clicking malicious links even after the internal security response?)
  • Use of weak passwords within corporate domain credentials
  • Did any employees reply directly to the phishing attack?
  • Comparison against the average susceptibility of other organisations in your market sector
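The kind of aggregation involved can be sketched in a few lines. The field names and data below are hypothetical, not from any real assessment; the point is simply how raw per-employee results roll up into per-department susceptibility metrics:

```python
# Hypothetical sketch: aggregating raw results from a controlled phishing
# assessment into per-department susceptibility metrics. The record format
# and field names are assumptions for illustration.
from collections import defaultdict

results = [
    {"dept": "HR",      "clicked": True,  "entered_creds": True,  "downloaded": False},
    {"dept": "HR",      "clicked": True,  "entered_creds": False, "downloaded": False},
    {"dept": "Finance", "clicked": True,  "entered_creds": True,  "downloaded": True},
    {"dept": "Finance", "clicked": False, "entered_creds": False, "downloaded": False},
    {"dept": "IT",      "clicked": False, "entered_creds": False, "downloaded": False},
]

def susceptibility_by_dept(results):
    """Return {dept: fraction of targeted users who clicked the link}."""
    clicks, totals = defaultdict(int), defaultdict(int)
    for r in results:
        totals[r["dept"]] += 1
        clicks[r["dept"]] += r["clicked"]
    return {d: clicks[d] / totals[d] for d in totals}

print(susceptibility_by_dept(results))
# {'HR': 1.0, 'Finance': 0.5, 'IT': 0.0}
```

The same roll-up can be repeated per office, per scenario, or over time to produce the trend data discussed above.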
Once a baseline has been established, strategies for mitigating risk should be investigated and implemented. There are a number of approaches that, when combined, are extremely effective in dramatically cutting the overall level of susceptibility:
  1. Perform regular, controlled phishing attacks to maintain a heightened awareness, thus reducing the likelihood of employees clicking suspicious links within emails. Such phishing attacks should use a different ‘scenario’ each time, in order to prevent any attack being instantly recognisable. When performed quarterly or bi-annually, such assessments train employees to be suspicious of all unexpected emails containing links to third-party websites. In addition, regular exercises of this kind provide constant analysis against the baseline assessment and will demonstrate any shift in susceptibility over time and allow for the tracking of company performance.
  2. Perform targeted training after assessments. Based on the data from each controlled phishing attack, look to identify trends in susceptibility within the organisation. It may be that your HR department was the most susceptible, or that employees within your London HQ were most likely to enter domain credentials into a third-party website. Use this data to target the most susceptible areas of the business with security training, in order to maximise the effectiveness of your training budget.
  3. Review the internal response after each assessment. Identify key areas of weakness that require improvement. Did the initial attack get spotted by the security team? If not, identify the reason for this and address it through the introduction/modification of policies and procedures. Investigate technical solutions to support the identification of attacks, such as the implementation of IDS/IPS or email monitoring solutions. Generally, the efficiency, effectiveness and management of internal responses to phishing attacks and other threats will be enhanced with each assessment.

Controlled phishing attacks: What to expect

Generally, the advantages of regular controlled phishing attacks will be well understood within the technical areas of an organisation; however, there are various challenges that must be faced before such assessments are authorised and commissioned.
Often, the most significant hurdle is mitigating the risk of upsetting or embarrassing employees. Ensure that employees who do click malicious links are not reprimanded or patronised: have a strategy in place to explain the risks posed by phishing attacks, and provide formal training where appropriate to help employees identify threats going forward.
Another issue is the fact that the assessment may have a detrimental effect on the corporate environment or network. Ensure that your supplier does not use any ‘payload’ for regular phishing assessments, i.e. employees’ attempts to download malicious software are recorded, but no malicious software is actually supplied.
Once regular assessments are commissioned, ensure that the key personnel within the organisation are aware of the assessment and know how to react, but do this on a need-to-know basis only. Generally, the heads of security and IT should be aware of the assessments, and should be prepared to intervene prior to any unnecessary actions being taken (such as replacing employee workstations).
For the first few controlled phishing attacks, expect large numbers of employees to be susceptible. It is not uncommon for 60-70% of employees targeted to click on the malicious links. Generally, there is a small drop-off (typically 5-10%) in employees who supply domain credentials and a further small drop-off (typically 2-4%) in those who then proceed to attempt to download a malicious executable.
In terms of internal response, anticipate some minor chaos for the first assessment. As security policies and procedures relevant to phishing attacks are tested for the first time, there are generally opportunities for improvement going forward. As long as procedures are in place to identify and document these opportunities, then progress can be made going forward, and, with each assessment, the internal response should become more efficient and streamlined. In the event of a real-world phishing attack, the internal response should have progressed to a stage where it is not only efficient but wholly effective.
From a return on investment perspective, the number of employees susceptible to phishing attacks can typically be expected to decrease by upwards of 25% per assessment, with most organisations seeing an overall susceptibility reduction of at least 90% after one year of quarterly controlled phishing assessments.
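As a rough illustration of how those per-assessment figures compound over a year, the calculation below uses an assumed 65% starting click rate and an assumed 45% per-assessment reduction (both illustrative, not measurements):

```python
# Back-of-the-envelope sketch: how a per-assessment drop in click rate
# compounds over a year of quarterly assessments. The 65% starting rate
# and 45% per-assessment reduction are illustrative assumptions.
def click_rate_after(start_rate, reduction_per_assessment, assessments):
    rate = start_rate
    for _ in range(assessments):
        rate *= (1 - reduction_per_assessment)
    return rate

start = 0.65                               # assumed first-assessment click rate
final = click_rate_after(start, 0.45, 4)   # four quarterly assessments
print(f"final click rate: {final:.1%}")
print(f"overall reduction: {1 - final / start:.1%}")
```

With those inputs the overall reduction after four assessments exceeds 90%, consistent with the range quoted above; smaller per-assessment reductions compound to correspondingly smaller annual figures.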

Summary

Despite being a long-established attack vector, phishing is a growing threat to organisations who, with the increasing amount of confidential data being stored electronically, have more to lose now than ever. It is common for organisations to struggle to measure their susceptibility to phishing attacks, with common security controls proving ineffective against the threat, and security assessments often overlooking phishing as a potential attack vector.
Regular phishing assessments performed in a structured, controlled manner provide a means to benchmark decreasing susceptibility over time. They can map out trends within your organisation, highlighting patterns in areas of the business that are most vulnerable. In addition to providing accurate metrics that allow the calculation of risk posed to your organisation, conducting quarterly or bi-annual phishing attacks helps to maintain a heightened awareness. This will decrease the risk posed to your organisation of a real-world attack, typically by upwards of 90%.

Denial of Service

With a growing number of DDoS attacks being observed across the internet, it is important to understand the risk they pose and the ways to defend against them.


A denial of service (DoS) attack is a malicious attempt to render a network, computer system or application unavailable to users. Most attacks of this nature utilise a large number of computers in what is known as a distributed denial of service (DDoS) attack. To deny service, an attack consumes more resources than the target network, computer system or application has available; in doing so, resources cannot be allocated to new connections and any services provided by the target become unavailable. Using a large number of computers simultaneously improves the efficiency and reliability of an attack.
Although DDoS attacks are not a direct threat to the security of sensitive information stored within an organisation, they can cripple critical systems whose availability is relied upon to conduct key business initiatives. The threat has become ever more concerning as governments and criminal organisations generate the resources and capabilities necessary to carry out sophisticated, multi-faceted denial of service attacks. This article aims to provide an overview of a number of common denial of service attack vectors. Hopefully you will gain an understanding of the way in which these attacks operate and are evolving, along with the challenges faced in defending targeted organisations.

Research points to an increase in both size and complexity

Akamai’s recent State of the Internet report [1] observes a 54% increase in denial of service attacks across their networks between the first and second quarters of 2013. The report highlights that “There is a very real possibility this trend will continue”. Akamai also identify that ports 80 and 443, typically used to host web applications, have become the most popular ports for attackers to target. Arbor Networks observed similar trends in their third quarter review [2]; in particular, they note “very rapid growth in the average attack size in 2013”. This is supported by the data graphed below, showing the average increase in the size of DDoS attacks over the last four years. An interesting aspect of this graph is the rapid growth in attack volume seen this year, highlighting the rate at which malicious actors are increasing their DDoS capabilities.
[Graph: average DDoS attack size over the last four years]
What we are observing is an increase in both the size and complexity of attacks. Both of these traits must be considered if we are to develop effective defences in the modern threat landscape. On the one hand we must be able to mitigate the sheer amount of ingress traffic that will appear under a DDoS attack; on the other hand we must be able to distinguish legitimate influxes of traffic from malicious floods and apply effective filtering mechanisms.
DDoS attacks have traditionally focused on the consumption of network bandwidth along with the abuse of layer 4 protocols. UDP, ICMP and SYN floods are examples of DDoS attacks that use transport layer protocols. SYN floods are among the most commonly used traditional attacks and are of particular interest as they have been utilised by activist groups using tools such as Brobot and the Low Orbit Ion Cannon (LOIC). SYN floods exploit the behaviour of computer systems in their attempt to connect to one another using the TCP three-way handshake.
The TCP protocol states that if a client wishes to connect to a server it must first send a packet known as a SYN request. The server should then respond with a SYN/ACK packet and wait for the client to acknowledge the connection with a final ACK packet. Whilst the server is waiting for the response, the connection remains in a half-open state, typically for a period of 75 seconds. The half-open connection is maintained by the server in a finite memory space which, if exhausted, will cause further connection requests to be dropped. During a DDoS SYN flood attack, SYN packets are sent from a number of computers distributed across the internet to a single target server, initiating the first stage of a TCP three-way handshake. Often each packet indicates that responses should be sent to a spoofed random IP address. The server will respond by sending a SYN/ACK packet to each IP address it believes is initiating a request; however, the final acknowledgement will never be returned. The target server is left with many connections in a half-open state as it is forced to handle many unresponsive connection requests.
By inundating the target with SYN requests it is very easy to exhaust the memory used to handle the connections, causing all subsequent requests to be dropped. Whilst SYN floods are very powerful and still relevant, their use is becoming less widespread as automated defence systems have been designed and are being implemented by organisations wishing to mitigate DDoS attacks against their networks. Current anti-DDoS solutions are effective at handling transport layer attacks. Akamai, for example, did not analyse SYN floods, UDP floods or other transport layer volumetric attacks, as they were automatically mitigated and absorbed by their systems.
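The memory-exhaustion mechanism described above can be modelled without sending any real traffic. The following toy Python simulation (all numbers illustrative) shows a finite half-open connection table filling with spoofed SYNs and dropping a legitimate client until the timeout frees the slots:

```python
# Toy simulation only (no network traffic): a server with a finite table of
# half-open connections, as described above. Spoofed SYNs never complete the
# handshake, so slots are only freed by the timeout, and legitimate
# connection attempts are dropped in the meantime. Numbers are illustrative.

class SynBacklog:
    def __init__(self, capacity, timeout=75):
        self.capacity = capacity
        self.timeout = timeout     # seconds a half-open connection is kept
        self.half_open = {}        # src_ip -> time the SYN arrived

    def on_syn(self, src_ip, now):
        # Free any slots whose wait for the final ACK has expired.
        self.half_open = {ip: t for ip, t in self.half_open.items()
                          if now - t < self.timeout}
        if len(self.half_open) >= self.capacity:
            return "dropped"       # table full: service denied
        self.half_open[src_ip] = now
        return "syn-ack sent"

backlog = SynBacklog(capacity=128)

# Flood: 128 SYNs from spoofed addresses fill the table at t=0...
for i in range(128):
    backlog.on_syn(f"10.0.{i // 256}.{i % 256}", now=0)

# ...so a legitimate client one second later is dropped,
print(backlog.on_syn("192.0.2.1", now=1))    # "dropped"
# but succeeds once the 75-second half-open timeout has expired.
print(backlog.on_syn("192.0.2.1", now=80))   # "syn-ack sent"
```

Real TCP stacks use SYN cookies and adaptive backlogs to blunt exactly this behaviour, which is one reason the next section's reflective attacks have grown in popularity.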

Distributed Reflective Denial of Service

A new class of denial of service attacks known as Distributed Reflective Denial of Service (DrDoS) is increasing in popularity as malicious actors find ways to reflect and amplify traffic off misconfigured public servers across the internet. DDoS attacks can be amplified to dramatically increase the amount of traffic they can direct towards a target. Amplification techniques have evolved from using low level protocols such as ICMP to higher level protocols such as DNS. The SMURF attack, for example, utilises ICMP to connect to misconfigured networks and broadcast ICMP echo requests to every computer connected to that network. The source IP address defined in each echo request is spoofed to that of the target server, causing each computer on the vulnerable network to send an ICMP echo response to this address. In allowing broadcast requests to be forwarded onto its network, the edge router in this scenario is amplifying a single request by a factor of the number of computers on its internal network. Attacks have since been developed that operate in a similar fashion, although this family of attacks is again being defended against.
Layer 7 protocols are now being used to achieve traffic amplification. The DNS protocol is a perfect example of a layer 7 protocol being used in such a way. DNS requests operate over UDP and so do not require an underlying connection to be maintained. When a DNS resolver receives a DNS request, it is processed and returned to the address given in the request. This address can be spoofed to that of a target server. As DNS requests are generally much smaller than their responses, a small amount of request traffic can generate a very large amount of response traffic. A DDoS attack can utilise this to reflect its traffic off misconfigured DNS resolvers and onto target networks, achieving amplification in the process.
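The economics of DNS reflection come down to simple arithmetic. The byte counts below are representative assumptions, not measurements:

```python
# Illustrative arithmetic: the amplification factor of a reflected DNS query.
# The byte counts are representative assumptions: a small query with a
# spoofed source address can elicit a multi-kilobyte response.
request_bytes = 64       # small UDP DNS query sent to an open resolver
response_bytes = 3200    # large response (e.g. a query against a record-heavy zone)

amplification = response_bytes / request_bytes
print(f"amplification factor: {amplification:.0f}x")

# At scale: an attacker with 100 Mbit/s of upstream bandwidth, reflected
# at that factor, lands far more traffic on the target than they send.
attacker_mbps = 100
print(f"traffic at target: {attacker_mbps * amplification / 1000:.1f} Gbit/s")
```

The attacker pays only for the requests; the resolvers supply the remaining bandwidth, which is what makes reflection so much cheaper than direct flooding.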
In the last few months, analysts have begun to see an increase in the number of DDoS amplification attacks that utilise the CHARGEN protocol. CHARGEN is a UDP-based protocol, meaning that, as with DNS-based amplification, destination addresses can be easily spoofed. Interestingly, even though this obscure protocol is rarely used legitimately, there are estimated to be over 100,000 exploitable CHARGEN servers currently on the internet, and recent activity shows an increase in the number of CHARGEN-based DrDoS attacks. CHARGEN listens on port 19 and, upon receiving a request, will simply return a random amount of data between 0 and 512 bytes in length. This functionality can be abused by sending requests with no data at all that tell the CHARGEN server to send its response to a target server. This exemplifies the need for network administrators to ensure unused and outdated services are removed from their networks. In the last year alone, amplification attacks have increased by 265%. As you can see, modern denial of service attacks are relying less on exploiting transport layer protocols and more on opportunities at the application layer.
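The behaviour being abused is trivially small. The sketch below runs a CHARGEN-like responder on the loopback interface only, purely to illustrate why a zero-byte datagram yielding hundreds of bytes back makes such services attractive reflectors; in a real attack the source address would simply be spoofed to the victim's.

```python
# Minimal local illustration of CHARGEN-style behaviour (loopback only,
# not an attack tool): a UDP listener that answers any datagram, even an
# empty one, with arbitrary data (fixed at 512 bytes here for simplicity;
# real CHARGEN returns 0-512 bytes).
import os
import socket
import threading

def chargen_server(sock):
    data, addr = sock.recvfrom(1024)
    # CHARGEN ignores the request body and replies regardless of its size.
    sock.sendto(os.urandom(512), addr)

server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 0))              # ephemeral port, loopback only
port = server.getsockname()[1]
threading.Thread(target=chargen_server, args=(server,), daemon=True).start()

client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.settimeout(5)
client.sendto(b"", ("127.0.0.1", port))    # zero-byte "request"
reply, _ = client.recvfrom(1024)
print(f"sent 0 bytes, received {len(reply)} bytes back")
```

Because UDP performs no handshake, nothing ties the reply to the true sender, which is the property both DNS and CHARGEN reflection depend on.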

The attack eventually holds all available connections to the server

A search of the CVE vulnerability database returns over 12,000 publicly disclosed denial of service vulnerabilities using application layer protocols. Prolexic’s 2013 third quarter DDoS report [3] highlights a 101% increase in the number of layer 7 exploits used in DDoS attacks compared with the same time last year. Layer 7 attacks achieve greater obscurity, as UDP and TCP connections are used legitimately. Layer 7 attacks also require fewer connections and are therefore more efficient. With many bespoke applications being deployed within organisations, it is important to identify whether or not they can be exploited to achieve a denial of service. Web servers have been targeted by layer 7 attacks exploiting mechanisms in the handling of HTTP requests. Recent versions of the Apache web server are vulnerable to attacks of this nature, in which a single computer can cause a denial of service. This kind of attack is very direct: it does not consume network bandwidth, so other services running on the target’s network will still be available. The attack uses fragmented requests to keep many connections simultaneously open, eventually holding all available connections to the server. The attacker sends only a partial HTTP request to the server; fragments of the remaining request are then sent incrementally, keeping the connection alive. As long as the full request is never completed, the connection will never be closed and made available to other users. The server only has enough available memory to maintain a finite number of simultaneous connections.
This fact is exploited to consume all available connections and deny service to legitimate users. The attack has been understood for many years, but only recently became popular through the distribution of tools such as Slowloris, which were used during the Iranian revolution to deny service to a number of government websites, whilst keeping traffic to a minimum so as not to disrupt Iranian networks as a whole. Attacks such as these can also be run through anonymising networks, masking the true identity of the traffic’s source.
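The fragmented-request technique can be sketched without sending any traffic. The generator below (with hypothetical header names) shows the defining property: the terminating blank line that would complete the request is never sent, so the server's connection slot stays held.

```python
# Sketch of the fragmented requests described above (no traffic is sent).
# An HTTP request only ends at a blank line ("\r\n\r\n"); Slowloris-style
# clients send the headers one fragment at a time and never send that
# terminator, so the server keeps the connection open waiting for more.

def slow_request_fragments(host):
    yield f"GET / HTTP/1.1\r\nHost: {host}\r\n"
    # Trickle out one harmless-looking header at a time.
    for i in range(3):                     # a real attack would not stop
        yield f"X-Padding-{i}: keepalive\r\n"
    # Note what is *never* yielded: the final "\r\n" that would complete
    # the request and let the server release the connection slot.

received = ""
for fragment in slow_request_fragments("victim.example.com"):
    received += fragment
    # time.sleep(10)  # in a real attack, each fragment is delayed

complete = received.endswith("\r\n\r\n")
print(f"request complete: {complete}")     # False: the slot stays held
```

Repeated across hundreds of sockets, this holds every worker the server has, while the total bandwidth consumed remains negligible, which is exactly why the attack is so hard to spot in traffic graphs.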
It is evident that denial of service attacks are becoming more sophisticated. As mitigation techniques improve, so too do the methods used to circumvent them. When assessing the threat of denial of service attacks to an organisation, it is important to be aware of the latest exploits being used. Amplification techniques are only now beginning to be used to generate record-breaking volumetric attacks. These techniques must be understood as they continue to evolve. As the denial of service attack surface continues to expand, we are tasked with constantly adjusting our approaches to mitigation. Organisations that rely on technology to maintain critical aspects of their business now understand that the threat posed by denial of service attacks is ever increasing. As denial of service attacks are often used in conjunction with more targeted attacks, their presence may also serve as an indication that the business as a whole is being targeted. If the availability of services is of paramount importance to the operation of your business, then denial of service remediation should be a key consideration in improving your company’s security posture.

The importance of identifying where an organisation is at risk of denial of service attacks and working towards remediation strategies is now being understood as a critical element in the ongoing race to defend key technological assets.

How hackers are stealing company secrets

 Research and whitepapers published on data exfiltration by advanced attackers


Recent research carried out by global information security firm MWR InfoSecurity, supported by CPNI (Centre for the Protection of National Infrastructure), has revealed current and new techniques being used by cyber criminals to steal sensitive information from companies. The papers also show what companies can do to protect themselves.
Amongst these techniques, researchers have found that it is possible to exfiltrate a large amount of information through a number of popular websites such as Facebook, Flickr, YouTube and LinkedIn.
Alex Fidgen, Director at MWR InfoSecurity, which is one of the small number of companies certified under the CESG/CPNI Cyber Incident Response Scheme, said: “There are two disturbing facts that every major organisation needs to accept. First, that it certainly possesses commercially sensitive information, such as intellectual property, intended acquisitions or resource development plans, which – if it fell into the wrong hands – could prove deeply damaging to the future of the enterprise. And secondly, that a sophisticated cyber attack targeting that data is almost certain to succeed.”
He added: “Modern organisations have networks that are complex and large. However, they often have few security controls in place, meaning that attackers encounter few barriers to stop them and are able to sidestep or compromise the few controls they do encounter. Once inside the network, attackers will move between computers, hunting the information they seek and then exfiltrating that data back to themselves.”
MWR works with companies that are under constant threat or have been compromised, and has both skilled (white hat) attackers and defenders with experience in understanding the methods and strategies of advanced attackers. The company identified a number of methods currently being used to steal sensitive data.
MWR researcher and lead author of the whitepapers Dr David Chismon said: “As there are few restrictions, attackers typically transfer files the same way any technical user would do. Many use the connections they have set up for command and control. HTTP and HTTPS (web traffic) are highly common and the File Transfer Protocol (FTP) is often used as well.
“Others use emails, employing simple techniques like setting up an email forwarding rule for the target so any email they receive is copied to the attacker. Others are increasingly using cloud storage such as Google Drive and Microsoft OneDrive. Interestingly, attackers have been seen deploying tools to use cloud storage, but not using them as there are other options available to them.”
He added: “If organisations block access to websites to prevent attackers, they can use popular websites that are likely to be permitted as vectors to exfiltrate data. In an experiment we carried out it was possible to exfiltrate 1TB of data via Flickr in 200mb chunks (see video). It was also possible to exfiltrate 20Gb via YouTube in a single chunk, and smaller amounts via popular websites such as Facebook and Tumblr.
“Increasing use of mobile devices, remote working and VPNs (Virtual Private Networks) will present new opportunities for attackers, who are using more covert methods to exfiltrate the data, such as hiding it as other data types.”
MWR extrapolated business and technology trends as well as techniques attackers are just beginning to use, and identified new methods that may be used to steal data in the future.
Dr Chismon said: “Attackers, who are often state sponsored, are already being seen using forensics tools and methods to both find information they otherwise wouldn’t and to better hide the data they are stealing. This is likely to become more common.”
“Cloud storage and email services are likely to be the predominant method in the future. Connections are encrypted and the services are used normally by employees, making it hard for investigators to find the malicious connections, and the final destination of the data is obscured.”
He added: “As more organisations use cloud services for business functions and remote work, attackers can compromise passwords for these services and get the data directly from there rather than needing to obtain it from the organisation’s network.”
Modern networks are becoming increasingly complex, meaning that there will always be routes that an attacker can take to access sensitive data. In the whitepapers, MWR details what organisations can do to better protect themselves.
Dr Chismon commented: “Sadly, there is no magic bullet that can prevent attackers from obtaining data. To stand the best chance of detecting and deterring advanced attackers, organisations need to force them through controlled routes. They then need to increase the number of actions attackers would have to take to access the data and, finally, develop and hone their ability to detect suspicious actions or movements to effectively investigate alleged breaches.”

Wednesday, June 25, 2014

Google announces Android L developer preview


In what is a change to its usual manner of handling new Android releases, Google has announced a developer preview of the upcoming Android L release. Previously, the search giant would unveil a new version of its platform at the I/O event and make it available for download almost immediately.


Now, we've got a developer preview that serves only for developers to play with and optimize their apps against, with the actual public release coming later on.
Android L (final name and version number yet to be confirmed) brings various changes to the UI, with a refreshed status bar, dialer and just about every other system app. Google has also redesigned the transition animations so they look cooler and more natural.
The notifications have been enhanced and are now available on the lockscreen. You can either tap a notification from there and be taken to the app responsible for it, or you can swipe it right off.
The Chrome browser, which has been the default Android browser for some time now, has been upgraded as well. It offers a new fluid design, with the different parts of its UI changing size to give you easier access to the most relevant options. Its performance has also been tweaked and the GUI rendering has been fixed at 60fps, making scrolling appear extra smooth.
   
The new runtime environment in the L release is ART, launched as an alternative to Dalvik in KitKat. ART allows apps to run faster and is compatible with ARM, x86 and MIPS architectures. In addition to performance gains, it also brings memory management improvements and supports 64-bit.
   
Performance isn't the only thing Google wants to improve with the L release. Battery life is also important, and to make it better Google introduced Project Volta.
There's a new Battery Saver mode, which can tune down the CPU and turn off the phone's radio, and as a result extend its power autonomy. On the Nexus 5, for example, this should earn you an additional 90 minutes of usage time.
   
Another major change Google introduced is a separation between personal and work data. No modification to existing apps is needed; Android will keep the data separate and secure. Company IT admins will be able to bulk deploy apps to employees.
Samsung contributed a lot of what it developed with KNOX, but the feature will work on devices by any manufacturer. Best of all, you won't need Android L: the feature will be brought to any device running Android 4.0 Ice Cream Sandwich and above.


Android L also unifies the fitness tracker experience into one app, Google Fit (hi, Apple HealthKit). It will pull data from sensors in your phone, your smartwatch and other wearables. Nike Fuelband is one, but Adidas, Motorola, LG, Basis, Polar, RunKeeper, HTC and even Intel will be bringing supported devices.
Google Fit will track steps, your sleep and other health metrics. Apps will be able to request access to this data, but it's up to the user to allow it.


The factory images for the Android L release on the Nexus 5 and Nexus 7 will be released tomorrow for developers to play with. Over-the-air updates for end users will arrive in the fall.


