Virtual machine

A virtual machine (VM) is a software-based emulation of a computer. Virtual machines operate based on the computer architecture and functions of a real or hypothetical computer.

Definitions
A virtual machine is a software implementation of a machine (e.g., a computer) that executes programs like a physical machine. A VM was originally defined by Popek and Goldberg as "an efficient, isolated duplicate of a real machine". Virtual machines are separated into two major classifications, based on their use and degree of correspondence to any real machine: system virtual machines and process virtual machines.

System virtual machines
System virtual machine advantages:
- multiple OS environments can co-exist on the same computer, in strong isolation from each other
- the virtual machine can provide an instruction set architecture (ISA) that is somewhat different from that of the real machine
- application provisioning, maintenance, high availability and disaster recovery[3]

Process virtual machines
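As a rough illustration of the process-VM idea (software that "executes programs like a physical machine"), the sketch below implements a toy stack-based bytecode interpreter in Python; the opcode names and the sample program are invented for this example and do not correspond to any real virtual machine.

    # Toy process virtual machine: a stack-based bytecode interpreter.
    # The opcodes (PUSH/ADD/MUL/PRINT) are invented for illustration.
    def run(program):
        stack = []
        for op, *args in program:
            if op == "PUSH":
                stack.append(args[0])
            elif op == "ADD":
                b, a = stack.pop(), stack.pop()
                stack.append(a + b)
            elif op == "MUL":
                b, a = stack.pop(), stack.pop()
                stack.append(a * b)
            elif op == "PRINT":
                print(stack.pop())
            else:
                raise ValueError(f"unknown opcode: {op}")

    # Computes (2 + 3) * 4 and prints 20.
    run([("PUSH", 2), ("PUSH", 3), ("ADD",), ("PUSH", 4), ("MUL",), ("PRINT",)])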

Cloud computing
The cloud computing metaphor: for a user, the network elements representing the provider-rendered services are invisible, as if obscured by a cloud. Cloud computing is a computing term or metaphor that evolved in the late 1990s, based on utility-style consumption of computer resources. Cloud computing involves application systems that are executed within the cloud and operated through internet-enabled devices. Pure cloud computing does not rely on cloud storage, since content is removed once the user downloads it.

Overview
Cloud computing[3] relies on sharing of resources to achieve coherence and economies of scale, similar to a utility (like the electricity grid) over a network.[2] At the foundation of cloud computing is the broader concept of converged infrastructure and shared services. Cloud computing, or in simpler shorthand just "the cloud", also focuses on maximizing the effectiveness of the shared resources.

History of cloud computing
Origin of the term

Service-oriented architecture
See also the client-server model, a progenitor concept.
A service-oriented architecture (SOA) is a design pattern in which software/application components provide services to other software/application components via a protocol, typically over a network and in a loosely coupled way. The principles of service-orientation are independent of any vendor, product or technology.[1] A service is a self-contained unit of functionality, such as retrieving an online bank statement.[2] By that definition, a service is a discretely invokable operation. However, in the Web Services Description Language (WSDL), a service is an interface definition that may list several discrete services/operations. Services can be combined to provide the complete functionality of a large software application.[3] An SOA makes it easier for software components on computers connected over a network to cooperate.

Definitions
The Open Group's definition is:

Overview
SOA framework
Design concept
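To make the idea of a self-contained, discretely invokable service concrete, here is a minimal sketch of a single operation (retrieving a bank statement) exposed over HTTP with JSON, using only Python's standard library. The endpoint path, port, and the get_statement function are invented for illustration and are not part of any particular SOA product or standard.

    # Minimal sketch of a self-contained service: one discretely invokable
    # operation ("get bank statement") exposed over HTTP with JSON payloads.
    # The endpoint, port, and data are invented for illustration.
    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer

    def get_statement(account_id):
        # Stand-in for the real business logic behind the service.
        return {"account": account_id, "balance": 1250.00, "currency": "EUR"}

    class StatementService(BaseHTTPRequestHandler):
        def do_GET(self):
            # Expected request shape: GET /statement/<account_id>
            if self.path.startswith("/statement/"):
                body = json.dumps(get_statement(self.path.split("/")[-1])).encode()
                self.send_response(200)
                self.send_header("Content-Type", "application/json")
                self.end_headers()
                self.wfile.write(body)
            else:
                self.send_response(404)
                self.end_headers()

    if __name__ == "__main__":
        # Any client that speaks HTTP and JSON can invoke this operation,
        # which is what keeps the coupling between components loose.
        HTTPServer(("localhost", 8080), StatementService).serve_forever()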

Software as a service
According to a Gartner Group estimate, SaaS sales in 2010 reached $10 billion and were projected to increase to $12.1bn in 2011, up 20.7% from 2010.[6] Gartner Group estimates that SaaS revenue will be more than double its 2010 numbers by 2015, reaching a projected $21.3bn. Customer relationship management (CRM) continues to be the largest market for SaaS. SaaS revenue within the CRM market was forecast to reach $3.8bn in 2011, up from $3.2bn in 2010.[7] The term "software as a service" (SaaS) is considered to be part of the nomenclature of cloud computing, along with infrastructure as a service (IaaS), platform as a service (PaaS), desktop as a service (DaaS), backend as a service (BaaS), and information technology management as a service (ITMaaS).

History
Centralized hosting of business applications dates back to the 1960s. The expansion of the Internet during the 1990s brought about a new class of centralized computing, called application service providers (ASPs).

Pricing

Velocity (software development)
Velocity is a capacity planning tool sometimes used in agile software development. Velocity tracking is the act of measuring said velocity. The velocity is calculated by counting the number of units of work completed in a certain interval, the length of which is determined at the start of the project.[1] The main idea behind velocity is to help teams estimate how much work they can complete in a given time period, based on how quickly similar work was previously completed.[2]

The following terminology is used in velocity tracking.
Unit of work: the unit chosen by the team to measure velocity.
Interval: the duration of each iteration in the software development process for which the velocity is measured.

To calculate velocity, a team first has to determine how many units of work each task is worth and the length of each interval.
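As a rough worked example of that calculation, the sketch below averages the story points completed in past iterations to forecast capacity for the next one; the numbers and the velocity function are invented for this illustration.

    # Toy velocity calculation: average the units of work (story points here)
    # completed per interval and use it to plan the next interval.
    # The numbers and names are invented for this example.
    def velocity(points_per_iteration):
        return sum(points_per_iteration) / len(points_per_iteration)

    past_iterations = [18, 22, 20]        # points completed in each two-week sprint
    forecast = velocity(past_iterations)  # 20.0 points per sprint on average
    print(f"Plan roughly {forecast:.0f} points for the next iteration")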

Test-driven development
Software design using test cases. Test-driven development (TDD) is a software development process relying on software requirements being converted to test cases before software is fully developed, and tracking all software development by repeatedly testing the software against all test cases. This is as opposed to software being developed first and test cases created later. Software engineer Kent Beck, who is credited with having developed or "rediscovered"[1] the technique, stated in 2003 that TDD encourages simple designs and inspires confidence.[2] Test-driven development is related to the test-first programming concepts of extreme programming, begun in 1999,[3] but more recently has created more general interest in its own right.[4] Programmers also apply the concept to improving and debugging legacy code developed with older techniques.[5]

Test-driven development cycle
The following sequence is based on the book Test-Driven Development by Example:[2]
1. Add a test for the new behaviour.
2. Run all tests; the new test should fail.
3. Write the simplest code that makes the new test pass.
4. Run all tests again; they should now all pass.
5. Refactor the code, re-running the tests after each change.
Repeat
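As a minimal illustration of one pass through that cycle using Python's unittest framework: the test below is written first and fails until add() is implemented. The function and test names are invented for this example, and in practice the test and the production code would live in separate modules.

    # Minimal TDD-style pass: the test comes first and drives the code.
    # Names are invented for illustration.
    import unittest

    def add(a, b):
        # Simplest implementation that makes the test below pass (step 3);
        # before this function existed, running the test failed (step 2).
        return a + b

    class TestAdd(unittest.TestCase):
        def test_adds_two_numbers(self):
            self.assertEqual(add(2, 3), 5)

    if __name__ == "__main__":
        unittest.main()  # step 4: run all tests; step 5 would be to refactor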

Extreme programming
Planning and feedback loops in extreme programming. Extreme programming (XP) is a software development methodology which is intended to improve software quality and responsiveness to changing customer requirements. As a type of agile software development,[1][2][3] it advocates frequent "releases" in short development cycles, which is intended to improve productivity and introduce checkpoints at which new customer requirements can be adopted. Critics have noted several potential drawbacks,[5] including problems with unstable requirements, no documented compromises of user conflicts, and a lack of an overall design specification or document.

History
Although extreme programming itself is relatively new, many of its practices have been around for some time; the methodology, after all, takes "best practices" to extreme levels.

Origins
"The first time I was asked to lead a team, I asked them to do a little bit of the things I thought were sensible, like testing and reviews."

Agile Development Practices: How Anyone Can Boost Productivity With Scrum
Ever heard of Scrum… or even agile development practices, for that matter? If you're a content marketing professional like me, a doctor or a firefighter, a teacher or an accountant, the answer is probably no. That's because scrum isn't a word you're going to hear at cocktail parties (not fun ones, anyway) unless you're a software engineer or a rugby player. And, while I wouldn't suggest dropping the scrum-bomb in casual conversation, this is one five-letter word you should not only add to your vocabulary, but also incorporate into your work. Very simply put, scrum is a series of agile development practices that software engineers use to help them work more efficiently. That's why this week I'm interrupting my kick-ass content series (see my how-to posts on writing case studies, reports, business blogs, and press releases) to share some of the key take-aways that I gleaned from that training.

Plan and Prioritize
Scrum is all about organization.

Performance Appraisals in Agile
Most of us do the exercise of rating team members every year, even though we know that software is built by teams, not individuals. Moreover, each individual needs to actively collaborate to produce quality software. This article proposes an 11-point process to assess individuals in an agile way: Determine goals for every individual.

Our Development Process: 50 Months of Evolution
Michael Dubakov, TargetProcess Founder, May 2012

Context
We create TargetProcess, a quite large web application. Our development team is co-located.

Observations
Single Team → Mini-teams. Technology.

Process
Orange background: practice change caused significant problems. Green background: practice change led to nice improvements. Gray background: no changes in practice.
Iterations → Kanban. Points → No Estimates. Time tracking → No time tracking. Release Planning → None → Roadmaps. User Stories split. Definition of Done. Daily meeting. Meetings. User Experience. Craftsmanship. Here are the company focus changes by year:

Development practices
Single branch → Feature-branches → Single branch. Pair Programming. TDD → BDD.

Some stats
5200 unit tests
2500 functional tests (1700 Selenium and 800 JavaScript tests)
All tests are run on 38 virtual machines
Short cycle build time (without package) is 40 min
Full cycle build time (with package) is 1 hour
8400 Git commits in 2011

Wrap-up
Questions?
