Moments of Truth is a journey-mapping term, and such moments can also occur outside the digital experience: for example, opening the front door of a shop and walking in for the very first time.
According to Forrester Research, maintaining accurate and complete JavaScript tagging on your website is critical for ensuring the efficacy of marketing, analytics and personalization efforts.
A tag management system (TMS) allows you to manage and maintain tags from within a single application, enforcing workflow and process and bringing several distinct benefits. The main vendors are summarized below:
| Vendor | Note | M&A |
| --- | --- | --- |
| Ensighten | Ensighten provides enterprise tag management solutions that enable businesses to manage their websites more effectively. | Anametrix and TagMan |
| Tealium | Tealium is a provider of tag management solutions for enterprise websites, covering analytics, advertising and affiliate tags. | |
| Signal | Signal is a cross-channel marketing company that provides cloud-based marketing technology for brands and digital agencies. | BrightTag and SiteTagger |
| Tag Commander | Tag Commander is a tag management system built around its Universal Tag Container technology and an ergonomic interface. | |
| Relay42 | Relay42 is known for its cutting-edge technology, ease of use, flexibility and integration capabilities. | |
| Qubit | Qubit provides a product suite that collects and processes large data sets to identify and execute the biggest levers for improving online profitability, through machine learning, statistical analysis and high-performance computing. | |
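Conceptually, a TMS maps firing rules to tags so that all tagging logic lives in one place. As a minimal sketch of that idea (all names here are hypothetical, and Python stands in for the JavaScript container logic a real TMS would ship):

```python
# Minimal, hypothetical sketch of TMS-style tag firing: each tag is
# configured centrally with a rule, and fires only on matching pages.

def fire_tags(page, tags):
    """Return the names of the tags whose firing rule matches the page."""
    return [tag["name"] for tag in tags if tag["rule"](page)]

tags = [
    {"name": "analytics", "rule": lambda p: True},  # fires on every page
    {"name": "conversion", "rule": lambda p: p["path"] == "/checkout/done"},
    {"name": "retargeting", "rule": lambda p: p["campaign"] is not None},
]

page = {"path": "/checkout/done", "campaign": None}
print(fire_tags(page, tags))  # ['analytics', 'conversion']
```

Changing which tags fire, and where, then becomes a configuration change in one application rather than an edit to every page template.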
The most prominent software development methods are the Agile and Waterfall methodologies.
A project follows the Waterfall methodology when its eight phases (conception, initiation, analysis, design, construction, testing, implementation, and maintenance) are completed strictly in sequence, with each phase finished before the next begins.
Pros
1. You know what to expect at the end of the project, in terms of sizing, costs and timeline.
2. Minimal impact in the event of high turnover thanks to the detailed documentation.
Cons
1. No changes are allowed once a phase is completed, even if the initial requirements were faulty.
2. Testing is only done at the end, so bugs are discovered late and code may need to be rewritten from scratch.
3. The plan does not take into account clients' evolving needs.
Agile is a framework for developing and sustaining complex products that follows an incremental rather than a sequential design process approach.
Development work focuses on launching a Minimum Viable Product (MVP) – from an end-user point of view – and then enhancing the MVP incrementally by working on flexible modules. The work on these modules is done in weekly or monthly iterations, and at the end of each iteration, project priorities are evaluated and tests are run. These iterations allow bugs to be discovered, and customer feedback to be incorporated into the design, before the next sprint is run.
Pros
1. It allows for changes to be made after the initial planning, but prior to the commencement of each sprint/iteration.
2. It allows the user’s feedback to be incorporated in the process by modifying the features in the backlog accordingly.
3. The testing is performed at the end of each iteration ensuring bugs are captured and fixed within the development cycle.
Cons
1. You don’t know what to expect at the end of the project, in terms of sizing, costs and timeline.
Scrum and Kanban in software development are specific forms of the Agile methodology.
Scrum is a framework that leverages team commitment as the change agent, whilst Kanban is a less structured model that introduces change through incremental improvements.
The first step in Agile estimation is writing the user stories: short descriptions of a piece of functionality.
A user story is usually structured as: “As a <role>, I want <goal/desire> so that <benefit>“. It defines the main agent (the end user, the business user, …), their need and the benefit of producing that feature. Each story is first estimated by the Agile team in work-day units or story points.
Estimating is not an easy task, and teams adopt a few conventions to keep the process manageable.
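One common convention is to restrict estimates to a Fibonacci-like scale, so that the team does not argue over false precision. A hypothetical sketch of planning-poker style estimation (the scale values and function names are illustrative, not a prescribed method):

```python
# Hypothetical sketch: collect each team member's vote for a story,
# average the votes, and snap the result to a Fibonacci-like scale.
SCALE = [1, 2, 3, 5, 8, 13, 21]

def to_scale(points):
    """Snap a raw estimate to the nearest value on the story-point scale."""
    return min(SCALE, key=lambda s: abs(s - points))

def team_estimate(votes):
    """Average the team's votes, then snap the result to the scale."""
    return to_scale(sum(votes) / len(votes))

print(team_estimate([3, 5, 8]))  # 5  (average 5.33 snaps to 5)
```

Large snapped values (13 and above) are usually a signal that the story should be split into smaller ones before it enters a sprint.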
Let’s start with two SEO myths. First, meta keywords are dead and should no longer be used; in fact, a site heavily optimized with meta keywords has been counterproductive for ranking since the latest Google Penguin release. There are even rumors that Bing uses them to help determine whether a site is spam or genuine.
On the contrary, meta descriptions are crucial: they appear in the search results as the snippet description of the website/page. Each page should have its own unique meta description: search engines consider duplicate title tags or meta descriptions bad form and can actually penalize the page.
If you serve the same content on separate mobile and desktop URLs, duplicate content displayed across devices is a serious issue. Google recommends adding a canonical tag to every mobile-optimized page, and an alternate tag to the corresponding desktop page. In this way, the search engine can easily determine and show the device-optimized page to the user. You can also use canonical tags to distinguish your content from a partner’s.
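The canonical/alternate pairing can be sketched as simple tag generation. A hypothetical helper (the URLs and function name are illustrative; the media query value follows Google's documented example for separate mobile URLs):

```python
# Hypothetical sketch: generate the paired link tags for a desktop page
# and its mobile-optimized counterpart. The desktop page declares the
# mobile page as an alternate; the mobile page declares the desktop
# page as canonical.

def link_tags(desktop_url, mobile_url):
    desktop_tag = ('<link rel="alternate" '
                   'media="only screen and (max-width: 640px)" '
                   f'href="{mobile_url}">')
    mobile_tag = f'<link rel="canonical" href="{desktop_url}">'
    return desktop_tag, mobile_tag

desktop_tag, mobile_tag = link_tags("https://example.com/page",
                                    "https://m.example.com/page")
print(mobile_tag)  # <link rel="canonical" href="https://example.com/page">
```

The desktop tag goes in the desktop page's `<head>`, and the mobile tag in the mobile page's `<head>`, so the two always reference each other.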
Recently, rich snippets, which let a site embellish its search results with add-ons such as customer ratings, have gained popularity. Although it is still unclear how they might affect your website’s ranking, recent studies show that users are more likely to engage with sites that have richer snippets.
Last but not least, you can claim Google authorship by linking your site with your Google+ account.
| | GOOGLE | BING |
| --- | --- | --- |
| Meta Description | 155 characters | 165 characters |
| Meta Title | 62 characters | 57 characters |
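These limits can be enforced before publishing. A small sketch, assuming the limits in the table above (the `LIMITS` dictionary and `fits` helper are hypothetical names):

```python
# Hypothetical sketch: validate meta title/description lengths against
# each search engine's display limit before a page goes live.
LIMITS = {
    "google": {"meta_description": 155, "meta_title": 62},
    "bing": {"meta_description": 165, "meta_title": 57},
}

def fits(engine, field, text):
    """True if the text fits within the engine's display limit."""
    return len(text) <= LIMITS[engine][field]

title = "Tag Management Systems: A Vendor Overview"
print(fits("google", "meta_title", title))  # True (41 characters)
```

A check like this is easy to wire into a CMS publishing workflow so over-long titles are flagged rather than silently truncated in the results page.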
| | NATIVE | HTML5 | HYBRID |
| --- | --- | --- | --- |
| **App Features** | | | |
| Design | For specific devices | Not device-optimized | Good for the device it is running on |
| Graphics | Native APIs | HTML, Canvas, SVG | HTML, Canvas, SVG |
| Performance | Fast, reliable, responsive | Slower | Slower |
| Native look and feel | Native | Emulated | Emulated |
| Distribution | App store distribution | Web | App store distribution |
| Experience | Consistent with the platform look and feel | Browser-based user experience | Browser UI elements might not be aligned with native UI elements |
| **Device Built-in Components** | | | |
| Camera | Yes | No | Yes |
| Notifications | Yes (push notifications) | No | Yes |
| Contacts, calendars | Yes | No | Yes |
| Offline storage | Secure file storage | Shared SQL | Secure file storage, shared SQL |
| Geolocation | Yes | No | Yes |
| **Gestures** | | | |
| Swipe | Yes | Yes | Yes |
| Pinch, spread | Yes | No | Yes |
| Connectivity | Online and offline | Mostly online | Online and offline |
| Development skills | C, Java, .NET | HTML5, CSS, JavaScript | HTML5, CSS, JavaScript |
| **Go to Market** | | | |
| Launch | Slow time to market | Fast time to market | Medium time to market |
| Update | Medium time to market | Instant update | Medium time to market |
This post provides a concise collection of resources on specific application security topics.
OWASP Mobile Top 10 2016 Proposed List
| Category | Description |
| --- | --- |
| M1 – Improper Platform Usage | This category covers misuse of a platform feature or failure to use platform security controls. It might include Android intents, platform permissions, misuse of TouchID, the Keychain, or some other security control that is part of the mobile operating system. |
| M2 – Insecure Data Storage | This new category combines M2 and M4 from the Mobile Top 10 2014. It covers insecure data storage and unintended data leakage. |
| M3 – Insecure Communication | This covers poor handshaking, incorrect SSL versions, weak negotiation, clear-text communication of sensitive assets, etc. |
| M4 – Insecure Authentication | This category captures notions of authenticating the end user or bad session management. This can include: (1) failing to identify the user at all when that should be required; (2) failure to maintain the user’s identity when it is required; (3) weaknesses in session management. |
| M5 – Insufficient Cryptography | The code applies cryptography to a sensitive information asset, but the cryptography is insufficient in some way. |
| M6 – Insecure Authorization | This category captures any failures in authorization (e.g. authorization decisions on the client side, forced browsing, etc.). It is distinct from authentication issues (e.g. device enrollment, user identification, etc.). If the app does not authenticate the user at all in a situation where it should (e.g. granting anonymous access to a resource or service when authenticated and authorized access is required), then that is an authentication failure, not an authorization failure. |
| M7 – Client Code Quality | This was “security decisions via untrusted inputs”, one of OWASP’s lesser-used categories. It is the catch-all for code-level implementation problems in the mobile client. |
| M8 – Code Tampering | This category covers binary patching, local resource modification, method hooking, method swizzling, and dynamic memory modification. |
| M9 – Reverse Engineering | This category includes analysis of the final code binary to determine its source code, libraries, algorithms, and other assets. |
| M10 – Extraneous Functionality | Often, developers include hidden backdoor functionality or other internal development security controls that are not intended to be released into a production environment. |
ENISA – European Union Agency for Network and Information Security – IoT and smart infrastructures.
At the end of each usability test you will have collected several types of data; report the findings in a spreadsheet like the example below, where each response is ranked based on the calculated KPIs: (1) success rate; (2) task time; (3) error rate; and (4) satisfaction.
For example, you can use the following scale to report your findings, assigning a value of: (1) 0 for each failed task; (2) 1 for each task successfully completed within 1 to 2 minutes; (3) 2 for each task successfully completed in under 1 minute.
ANALYZING THE USABILITY TESTING RESULTS

| | Segment 1 | Segment 2 | Segment n | Avg. Response by Question |
| --- | --- | --- | --- | --- |
| Question 1 | | | | Avg. Q1 |
| Question 2 | | | | Avg. Q2 |
| Question n | | | | Avg. Qn |
| Average Response by Segment | Avg. Seg. 1 | Avg. Seg. 2 | Avg. Seg. n | |
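The row and column averages in a matrix like this are straightforward to compute. A small sketch with made-up scores on the 0–2 scale described above (the variable names and data are illustrative):

```python
# Hypothetical sketch: compute per-question and per-segment averages
# for a usability-results matrix (questions x segments, scores 0-2).
scores = {
    "Question 1": [2, 1, 0],  # one score per segment
    "Question 2": [1, 1, 2],
    "Question 3": [0, 2, 2],
}

# Average response by question (row averages).
avg_by_question = {q: sum(v) / len(v) for q, v in scores.items()}

# Average response by segment (column averages).
n_segments = 3
avg_by_segment = [
    sum(scores[q][i] for q in scores) / len(scores)
    for i in range(n_segments)
]

print(avg_by_question["Question 1"])  # 1.0
print(avg_by_segment[0])              # 1.0
```

Low row averages point at problematic tasks; low column averages point at segments that struggle across the board, which is useful when prioritizing fixes.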
As you review the data, it is extremely important to highlight and prioritize every detected usability issue. To help differentiate, you should rate the severity of each problem on the scale defined below:
REPORTING SEVERITY LEVELS OF PROBLEMS

| Severity | Description |
| --- | --- |
| Critical | Severe business and usability impact. This is a showstopper: the user is unable to complete the task. |
| High | Significant business and usability impact, but not necessarily a showstopper: some users will be frustrated if the issue is not fixed. It is not a showstopper if the affected user segment has little or no value for the business. |
| Medium | Potential business and usability impact; more impact analysis is needed. Not a showstopper. |
| Low | No immediate business and usability impact. Users are annoyed, but this does not keep them from completing the task. |
Your report should include your recommendations and the UX, business and technological impact. The minimum required fields are shown below:
| Recommendations | UX Impact | Business Impact | Development Impact | Final Recommendations |
| --- | --- | --- | --- | --- |
| Provide a user acknowledgement after the form is submitted | H | L | Easy to fix | Fix this issue in the current release |
| Redesign the Event section | L | H | Time-consuming task, not in the sprint budget | Fix this issue in the next available release |
A selection of templates to be used for your reporting can be found at:
www.usability.gov/how-to-and-tools/resources/templates.html